Quickstart
This guide gets you from zero to seeing LLM traces in your Promptic dashboard.
1. Install the SDK
```bash
pip install promptic-sdk[openai]
```

Replace [openai] with the extra that matches your stack (e.g. [anthropic], [bedrock], [vertexai], [mistralai], [langchain], [openai-agents], [claude-agent]), or use [all] to install every supported instrumentor. See Installation for the full list.
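If your stack spans more than one provider, pip's standard extras syntax lets you combine the extras listed above in a single install. Quote the argument so shells like zsh don't expand the brackets:

```bash
# Install instrumentors for both OpenAI and Anthropic in one step.
# The quotes keep zsh from treating [..] as a glob pattern.
pip install 'promptic-sdk[openai,anthropic]'
```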
2. Get your API key
Go to your workspace settings and create a new API key. Set it as an environment variable:
```bash
export PROMPTIC_API_KEY="pk_live_..."
```

3. Add two lines to your code
```python
import promptic_sdk

promptic_sdk.init()
```

Call init() once at startup, before any LLM calls. That's it: all LLM calls are now automatically traced.
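Because init() runs once at startup, it is also a natural place to fail fast if the key is missing. A minimal sketch, assuming (per step 2) that the SDK reads PROMPTIC_API_KEY from the environment; the check itself is plain Python, not part of the SDK:

```python
import os

import promptic_sdk

# Fail fast with a clear error instead of silently running untraced.
# Assumes the SDK picks up PROMPTIC_API_KEY from the environment (step 2).
if not os.environ.get("PROMPTIC_API_KEY"):
    raise RuntimeError(
        "PROMPTIC_API_KEY is not set; create a key in your workspace settings."
    )

promptic_sdk.init()
```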
4. Tag traces with a component
Wrap your LLM calls in an ai_component context to organize traces:
```python
import promptic_sdk
from openai import OpenAI

promptic_sdk.init()
client = OpenAI()

with promptic_sdk.ai_component("my-classifier"):
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": "Is this email spam? ..."}],
    )

print(response.choices[0].message.content)
```

5. View your traces
Open the Promptic dashboard. Your traces appear under the component you specified, showing the full request/response, token counts, cost, and latency.
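Components are also how you keep traces from different features separate. A minimal sketch using only the calls shown above; the component names and prompts are illustrative, not required values:

```python
import promptic_sdk
from openai import OpenAI

promptic_sdk.init()
client = OpenAI()

# Each feature gets its own component, so its traces are grouped
# separately in the dashboard.
with promptic_sdk.ai_component("spam-classifier"):
    verdict = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": "Is this email spam? ..."}],
    )

with promptic_sdk.ai_component("reply-drafter"):
    draft = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[{"role": "user", "content": "Draft a short reply to ..."}],
    )
```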
What's next?
Now that tracing is set up, explore what you can do:
- Tracing guide — Datasets, runs, multi-provider setup, and advanced configuration
- Prompt Optimization — Automatically find the best prompt for your task
- Agent Evaluation — Evaluate your AI agents with structured test datasets
- Deployment — Deploy optimized prompts and fetch them at runtime