Tracing API
The tracing API consists of three functions that set up observability and tag your LLM calls.
init()
Configures OpenTelemetry and installs auto-instrumentation for all detected LLM libraries. Call once at application startup, before any LLM calls.
```python
import promptic_sdk

promptic_sdk.init(
    api_key=None,          # str | None — defaults to PROMPTIC_API_KEY env var
    endpoint=None,         # str | None — defaults to https://promptic.eu
    auto_instrument=True,  # bool — auto-detect and instrument LLM libraries
    service_name=None,     # str | None — sets the OpenTelemetry service name
)
```
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str \| None` | `PROMPTIC_API_KEY` | API key for authentication. Falls back to the env var, then the config file. |
| `endpoint` | `str \| None` | `https://promptic.eu` | Promptic platform URL. Falls back to the `PROMPTIC_ENDPOINT` env var. |
| `auto_instrument` | `bool` | `True` | When `True`, automatically detects and instruments installed LLM libraries. |
| `service_name` | `str \| None` | `None` | Optional service name for the OpenTelemetry `TracerProvider`. |
Raises
- `ValueError` — if no API key is found via argument, env var, or config file.
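The three-step lookup (explicit argument, then env var, then config file) can be sketched as follows. This is a hypothetical helper illustrating the documented fallback order, not the SDK's actual implementation; `resolve_api_key` and the `config` dict are made up for the example:

```python
import os

def resolve_api_key(api_key=None, config=None):
    """Resolve the API key: explicit argument, then env var, then config file."""
    # 1. Explicit argument wins; 2. fall back to the environment variable.
    key = api_key or os.environ.get("PROMPTIC_API_KEY")
    # 3. Finally, try a value parsed from a config file (hypothetical dict here).
    if key is None and config:
        key = config.get("api_key")
    if key is None:
        raise ValueError("No API key found via argument, env var, or config file")
    return key
```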
What it does
- Creates an OpenTelemetry `TracerProvider` with a `BatchSpanProcessor`
- Configures OTLP/HTTP export to the Promptic platform
- If `auto_instrument=True`, detects and instruments each of: OpenAI, Anthropic, Google Generative AI, Vertex AI, Bedrock, Mistral, Cohere, LangChain (+ LangGraph, `create_agent`, deepagents), OpenAI Agents SDK, Claude Agent SDK. All emit official OpenTelemetry GenAI semantic conventions (`gen_ai.*`).
- Packages that are not installed are silently skipped; packages that are installed but fail to initialize are logged at `WARNING` level.
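The skip-vs-warn behavior from the last bullet can be sketched like this. `instrument_if_present` is a hypothetical helper, not the SDK's real code; it only illustrates the distinction between "not installed" (silent) and "installed but failed" (logged at `WARNING`):

```python
import importlib
import logging

logger = logging.getLogger("promptic_sdk")

def instrument_if_present(module_name, instrument_fn):
    """Instrument one library if it is installed.

    Not installed  -> return False silently.
    Install present but instrumentation raises -> log WARNING, return False.
    """
    try:
        importlib.import_module(module_name)
    except ImportError:
        return False  # library not installed — silently skipped
    try:
        instrument_fn()  # library-specific instrumentation hook
        return True
    except Exception as exc:
        logger.warning("Failed to instrument %s: %s", module_name, exc)
        return False
```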
ai_component()
Context manager that tags all LLM spans within its scope with an AI Component name. Optionally tags with a dataset and run for evaluation.
```python
with promptic_sdk.ai_component(
    name,          # str — component name (e.g., "email-classifier")
    *,
    dataset=None,  # str | None — dataset name for evaluation grouping
    run=None,      # str | None — run name within the dataset
):
    ...
```
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | AI Component name. The component is auto-created in Promptic on first trace. |
| `dataset` | `str \| None` | `None` | Tags traces with a dataset. Auto-creates the dataset on the platform. |
| `run` | `str \| None` | `None` | Tags traces with a run within the dataset. |
Example
```python
# Basic — just tag with component
with promptic_sdk.ai_component("my-classifier"):
    response = client.chat.completions.create(...)

# With dataset and run — for evaluation
with promptic_sdk.ai_component("my-agent", dataset="regression", run="v2"):
    for query in queries:
        agent.run(query)
```
How it works
Sets `contextvars.ContextVar` values that are read by an OpenTelemetry `SpanProcessor`. Every span created within the context automatically gets `promptic.ai_component`, `promptic.dataset`, and `promptic.run` attributes.
dataset()
Context manager that tags spans with a dataset name. Can be used inside an ai_component block for more granular control.
```python
with promptic_sdk.dataset(
    name,  # str — dataset name
):
    ...
```
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Dataset name. Auto-creates the dataset on the platform. |
Example
```python
with promptic_sdk.ai_component("support-agent"):
    # Normal traffic — no dataset
    agent.run(user_query)

    # Tagged for evaluation
    with promptic_sdk.dataset("nightly-eval"):
        for query in test_queries:
            agent.run(query)
```