Promptic

Tracing API

The tracing API consists of three functions: init() sets up observability, while ai_component() and dataset() tag your LLM calls.

init()

Configures OpenTelemetry and installs auto-instrumentation for all detected LLM libraries. Call once at application startup, before any LLM calls.

import promptic_sdk

promptic_sdk.init(
    api_key=None,           # str | None — defaults to PROMPTIC_API_KEY env var
    endpoint=None,          # str | None — defaults to https://promptic.eu
    auto_instrument=True,   # bool — auto-detect and instrument LLM libraries
    service_name=None,      # str | None — sets the OpenTelemetry service name
)

Parameters

  • api_key (str | None, default: PROMPTIC_API_KEY env var) — API key for authentication. Falls back to the env var, then the config file.
  • endpoint (str | None, default: https://promptic.eu) — Promptic platform URL. Falls back to the PROMPTIC_ENDPOINT env var.
  • auto_instrument (bool, default: True) — when True, automatically detects and instruments installed LLM libraries.
  • service_name (str | None, default: None) — optional service name for the OpenTelemetry TracerProvider.

Raises

  • ValueError — if no API key is found via argument, env var, or config file.

What it does

  1. Creates an OpenTelemetry TracerProvider with a BatchSpanProcessor
  2. Configures OTLP/HTTP export to the Promptic platform
  3. If auto_instrument=True, detects and instruments each of: OpenAI, Anthropic, Google Generative AI, Vertex AI, Bedrock, Mistral, Cohere, LangChain (+ LangGraph, create_agent, deepagents), OpenAI Agents SDK, Claude Agent SDK. All emit official OpenTelemetry GenAI semantic conventions (gen_ai.*).
  4. Packages that are not installed are silently skipped; packages that are installed but fail to initialize are logged at WARNING level.
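The skip-or-warn behavior in steps 3–4 can be sketched as follows. This is a minimal illustration, not the SDK's actual internals: the instrumentor registry and its hooks are hypothetical stand-ins.

```python
import importlib.util
import logging

logger = logging.getLogger("promptic_sdk")

# Hypothetical registry: pip package name -> instrumentation hook.
# The real SDK's registry and hook names may differ.
INSTRUMENTORS = {
    "openai": lambda: None,
    "anthropic": lambda: None,
}

def auto_instrument(instrumentors):
    """Instrument every installed library from the registry.

    Missing packages are silently skipped; installed packages whose
    instrumentation fails are logged at WARNING level.
    """
    instrumented = []
    for package, instrument in instrumentors.items():
        if importlib.util.find_spec(package) is None:
            continue  # not installed -> silently skipped
        try:
            instrument()
            instrumented.append(package)
        except Exception as exc:
            logger.warning("Failed to instrument %s: %s", package, exc)
    return instrumented
```

The key design point is that a broken instrumentation never aborts init(): one failing library degrades to a warning while the rest are still traced.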

ai_component()

Context manager that tags all LLM spans within its scope with an AI Component name. Optionally tags with a dataset and run for evaluation.

with promptic_sdk.ai_component(
    name,                   # str — component name (e.g., "email-classifier")
    *,
    dataset=None,           # str | None — dataset name for evaluation grouping
    run=None,               # str | None — run name within the dataset
):
    ...

Parameters

  • name (str, required) — AI Component name. The component is auto-created in Promptic on first trace.
  • dataset (str | None, default: None) — tags traces with a dataset. Auto-creates the dataset on the platform.
  • run (str | None, default: None) — tags traces with a run within the dataset.

Example

# Basic — just tag with component
with promptic_sdk.ai_component("my-classifier"):
    response = client.chat.completions.create(...)

# With dataset and run — for evaluation
with promptic_sdk.ai_component("my-agent", dataset="regression", run="v2"):
    for query in queries:
        agent.run(query)

How it works

Sets contextvars.ContextVar values that are read by an OpenTelemetry SpanProcessor. Every span created within the context automatically gets promptic.ai_component, promptic.dataset, and promptic.run attributes.
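This mechanism can be sketched with plain contextvars and a stand-in span object. The attribute names match the ones above; the processor and span classes are illustrative, not the SDK's real implementation (a real OpenTelemetry SpanProcessor.on_start also receives a parent context).

```python
import contextvars
from contextlib import contextmanager

# Context-local tags, read at span creation time.
_component = contextvars.ContextVar("promptic_component", default=None)
_dataset = contextvars.ContextVar("promptic_dataset", default=None)
_run = contextvars.ContextVar("promptic_run", default=None)

@contextmanager
def ai_component(name, *, dataset=None, run=None):
    tokens = [
        (_component, _component.set(name)),
        (_dataset, _dataset.set(dataset)),
        (_run, _run.set(run)),
    ]
    try:
        yield
    finally:
        for var, token in reversed(tokens):
            var.reset(token)  # restore the enclosing scope's values

class TaggingSpanProcessor:
    """Stamps the context-local tags onto every span as it starts."""
    def on_start(self, span):
        for key, var in (("promptic.ai_component", _component),
                         ("promptic.dataset", _dataset),
                         ("promptic.run", _run)):
            value = var.get()
            if value is not None:
                span.attributes[key] = value

class FakeSpan:
    """Stand-in for an OpenTelemetry span, for illustration only."""
    def __init__(self):
        self.attributes = {}
```

Because ContextVar values are scoped per async task and thread, concurrent requests wrapped in different ai_component blocks tag their spans independently.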


dataset()

Context manager that tags spans with a dataset name. Can be used inside an ai_component block for more granular control.

with promptic_sdk.dataset(
    name,                   # str — dataset name
):
    ...

Parameters

  • name (str, required) — dataset name. Auto-creates the dataset on the platform.

Example

with promptic_sdk.ai_component("support-agent"):
    # Normal traffic — no dataset
    agent.run(user_query)

    # Tagged for evaluation
    with promptic_sdk.dataset("nightly-eval"):
        for query in test_queries:
            agent.run(query)
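The scoping above works because each context manager sets and restores only its own variable, so leaving the dataset block returns spans to untagged normal traffic. A toy sketch of such a dataset() built on a context variable (illustrative, not the SDK source):

```python
import contextvars
from contextlib import contextmanager

_dataset = contextvars.ContextVar("promptic_dataset", default=None)

@contextmanager
def dataset(name):
    token = _dataset.set(name)  # tag spans created in this scope
    try:
        yield
    finally:
        _dataset.reset(token)   # restore the surrounding value

def current_dataset():
    """What a span created right now would be tagged with."""
    return _dataset.get()
```

Nested dataset() blocks therefore shadow the outer dataset only for their duration, and an exception inside the block still restores the outer value.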