CLI
The promptic CLI mirrors the full Python API with human-readable table output. All list commands support --json for machine-readable output.
Authentication
# Interactive login (opens browser)
promptic login
# Save API key directly (CI/CD)
promptic configure --api-key "pk_live_..."
# Log out
promptic logout
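In CI, the key is typically read from a secret rather than typed inline. A minimal sketch, assuming your CI system exposes the secret as an environment variable (PROMPTIC_API_KEY is an illustrative name, not one the CLI reads on its own):
# Assumes $PROMPTIC_API_KEY is injected by the CI system (illustrative name)
promptic configure --api-key "$PROMPTIC_API_KEY"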
Workspace
promptic workspace info # Show current workspace
promptic workspace list # List all workspaces
promptic workspace select # Switch workspace (interactive)
Components
promptic components list
promptic components create <name> [--description "..."]
promptic components get <component-id>
promptic components delete <component-id>
Experiments
promptic experiments list [--component <id>] [--status pending|running|completed|failed]
promptic experiments create <component-id> <target-model> [options]
promptic experiments get <experiment-id>
promptic experiments update <experiment-id> [--name "..." --description "..."]
promptic experiments delete <experiment-id>
promptic experiments start <experiment-id>
# Clone an experiment (observations + evaluators) under the same component.
# By default starts from the source's initial prompt; use --initial-prompt to override.
promptic experiments duplicate <experiment-id> [--initial-prompt "..."] [--start]
# Clone an experiment and seed the new initial prompt from the source's
# best optimized iteration — useful to chain optimization runs.
promptic experiments continue <experiment-id> [--start]
Create options
| Flag | Description | Default |
|---|---|---|
| --task-type | classification, textGeneration, structuredOutput | classification |
| --provider | openai, openrouter, custom, google | openai |
| --optimizer | promptic, prompticV2, miproV2, bootstrapFewShot, gepa | prompticV2 |
| --initial-prompt | Starting prompt text | — |
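Putting the options together, a sketch of creating and starting an experiment (cmp_123 and exp_123 are placeholder IDs, and gpt-4o stands in for whatever target model you use):
# Placeholder IDs throughout; substitute your own component and experiment IDs
promptic experiments create cmp_123 gpt-4o \
  --task-type classification \
  --optimizer prompticV2 \
  --initial-prompt "Classify the message as spam or ham."
promptic experiments start exp_123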
Observations
promptic observations list <experiment-id>
promptic observations add <experiment-id> --variables '{"message": "..."}' --expected "..."
promptic observations delete <experiment-id> <observation-id>
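A concrete add call, with illustrative values (exp_123 is a placeholder, and the keys inside --variables must match the variables your prompt expects):
promptic observations add exp_123 \
  --variables '{"message": "Congratulations, you won a free cruise!"}' \
  --expected "spam"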
Evaluators
promptic evaluators list <experiment-id>
promptic evaluators add <experiment-id> --name "..." --type f1|referenceJudge|comparisonJudge|generalJudge|similarity|structuredOutput [--weight 1.0]
promptic evaluators delete <experiment-id> <evaluator-id>
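As a sketch, attaching two weighted evaluators to an experiment (the names, types, and weights below are illustrative; the weighting scheme is up to you):
# Illustrative names and weights; exp_123 is a placeholder experiment ID
promptic evaluators add exp_123 --name "token-overlap" --type f1 --weight 0.7
promptic evaluators add exp_123 --name "quality-judge" --type generalJudge --weight 0.3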
Iterations
promptic iterations list <experiment-id>
promptic iterations get <experiment-id> <iteration-id>
promptic iterations best <experiment-id>
Deployments
promptic deployments status <component-id>
promptic deployments deploy <component-id> --experiment <experiment-id>
promptic deployments prompt <component-id>
promptic deployments undeploy <component-id>
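A typical promotion flow, sketched with placeholder IDs: inspect the best iteration, deploy that experiment to its component, then confirm what is live:
promptic iterations best exp_123 # inspect the top-scoring iteration
promptic deployments deploy cmp_123 --experiment exp_123
promptic deployments prompt cmp_123 # verify the deployed prompt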
Traces
promptic traces list [--status ok|error] [--limit 50] [--offset 0]
promptic traces get <trace-id>
promptic traces stats [--days-back 30]
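For example, a quick error-triage pass (flag values are illustrative):
promptic traces list --status error --limit 10 # most recent failures
promptic traces stats --days-back 7 # summary stats for the past week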
Datasets
promptic datasets create <component-id> <name> [--description "..."]
promptic datasets list <component-id>
promptic datasets get <component-id> <dataset-id>
promptic datasets delete <component-id> <dataset-id>
Runs
promptic runs create <component-id> <dataset-id> [--name "..."]
promptic runs list <component-id>
promptic runs get <component-id> <run-id>
promptic runs delete <component-id> <run-id>
Annotations
promptic annotations create <component-id> <run-id> <trace-db-id> [--rating positive|negative] [--comment "..."]
promptic annotations list <component-id> <run-id>
promptic annotations delete <component-id> <run-id> <annotation-id>
Evaluations
promptic evaluations run <component-id> --dataset <dataset-id> [--run <run-id>] [--name "..."]
promptic evaluations list <component-id>
promptic evaluations get <component-id> <evaluation-id>
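The datasets, runs, annotations, and evaluations commands chain together. A sketch with placeholder IDs (ds_456, run_789, and trace_001 stand in for IDs returned by the earlier commands):
# Placeholder IDs throughout; each step uses IDs produced by the one before it
promptic datasets create cmp_123 "prod-sample" --description "Sampled production traces"
promptic runs create cmp_123 ds_456 --name "weekly-review"
promptic annotations create cmp_123 run_789 trace_001 --rating negative --comment "Wrong label"
promptic evaluations run cmp_123 --dataset ds_456 --run run_789 --name "weekly-eval"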
Global options
All list commands support --json for JSON output:
promptic components list --json
promptic traces list --json | jq '.[] | .traceId'