Commit ea007b8

1 parent 226ac97 commit ea007b8

24 files changed (+2282 −2564 lines)

autotune/introduction.mdx

Lines changed: 25 additions & 18 deletions
@@ -1,36 +1,43 @@
 ---
 title: "Introduction"
-description: "Run evaluations on models and prompts to find the best variants for your agents"
+description: "Version, track, and optimize every prompt your agent uses"
 ---
 
-Prompt optimization is a different approach to the traditional evals experience. Instead of setting up complex eval pipelines, we simply ingest your production traces and let you optimize your prompts based on your feedback.
+Prompts are the instructions that drive your agent's behavior. Small changes in wording can dramatically affect output quality, but without tracking, you have no way to know which version works best -- or even which version is running in production.
+
+ZeroEval Prompts gives you version control for prompts with a single function call. Every change is tracked, every completion is linked to the exact prompt version that produced it, and you can deploy optimized versions without touching code.
+
+## Why track prompts
+
+- **Version history** -- every prompt change creates a new version you can compare and roll back to
+- **Production visibility** -- see exactly which prompt version is running, how often it's called, and what it produces
+- **Feedback loop** -- attach thumbs-up/down feedback to completions, then use it to [optimize prompts](/autotune/prompts/prompts) and [evaluate models](/autotune/prompts/models)
+- **One-click deployments** -- push a winning prompt or model to production without redeploying your app
 
 ## How it works
 
 <Steps>
-  <Step title="Instrument your code">
-    Replace hardcoded prompts with `ze.prompt()` calls in Python or `ze.prompt({...})` in TypeScript
+  <Step title="Replace hardcoded prompts">
+    Swap string literals for `ze.prompt()` calls. Your existing prompt text becomes the fallback content.
   </Step>
-  <Step title="Every change creates a version">
-    Each time you modify your prompt content, a new version is automatically created and tracked
+  <Step title="Versions are created automatically">
+    Each unique prompt string creates a tracked version. Changes in your code produce new versions without any extra work.
  </Step>
-  <Step title="Collect performance data">
-    ZeroEval automatically tracks all LLM interactions and their outcomes
+  <Step title="Completions are linked to versions">
+    When your LLM integration fires, ZeroEval links each completion to the exact prompt version and model that produced it.
   </Step>
-  <Step title="Tune and evaluate">
-    Use the UI to run experiments, vote on outputs, and identify the best prompt/model combinations
-  </Step>
-  <Step title="One-click model deployments">
-    Winning configurations are automatically deployed to your application without code changes
+  <Step title="Optimize from production data">
+    Review completions, submit feedback, and generate improved prompt variants -- all from real traffic.
   </Step>
 </Steps>
 
+## Get started
+
 <CardGroup cols={2}>
-  <Card title="Setup Guide" icon="wrench" href="/autotune/setup">
-    Learn how to integrate ze.prompt() into your Python or TypeScript codebase
+  <Card title="Python" icon="python" href="/autotune/sdks/python">
+    `ze.prompt()` and `ze.get_prompt()` for Python applications
   </Card>
-  <Card title="Prompts Guide" icon="sliders" href="/autotune/prompts">
-    Run experiments and deploy winning combinations
+  <Card title="TypeScript" icon="js" href="/autotune/sdks/typescript">
+    `ze.prompt()` for TypeScript and JavaScript applications
   </Card>
 </CardGroup>
-
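The versioning model the updated steps describe (each unique prompt string creates a tracked version) is content-addressed versioning. A toy in-memory sketch of the idea, purely illustrative and not ZeroEval's actual implementation:

```python
import hashlib


class PromptRegistry:
    """Toy content-addressed prompt store: one version per unique string."""

    def __init__(self):
        self._by_hash = {}   # content hash -> version number
        self._versions = []  # version number - 1 -> content

    def ensure(self, content: str) -> int:
        """Return the version for `content`, creating one only if unseen."""
        h = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if h not in self._by_hash:
            self._versions.append(content)
            self._by_hash[h] = len(self._versions)  # versions start at 1
        return self._by_hash[h]


reg = PromptRegistry()
v1 = reg.ensure("You are a helpful assistant.")
v2 = reg.ensure("You are a helpful support agent.")   # new string -> new version
v1_again = reg.ensure("You are a helpful assistant.")  # unchanged -> same version
```

Because versions are keyed by a hash of the content, re-running unchanged code never creates duplicate versions, which is what makes the "no extra work" claim in step two possible.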
autotune/prompts/models.mdx

Lines changed: 0 additions & 10 deletions
This file was deleted.

autotune/reference.mdx

Lines changed: 221 additions & 60 deletions
@@ -1,89 +1,250 @@
 ---
-title: "Reference"
-description: "Parameters and configuration for ze.prompt"
+title: "API Reference"
+description: "REST API for managing prompts, versions, and deployments"
 ---
 
-`ze.prompt` creates or fetches versioned prompts from the Prompt Library and returns decorated content for downstream LLM calls.
+Base URL: `https://api.zeroeval.com`
 
-<Info>
-**TypeScript differences**: In TypeScript, `ze.prompt()` is an async function that returns `Promise<string>`. Parameters use camelCase and are passed as an options object: `ze.prompt({ name: "...", content: "..." })`.
-</Info>
+All requests require a Bearer token:
 
-## Parameters
+```
+Authorization: Bearer YOUR_ZEROEVAL_API_KEY
+```
 
-| Python | TypeScript | Type | Required | Default | Description |
-| --- | --- | --- | --- | --- | --- |
-| `name` | `name` | string | yes | | Task name associated with the prompt in the library |
-| `content` | `content` | string | no | `None`/`undefined` | Raw prompt content to ensure/create a version by content |
-| `from_` | `from` | string | no | `None`/`undefined` | Either `"latest"`, `"explicit"`, or a 64-char SHA-256 hash |
-| `variables` | `variables` | dict/object | no | `None`/`undefined` | Template variables to render `{{variable}}` tokens |
+---
 
-Notes:
+## Get Prompt
 
-- In Python, use `from_` (with underscore) as `from` is a reserved keyword. TypeScript uses `from` directly.
-- Exactly one of `content` or `from` must be provided (except when using `from: "explicit"` with `content`).
-- `from="latest"` fetches the latest version bound to the task; otherwise `from` must be a 64-char hex SHA-256 hash.
+```
+GET /v1/prompts/{prompt_slug}
+```
 
-## Behavior
+Fetch the current version of a prompt by its slug.
 
-- **content provided**: Computes a normalized SHA-256 hash, ensures a prompt version exists for `name`, and returns decorated content.
-- **from="latest"**: Fetches the latest version for `name` and returns decorated content.
-- **from=**`<hash>`: Fetches by content hash for `name` and returns decorated content.
+| Query Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `version` | `int` | | Fetch a specific version number |
+| `tag` | `string` | `"latest"` | Tag to fetch (`"production"`, `"latest"`, etc.) |
 
-Decoration adds a compact metadata header used by integrations:
+```bash
+curl https://api.zeroeval.com/v1/prompts/support-bot \
+  -H "Authorization: Bearer $ZEROEVAL_API_KEY"
+```
 
-- `task`, `prompt_slug`, `prompt_version`, `prompt_version_id`, `variables`, and (when created by content) `content_hash`.
+**Response:** 200
+
+```json
+{
+  "id": "a1b2c3d4-...",
+  "prompt_id": "b2c3d4e5-...",
+  "content": "You are a helpful customer support agent.",
+  "content_hash": "e3b0c44298fc...",
+  "version": 3,
+  "model_id": "gpt-4o",
+  "tag": "production",
+  "is_latest": true,
+  "metadata": {},
+  "created_at": "2025-01-15T10:30:00Z"
+}
+```
 
-OpenAI integration: when `prompt_version_id` is present, the SDK will automatically patch the `model` parameter to the model bound to that prompt version.
+### Fetch by tag
 
-## Return Value
+```bash
+curl "https://api.zeroeval.com/v1/prompts/support-bot?tag=production" \
+  -H "Authorization: Bearer $ZEROEVAL_API_KEY"
+```
 
-- **Python**: `str` - Decorated prompt content ready to pass into LLM clients.
-- **TypeScript**: `Promise<string>` - Async function returning decorated prompt content.
+### Fetch by version number
 
-## Errors
+```bash
+curl "https://api.zeroeval.com/v1/prompts/support-bot?version=2" \
+  -H "Authorization: Bearer $ZEROEVAL_API_KEY"
+```
 
-| Python | TypeScript | When |
-| --- | --- | --- |
-| `ValueError` | `Error` | Both `content` and `from` provided (except explicit), or neither; invalid `from` value |
-| `PromptRequestError` | `PromptRequestError` | `from="latest"` but no versions exist for `name` |
-| `PromptNotFoundError` | `PromptNotFoundError` | `from` is a hash that does not exist for `name` |
+---
+
+## Ensure Prompt Version
+
+```
+POST /v1/tasks/{task_name}/prompt/versions/ensure
+```
+
+Create a prompt version if it doesn't already exist (idempotent by content hash). This is what `ze.prompt()` calls under the hood.
+
+**Request body:**
+
+| Field | Type | Required | Description |
+| --- | --- | --- | --- |
+| `content` | `string` | Yes | Prompt content |
+| `content_hash` | `string` | No | SHA-256 hash (computed server-side if omitted) |
+| `model_id` | `string` | No | Model to bind to this version |
+| `metadata` | `object` | No | Additional metadata |
+
+```bash
+curl -X POST https://api.zeroeval.com/v1/tasks/support-bot/prompt/versions/ensure \
+  -H "Authorization: Bearer $ZEROEVAL_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "content": "You are a helpful customer support agent for {{company}}."
+  }'
+```
+
+**Response:** 200
+
+```json
+{
+  "id": "c3d4e5f6-...",
+  "content": "You are a helpful customer support agent for {{company}}.",
+  "content_hash": "a1b2c3d4...",
+  "version": 1,
+  "model_id": null,
+  "created_at": "2025-01-15T10:30:00Z"
+}
+```
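The optional `content_hash` field is a SHA-256 hex digest. If you want to precompute it client-side instead of letting the server fill it in, a digest over the raw UTF-8 content looks like this; note the docs above don't specify any normalization before hashing, so treating the content as raw UTF-8 is an assumption:

```python
import hashlib


def content_hash(content: str) -> str:
    """SHA-256 hex digest of the raw UTF-8 prompt content (assumed scheme)."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()


h = content_hash("You are a helpful customer support agent for {{company}}.")
# h is a 64-character hex string, suitable for the optional content_hash field
```

If the server applies its own normalization (e.g. trimming whitespace), a locally computed hash may differ; when in doubt, omit the field and let the server compute it.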
+
+---
 
-## Examples
+## Get Version by Hash
 
-<CodeGroup>
-```python Python
-import zeroeval as ze
+```
+GET /v1/tasks/{task_name}/prompt/versions/by-hash/{content_hash}
+```
 
-# Create/ensure a version by content
-system = ze.prompt(
-    name="support-triage",
-    content="You are a helpful assistant for {{product}}.",
-    variables={"product": "Acme"},
-)
+Fetch a specific prompt version by its SHA-256 content hash.
 
-# Fetch the latest version for this task
-system = ze.prompt(name="support-triage", from_="latest")
+**Response:** 200 (same schema as ensure)
 
-# Fetch a specific version by content hash
-system = ze.prompt(name="support-triage", from_="c6a7...deadbeef...0123")
+---
+
+## Get Latest Version
+
+```
+GET /v1/tasks/{task_name}/prompt/latest
 ```
-```typescript TypeScript
-import * as ze from 'zeroeval';
 
-// Create/ensure a version by content
-const system = await ze.prompt({
-  name: "support-triage",
-  content: "You are a helpful assistant for {{product}}.",
-  variables: { product: "Acme" },
-});
+Fetch the latest prompt version for a task.
 
-// Fetch the latest version for this task
-const system = await ze.prompt({ name: "support-triage", from: "latest" });
+**Response:** 200 (same schema as ensure)
+
+---
 
-// Fetch a specific version by content hash
-const system = await ze.prompt({ name: "support-triage", from: "c6a7...deadbeef...0123" });
+## Resolve Model for Version
+
+```
+GET /v1/prompt-versions/{version_id}/model
+```
+
+Get the model bound to a specific prompt version. Used by SDK integrations to auto-patch the `model` parameter.
+
+**Response:** 200
+
+```json
+{
+  "model_id": "gpt-4o",
+  "provider": "openai"
+}
 ```
-</CodeGroup>
 
+Returns `null` for `model_id` if no model is bound.
+
+---
+
+## Deploy a Version (Pin Tag)
+
+```
+POST /projects/{project_id}/prompts/{prompt_slug}/tags/{tag}:pin
+```
+
+Pin a tag (e.g. `production`) to a specific version number. This is how you deploy a prompt version to production.
+
+**Request body:**
+
+| Field | Type | Required | Description |
+| --- | --- | --- | --- |
+| `version` | `int` | Yes | Version number to pin |
+
+```bash
+curl -X POST https://api.zeroeval.com/projects/$PROJECT_ID/prompts/support-bot/tags/production:pin \
+  -H "Authorization: Bearer $ZEROEVAL_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"version": 3}'
+```
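The pin endpoint uses a `:pin` suffix on the path rather than a query parameter. A sketch of assembling that request from Python with the stdlib; `build_pin_request` and the `"proj-123"` project id are hypothetical, shown only to make the URL shape concrete:

```python
import json

BASE_URL = "https://api.zeroeval.com"


def build_pin_request(project_id: str, prompt_slug: str, tag: str,
                      version: int) -> tuple[str, str]:
    """Return (url, json_body) for pinning `tag` to `version`."""
    url = f"{BASE_URL}/projects/{project_id}/prompts/{prompt_slug}/tags/{tag}:pin"
    body = json.dumps({"version": version})
    return url, body


url, body = build_pin_request("proj-123", "support-bot", "production", 3)
```

POST the body to the URL with `Content-Type: application/json` and the Bearer token, exactly as in the curl example above.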
+
+---
+
+## List Versions
+
+```
+GET /projects/{project_id}/prompts/{prompt_slug}/versions
+```
+
+List all versions of a prompt.
+
+**Response:** 200
+
+```json
+[
+  {
+    "id": "c3d4e5f6-...",
+    "content": "You are a helpful assistant.",
+    "content_hash": "a1b2c3d4...",
+    "version": 1,
+    "model_id": null,
+    "created_at": "2025-01-10T10:00:00Z"
+  },
+  {
+    "id": "d4e5f6a7-...",
+    "content": "You are a helpful customer support agent.",
+    "content_hash": "b2c3d4e5...",
+    "version": 2,
+    "model_id": "gpt-4o",
+    "created_at": "2025-01-15T10:30:00Z"
+  }
+]
+```
+
+---
+
+## List Tags
+
+```
+GET /projects/{project_id}/prompts/{prompt_slug}/tags
+```
+
+List all tags and which version each is pinned to.
+
+**Response:** 200
+
+```json
+[
+  { "tag": "latest", "version": 2 },
+  { "tag": "production", "version": 1 }
+]
+```
+
+---
+
+## Update Version Model
+
+```
+PATCH /projects/{project_id}/prompts/{prompt_slug}/versions/{version}
+```
+
+Update the model bound to a version.
+
+**Request body:**
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `model_id` | `string` | Model identifier to bind |
+
+---
+
+## Submit Completion Feedback
+
+```
+POST /v1/prompts/{prompt_slug}/completions/{completion_id}/feedback
+```
 
+See [Feedback API Reference](/feedback/api-reference#completion-feedback) for the full specification.
