
Commit 9cf7891 (parent: ea007b8)

19 files changed: +1084 -924 lines

autotune/introduction.mdx (8 additions, 4 deletions)

@@ -18,16 +18,20 @@ ZeroEval Prompts gives you version control for prompts with a single function ca
 <Steps>
 <Step title="Replace hardcoded prompts">
-Swap string literals for `ze.prompt()` calls. Your existing prompt text becomes the fallback content.
+Swap string literals for `ze.prompt()` calls. Your existing prompt text
+becomes the fallback content.
 </Step>
 <Step title="Versions are created automatically">
-Each unique prompt string creates a tracked version. Changes in your code produce new versions without any extra work.
+Each unique prompt string creates a tracked version. Changes in your code
+produce new versions without any extra work.
 </Step>
 <Step title="Completions are linked to versions">
-When your LLM integration fires, ZeroEval links each completion to the exact prompt version and model that produced it.
+When your LLM integration fires, ZeroEval links each completion to the exact
+prompt version and model that produced it.
 </Step>
 <Step title="Optimize from production data">
-Review completions, submit feedback, and generate improved prompt variants -- all from real traffic.
+Review completions, submit feedback, and generate improved prompt variants
+-- all from real traffic.
 </Step>
 </Steps>
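The versioning behaviour these steps describe (each unique prompt string becomes a tracked version; resending identical content is a no-op) can be sketched as a toy content-addressed store. This is illustrative only, not the ZeroEval implementation; `PromptStore` and its method are invented for the sketch:

```python
import hashlib

class PromptStore:
    """Toy in-memory model of content-addressed prompt versioning."""

    def __init__(self):
        self.versions = {}      # SHA-256 content hash -> version number
        self.next_version = 1

    def ensure_version(self, content: str) -> int:
        """Return the version for this content, creating one only if it is new."""
        key = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if key not in self.versions:
            self.versions[key] = self.next_version
            self.next_version += 1
        return self.versions[key]

store = PromptStore()
v1 = store.ensure_version("You are a helpful customer support agent.")
v2 = store.ensure_version("You are a helpful customer support agent.")  # same content, same version
v3 = store.ensure_version("You are a terse support agent.")             # new content, new version
print(v1, v2, v3)  # 1 1 2
```

Because the key is a hash of the content, deploying the same code twice creates no duplicate versions, which is why the workflow requires "no extra work".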

autotune/reference.mdx (15 additions, 15 deletions)

@@ -21,10 +21,10 @@ GET /v1/prompts/{prompt_slug}
 Fetch the current version of a prompt by its slug.
 
-| Query Parameter | Type | Default | Description |
-| --- | --- | --- | --- |
-| `version` | `int` || Fetch a specific version number |
-| `tag` | `string` | `"latest"` | Tag to fetch (`"production"`, `"latest"`, etc.) |
+| Query Parameter | Type     | Default    | Description                                     |
+| --------------- | -------- | ---------- | ----------------------------------------------- |
+| `version`       | `int`    |            | Fetch a specific version number                 |
+| `tag`           | `string` | `"latest"` | Tag to fetch (`"production"`, `"latest"`, etc.) |
 
 ```bash
 curl https://api.zeroeval.com/v1/prompts/support-bot \

@@ -74,12 +74,12 @@ Create a prompt version if it doesn't already exist (idempotent by content hash)
 **Request body:**
 
-| Field | Type | Required | Description |
-| --- | --- | --- | --- |
-| `content` | `string` | Yes | Prompt content |
-| `content_hash` | `string` | No | SHA-256 hash (computed server-side if omitted) |
-| `model_id` | `string` | No | Model to bind to this version |
-| `metadata` | `object` | No | Additional metadata |
+| Field          | Type     | Required | Description                                    |
+| -------------- | -------- | -------- | ---------------------------------------------- |
+| `content`      | `string` | Yes      | Prompt content                                 |
+| `content_hash` | `string` | No       | SHA-256 hash (computed server-side if omitted) |
+| `model_id`     | `string` | No       | Model to bind to this version                  |
+| `metadata`     | `object` | No       | Additional metadata                            |
 
 ```bash
 curl -X POST https://api.zeroeval.com/v1/tasks/support-bot/prompt/versions/ensure \

@@ -160,9 +160,9 @@ Pin a tag (e.g. `production`) to a specific version number. This is how you depl
 **Request body:**
 
-| Field | Type | Required | Description |
-| --- | --- | --- | --- |
-| `version` | `int` | Yes | Version number to pin |
+| Field     | Type  | Required | Description           |
+| --------- | ----- | -------- | --------------------- |
+| `version` | `int` | Yes      | Version number to pin |
 
 ```bash
 curl -X POST https://api.zeroeval.com/projects/$PROJECT_ID/prompts/support-bot/tags/production:pin \

@@ -235,8 +235,8 @@ Update the model bound to a version.
 **Request body:**
 
-| Field | Type | Description |
-| --- | --- | --- |
+| Field      | Type     | Description              |
+| ---------- | -------- | ------------------------ |
 | `model_id` | `string` | Model identifier to bind |
 
 ---
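The `ensure` endpoint documented above is idempotent by content hash, and `content_hash` is computed server-side when omitted; a client can also precompute it. A minimal sketch of building such a request body (the payload shape follows the table above; `build_ensure_body` is a hypothetical helper, and the request is constructed but not sent):

```python
import hashlib
import json

def build_ensure_body(content, model_id=None):
    """Build the JSON body for POST .../prompt/versions/ensure,
    precomputing the optional SHA-256 content hash client-side."""
    body = {
        "content": content,
        # Optional: the server computes this if omitted.
        "content_hash": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
    if model_id is not None:
        body["model_id"] = model_id
    return json.dumps(body)

payload = build_ensure_body("You are a helpful customer support agent.")
print(payload)
```

Precomputing the hash lets the client decide locally whether content changed before making a network call at all.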

autotune/sdks/python.mdx (34 additions, 32 deletions)

@@ -38,7 +38,9 @@ response = client.chat.completions.create(
 That's it. Every call to `ze.prompt()` is tracked, versioned, and linked to the completions it produces. You'll see production traces at [ZeroEval → Prompts](https://app.zeroeval.com).
 
 <Note>
-When you provide `content`, ZeroEval automatically uses the latest optimized version from your dashboard if one exists. The `content` parameter serves as a fallback for when no optimized versions are available yet.
+When you provide `content`, ZeroEval automatically uses the latest optimized
+version from your dashboard if one exists. The `content` parameter serves as a
+fallback for when no optimized versions are available yet.
 </Note>
 
 ## Version Control

@@ -105,34 +107,34 @@ print(prompt.model)
 ### Parameters
 
-| Parameter | Type | Default | Description |
-| --- | --- | --- | --- |
-| `slug` | `str` || Prompt slug (e.g. `"support-triage"`) |
-| `version` | `int` | `None` | Fetch a specific version number |
-| `tag` | `str` | `"latest"` | Tag to fetch (`"production"`, `"latest"`, etc.) |
-| `fallback` | `str` | `None` | Content to use if the prompt is not found |
-| `variables` | `dict` | `None` | Template variables for `{{var}}` tokens |
-| `task_name` | `str` | `None` | Override the task name for tracing |
-| `render` | `bool` | `True` | Whether to render template variables |
-| `missing` | `str` | `"error"` | What to do with missing variables: `"error"` or `"ignore"` |
-| `use_cache` | `bool` | `True` | Use in-memory cache for repeated fetches |
-| `timeout` | `float` | `None` | Request timeout in seconds |
+| Parameter   | Type    | Default    | Description                                                |
+| ----------- | ------- | ---------- | ---------------------------------------------------------- |
+| `slug`      | `str`   |            | Prompt slug (e.g. `"support-triage"`)                      |
+| `version`   | `int`   | `None`     | Fetch a specific version number                            |
+| `tag`       | `str`   | `"latest"` | Tag to fetch (`"production"`, `"latest"`, etc.)            |
+| `fallback`  | `str`   | `None`     | Content to use if the prompt is not found                  |
+| `variables` | `dict`  | `None`     | Template variables for `{{var}}` tokens                    |
+| `task_name` | `str`   | `None`     | Override the task name for tracing                         |
+| `render`    | `bool`  | `True`     | Whether to render template variables                       |
+| `missing`   | `str`   | `"error"`  | What to do with missing variables: `"error"` or `"ignore"` |
+| `use_cache` | `bool`  | `True`     | Use in-memory cache for repeated fetches                   |
+| `timeout`   | `float` | `None`     | Request timeout in seconds                                 |
 
 ### Return value
 
 Returns a `Prompt` object with:
 
-| Field | Type | Description |
-| --- | --- | --- |
-| `content` | `str` | The rendered prompt content |
-| `version` | `int` | Version number |
-| `version_id` | `str` | Version UUID |
-| `tag` | `str` | Tag this version was fetched from |
-| `is_latest` | `bool` | Whether this is the latest version |
-| `model` | `str` | Model bound to this version (if any) |
-| `metadata` | `dict` | Additional metadata |
-| `source` | `str` | `"api"` or `"fallback"` |
-| `content_hash` | `str` | SHA-256 hash of the content |
+| Field          | Type   | Description                          |
+| -------------- | ------ | ------------------------------------ |
+| `content`      | `str`  | The rendered prompt content          |
+| `version`      | `int`  | Version number                       |
+| `version_id`   | `str`  | Version UUID                         |
+| `tag`          | `str`  | Tag this version was fetched from    |
+| `is_latest`    | `bool` | Whether this is the latest version   |
+| `model`        | `str`  | Model bound to this version (if any) |
+| `metadata`     | `dict` | Additional metadata                  |
+| `source`       | `str`  | `"api"` or `"fallback"`              |
+| `content_hash` | `str`  | SHA-256 hash of the content          |
 
 ## Model Deployments
 

@@ -166,11 +168,11 @@ ze.send_feedback(
 )
 ```
 
-| Parameter | Type | Required | Description |
-| --- | --- | --- | --- |
-| `prompt_slug` | `str` | Yes | Prompt name (same as used in `ze.prompt()`) |
-| `completion_id` | `str` | Yes | UUID of the completion |
-| `thumbs_up` | `bool` | Yes | Positive or negative feedback |
-| `reason` | `str` | No | Explanation of the feedback |
-| `expected_output` | `str` | No | What the output should have been |
-| `metadata` | `dict` | No | Additional metadata |
+| Parameter         | Type   | Required | Description                                 |
+| ----------------- | ------ | -------- | ------------------------------------------- |
+| `prompt_slug`     | `str`  | Yes      | Prompt name (same as used in `ze.prompt()`) |
+| `completion_id`   | `str`  | Yes      | UUID of the completion                      |
+| `thumbs_up`       | `bool` | Yes      | Positive or negative feedback               |
+| `reason`          | `str`  | No       | Explanation of the feedback                 |
+| `expected_output` | `str`  | No       | What the output should have been            |
+| `metadata`        | `dict` | No       | Additional metadata                         |
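The `variables`, `render`, and `missing` parameters in the Python table above describe Mustache-style `{{var}}` substitution. A rough approximation of that behaviour, for intuition only (this is not the SDK's actual renderer; `render` here is an invented standalone function):

```python
import re

def render(content, variables, missing="error"):
    """Substitute {{var}} tokens; `missing` is "error" or "ignore"."""
    def sub(match):
        name = match.group(1)
        if name in variables:
            return variables[name]
        if missing == "error":
            raise KeyError(f"missing template variable: {name}")
        return match.group(0)  # "ignore": leave the token in place

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, content)

print(render("Support agent for {{company}}.", {"company": "TechCorp"}))
# Support agent for TechCorp.
```

With `missing="ignore"`, an unresolved `{{var}}` survives verbatim in the output, which matches the documented choice between failing fast and passing tokens through.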

autotune/sdks/typescript.mdx (46 additions, 44 deletions)

@@ -14,31 +14,33 @@ npm install zeroeval
 Replace hardcoded prompt strings with `ze.prompt()`. Your existing text becomes the fallback content that's used until an optimized version is available.
 
 ```typescript
-import * as ze from 'zeroeval';
-import { OpenAI } from 'openai';
+import * as ze from "zeroeval";
+import { OpenAI } from "openai";
 
 ze.init();
 const client = ze.wrap(new OpenAI());
 
 const systemPrompt = await ze.prompt({
-  name: 'support-bot',
-  content: 'You are a helpful customer support agent for {{company}}.',
-  variables: { company: 'TechCorp' },
+  name: "support-bot",
+  content: "You are a helpful customer support agent for {{company}}.",
+  variables: { company: "TechCorp" },
 });
 
 const response = await client.chat.completions.create({
-  model: 'gpt-4',
+  model: "gpt-4",
   messages: [
-    { role: 'system', content: systemPrompt },
-    { role: 'user', content: 'How do I reset my password?' },
+    { role: "system", content: systemPrompt },
+    { role: "user", content: "How do I reset my password?" },
   ],
 });
 ```
 
 Every call to `ze.prompt()` is tracked, versioned, and linked to the completions it produces. You'll see production traces at [ZeroEval → Prompts](https://app.zeroeval.com).
 
 <Note>
-When you provide `content`, ZeroEval automatically uses the latest optimized version from your dashboard if one exists. The `content` parameter serves as a fallback for when no optimized versions are available yet.
+When you provide `content`, ZeroEval automatically uses the latest optimized
+version from your dashboard if one exists. The `content` parameter serves as a
+fallback for when no optimized versions are available yet.
 </Note>
 
 ## Version Control

@@ -47,8 +49,8 @@ When you provide `content`, ZeroEval automatically uses the latest optimized ver
 ```typescript
 const prompt = await ze.prompt({
-  name: 'customer-support',
-  content: 'You are a helpful assistant.',
+  name: "customer-support",
+  content: "You are a helpful assistant.",
 });
 ```

@@ -58,9 +60,9 @@ Uses the latest optimized version if one exists, otherwise falls back to the pro
 ```typescript
 const prompt = await ze.prompt({
-  name: 'customer-support',
-  from: 'explicit',
-  content: 'You are a helpful assistant.',
+  name: "customer-support",
+  from: "explicit",
+  content: "You are a helpful assistant.",
 });
 ```

@@ -70,8 +72,8 @@ Always uses the provided content. Useful for debugging or A/B testing a specific
 ```typescript
 const prompt = await ze.prompt({
-  name: 'customer-support',
-  from: 'latest',
+  name: "customer-support",
+  from: "latest",
 });
 ```

@@ -81,47 +83,47 @@ Requires an optimized version to exist. Fails with `PromptRequestError` if none
 ```typescript
 const prompt = await ze.prompt({
-  name: 'customer-support',
-  from: 'a1b2c3d4...', // 64-char SHA-256 hash
+  name: "customer-support",
+  from: "a1b2c3d4...", // 64-char SHA-256 hash
 });
 ```
 
 ## Parameters
 
-| Parameter | Type | Required | Default | Description |
-| --- | --- | --- | --- | --- |
-| `name` | `string` | Yes || Task name for this prompt |
-| `content` | `string` | No | `undefined` | Prompt content (fallback or explicit) |
-| `from` | `string` | No | `undefined` | `"latest"`, `"explicit"`, or a 64-char SHA-256 hash |
-| `variables` | `Record<string, string>` | No | `undefined` | Template variables for `{{var}}` tokens |
+| Parameter   | Type                     | Required | Default     | Description                                         |
+| ----------- | ------------------------ | -------- | ----------- | --------------------------------------------------- |
+| `name`      | `string`                 | Yes      |             | Task name for this prompt                           |
+| `content`   | `string`                 | No       | `undefined` | Prompt content (fallback or explicit)               |
+| `from`      | `string`                 | No       | `undefined` | `"latest"`, `"explicit"`, or a 64-char SHA-256 hash |
+| `variables` | `Record<string, string>` | No       | `undefined` | Template variables for `{{var}}` tokens             |
 
 ### Return value
 
 Returns `Promise<string>` -- a decorated prompt string with metadata that integrations use to link completions to prompt versions and auto-patch models.
 
 ### Errors
 
-| Error | When |
-| --- | --- |
-| `Error` | Both `content` and `from` provided (except `from: "explicit"`), or neither |
-| `PromptRequestError` | `from: "latest"` but no versions exist |
-| `PromptNotFoundError` | `from` is a hash that doesn't exist |
+| Error                 | When                                                                       |
+| --------------------- | -------------------------------------------------------------------------- |
+| `Error`               | Both `content` and `from` provided (except `from: "explicit"`), or neither |
+| `PromptRequestError`  | `from: "latest"` but no versions exist                                     |
+| `PromptNotFoundError` | `from` is a hash that doesn't exist                                        |
 
 ## Model Deployments
 
 When you deploy a model to a prompt version in the dashboard, the SDK automatically patches the `model` parameter in your LLM calls:
 
 ```typescript
 const systemPrompt = await ze.prompt({
-  name: 'support-bot',
-  content: 'You are a helpful customer support agent.',
+  name: "support-bot",
+  content: "You are a helpful customer support agent.",
 });
 
 const response = await client.chat.completions.create({
-  model: 'gpt-4', // Gets replaced with the deployed model
+  model: "gpt-4", // Gets replaced with the deployed model
   messages: [
-    { role: 'system', content: systemPrompt },
-    { role: 'user', content: 'Hello' },
+    { role: "system", content: systemPrompt },
+    { role: "user", content: "Hello" },
   ],
 });
 ```

@@ -132,18 +134,18 @@ Attach feedback to completions to power prompt optimization:
 ```typescript
 await ze.sendFeedback({
-  promptSlug: 'support-bot',
+  promptSlug: "support-bot",
   completionId: response.id,
   thumbsUp: true,
-  reason: 'Clear and concise response',
+  reason: "Clear and concise response",
 });
 ```
 
-| Parameter | Type | Required | Description |
-| --- | --- | --- | --- |
-| `promptSlug` | `string` | Yes | Prompt name (same as used in `ze.prompt()`) |
-| `completionId` | `string` | Yes | UUID of the completion |
-| `thumbsUp` | `boolean` | Yes | Positive or negative feedback |
-| `reason` | `string` | No | Explanation of the feedback |
-| `expectedOutput` | `string` | No | What the output should have been |
-| `metadata` | `Record<string, unknown>` | No | Additional metadata |
+| Parameter        | Type                      | Required | Description                                 |
+| ---------------- | ------------------------- | -------- | ------------------------------------------- |
+| `promptSlug`     | `string`                  | Yes      | Prompt name (same as used in `ze.prompt()`) |
+| `completionId`   | `string`                  | Yes      | UUID of the completion                      |
+| `thumbsUp`       | `boolean`                 | Yes      | Positive or negative feedback               |
+| `reason`         | `string`                  | No       | Explanation of the feedback                 |
+| `expectedOutput` | `string`                  | No       | What the output should have been            |
+| `metadata`       | `Record<string, unknown>` | No       | Additional metadata                         |
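The errors table in the TypeScript reference above implies a small decision matrix over `content` and `from`. A toy resolver that mirrors it, written as an illustrative Python sketch rather than either SDK: the exception classes reuse the documented names, `resolve` and its `versions` dict (64-char SHA-256 hash to stored content) are invented, and `ValueError` stands in for the generic `Error` row.

```python
class PromptRequestError(Exception): ...
class PromptNotFoundError(Exception): ...

def resolve(name, versions, content=None, from_=None):
    """Toy resolution of the documented content/from matrix."""
    if from_ is None:
        if content is None:
            raise ValueError("provide either content or from")  # "or neither" row
        # Default mode: latest optimized version if any, else the fallback content.
        return next(reversed(versions.values()), content)
    if from_ == "explicit":
        if content is None:
            raise ValueError("from='explicit' requires content")
        return content  # always use the provided content
    if content is not None:
        raise ValueError("content and from are mutually exclusive here")
    if from_ == "latest":
        if not versions:
            raise PromptRequestError(f"no versions exist for {name}")
        return next(reversed(versions.values()))
    if from_ not in versions:
        raise PromptNotFoundError(from_)
    return versions[from_]  # pinned to a specific content hash

print(resolve("support-bot", {}, content="You are a helpful assistant."))
# You are a helpful assistant.
```

The point of the sketch is that the default mode never hard-fails: with no optimized versions it degrades to the fallback string, while `from: "latest"` and hash pinning are strict and raise instead.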
