
Commit 9d8c183

updates and fixes
1 parent e2ede4f commit 9d8c183

File tree

8 files changed: +1128, -42 lines


autotune/introduction.mdx

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ Prompt optimization is a different approach to the traditional evals experience.
 
 <Steps>
 <Step title="Instrument your code">
-Replace hardcoded prompts with `ze.prompt()` calls
+Replace hardcoded prompts with `ze.prompt()` calls in Python or `ze.prompt({...})` in TypeScript
 </Step>
 <Step title="Every change creates a version">
 Each time you modify your prompt content, a new version is automatically created and tracked
@@ -27,7 +27,7 @@ Prompt optimization is a different approach to the traditional evals experience.
 
 <CardGroup cols={2}>
 <Card title="Setup Guide" icon="wrench" href="/autotune/setup">
-Learn how to integrate ze.prompt() into your codebase
+Learn how to integrate ze.prompt() into your Python or TypeScript codebase
 </Card>
 <Card title="Prompts Guide" icon="sliders" href="/autotune/prompts">
 Run experiments and deploy winning combinations

autotune/prompts/prompts.mdx

Lines changed: 81 additions & 13 deletions

@@ -17,9 +17,10 @@ The easiest way to provide feedback is through the ZeroEval dashboard. Navigate
 
 ### Feedback through the SDK
 
-For programmatic feedback submission, use the Python SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.
+For programmatic feedback submission, use the Python or TypeScript SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.
 
-```python
+<CodeGroup>
+```python Python
 import zeroeval as ze
 
 ze.init()
@@ -33,25 +34,41 @@ ze.send_feedback(
     expected_output="A concise 2-3 sentence response"
 )
 ```
+```typescript TypeScript
+import * as ze from 'zeroeval';
+
+ze.init();
+
+// Send feedback for a specific completion
+await ze.sendFeedback({
+  promptSlug: "support-bot",
+  completionId: "550e8400-e29b-41d4-a716-446655440000",
+  thumbsUp: false,
+  reason: "Response was too verbose",
+  expectedOutput: "A concise 2-3 sentence response"
+});
+```
+</CodeGroup>
 
 #### Parameters
 
-| Parameter | Type | Required | Description |
-| --- | --- | --- | --- |
-| `prompt_slug` | `str` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
-| `completion_id` | `str` | Yes | The UUID of the completion to provide feedback on |
-| `thumbs_up` | `bool` | Yes | `True` for positive feedback, `False` for negative feedback |
-| `reason` | `str` | No | Optional explanation of why you gave this feedback |
-| `expected_output` | `str` | No | Optional description of what the expected output should be |
-| `metadata` | `dict` | No | Optional additional metadata to attach to the feedback |
+| Python | TypeScript | Type | Required | Description |
+| --- | --- | --- | --- | --- |
+| `prompt_slug` | `promptSlug` | `str`/`string` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
+| `completion_id` | `completionId` | `str`/`string` | Yes | The UUID of the completion to provide feedback on |
+| `thumbs_up` | `thumbsUp` | `bool`/`boolean` | Yes | `True`/`true` for positive, `False`/`false` for negative |
+| `reason` | `reason` | `str`/`string` | No | Optional explanation of why you gave this feedback |
+| `expected_output` | `expectedOutput` | `str`/`string` | No | Optional description of what the expected output should be |
+| `metadata` | `metadata` | `dict`/`object` | No | Optional additional metadata to attach to the feedback |
 
 <Note>
 The `completion_id` is automatically tracked when you use `ze.prompt()` with automatic tracing enabled. You can access it from the OpenAI response object's `id` field, or retrieve it from your traces in the dashboard.
 </Note>
 
 #### Complete example with feedback
 
-```python
+<CodeGroup>
+```python Python
 import zeroeval as ze
 from openai import OpenAI
 
@@ -90,19 +107,70 @@ ze.send_feedback(
     expected_output=None if is_good_response else "Should include direct link: https://app.example.com/reset"
 )
 ```
+```typescript TypeScript
+import * as ze from 'zeroeval';
+import { OpenAI } from 'openai';
+
+ze.init();
+const client = ze.wrap(new OpenAI());
+
+// Define your prompt - ZeroEval will automatically use the latest optimized
+// version from your dashboard if one exists, falling back to this content
+const systemPrompt = await ze.prompt({
+  name: "support-bot",
+  content: "You are a helpful customer support agent."
+});
+
+// Make a completion
+const response = await client.chat.completions.create({
+  model: "gpt-4",
+  messages: [
+    { role: "system", content: systemPrompt },
+    { role: "user", content: "How do I reset my password?" }
+  ]
+});
+
+// Get the completion ID and text
+const completionId = response.id;
+const completionText = response.choices[0].message.content;
+
+// Evaluate the response (manually or automatically)
+const isGoodResponse = evaluateResponse(completionText);
+
+// Send feedback based on evaluation
+await ze.sendFeedback({
+  promptSlug: "support-bot",
+  completionId: completionId,
+  thumbsUp: isGoodResponse,
+  reason: isGoodResponse ? "Clear step-by-step instructions" : "Missing link to reset page",
+  expectedOutput: isGoodResponse ? undefined : "Should include direct link: https://app.example.com/reset"
+});
+```
+</CodeGroup>
 
 <Note>
 **Auto-optimization**: When you use `ze.prompt()` with `content`, ZeroEval automatically fetches the latest optimized version from your dashboard if one exists. Your `content` serves as a fallback for initial setup. This means your prompts improve automatically as you tune them, without any code changes.
 
-If you need to test the hardcoded content specifically (e.g., for debugging or A/B testing), use `from_="explicit"`:
-```python
+If you need to test the hardcoded content specifically (e.g., for debugging or A/B testing), use `from_="explicit"` (Python) or `from: "explicit"` (TypeScript):
+
+<CodeGroup>
+```python Python
 # Bypass auto-optimization and always use this exact content
 prompt = ze.prompt(
     name="support-bot",
     from_="explicit",
     content="You are a helpful customer support agent."
 )
 ```
+```typescript TypeScript
+// Bypass auto-optimization and always use this exact content
+const prompt = await ze.prompt({
+  name: "support-bot",
+  from: "explicit",
+  content: "You are a helpful customer support agent."
+});
+```
+</CodeGroup>
 </Note>
 
 ### Feedback through the API
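The complete example in this file calls an `evaluate_response` (Python) / `evaluateResponse` (TypeScript) helper that the diff leaves undefined. As a hedged illustration only — the helper is hypothetical and not part of the zeroeval SDK — a minimal heuristic matching the feedback strings used above ("too verbose", "Missing link to reset page") could look like:

```python
# Hypothetical evaluator mirroring the docs' feedback criteria: the reply
# should be reasonably short and include the direct reset link.
# Not part of the zeroeval SDK; any real evaluator would be app-specific.
RESET_LINK = "https://app.example.com/reset"

def evaluate_response(text: str) -> bool:
    short_enough = len(text.split()) <= 60  # rough proxy for "2-3 sentences"
    has_link = RESET_LINK in text
    return short_enough and has_link

print(evaluate_response(f"Reset your password here: {RESET_LINK}"))  # True
print(evaluate_response("Please contact support for assistance."))   # False
```

In practice this slot is often filled by an LLM-as-judge call or a user-facing thumbs-up widget; the boolean it returns is what flows into `thumbs_up`/`thumbsUp`.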

autotune/reference.mdx

Lines changed: 39 additions & 16 deletions

@@ -5,20 +5,24 @@ description: "Parameters and configuration for ze.prompt"
 
 `ze.prompt` creates or fetches versioned prompts from the Prompt Library and returns decorated content for downstream LLM calls.
 
+<Info>
+**TypeScript differences**: In TypeScript, `ze.prompt()` is an async function that returns `Promise<string>`. Parameters use camelCase and are passed as an options object: `ze.prompt({ name: "...", content: "..." })`.
+</Info>
+
 ## Parameters
 
-| Parameter | Type | Required | Default | Description |
-| ----------- | --------------- | -------- | ------- | ----------- |
-| `name` | string | yes | | Task name associated with the prompt in the library |
-| `content` | string | no | `None` | Raw prompt content to ensure/create a version by content |
-| `from_` | string | no | `None` | Either `"latest"` or a 64‑char lowercase SHA‑256 content hash to fetch a specific version |
-| `from` | string (alias) | no | `None` | Alias for `from_` (keyword‑only) |
-| `variables` | dict | no | `None` | Template variables to render `{{variable}}` tokens in content |
+| Python | TypeScript | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- | --- |
+| `name` | `name` | string | yes | | Task name associated with the prompt in the library |
+| `content` | `content` | string | no | `None`/`undefined` | Raw prompt content to ensure/create a version by content |
+| `from_` | `from` | string | no | `None`/`undefined` | Either `"latest"`, `"explicit"`, or a 64‑char SHA‑256 hash |
+| `variables` | `variables` | dict/object | no | `None`/`undefined` | Template variables to render `{{variable}}` tokens |
 
 Notes:
 
-- Exactly one of `content` or `from_/from` must be provided.
-- `from="latest"` fetches the latest version bound to the task; otherwise `from_` must be a 64‑char hex SHA‑256 hash.
+- In Python, use `from_` (with underscore) as `from` is a reserved keyword. TypeScript uses `from` directly.
+- Exactly one of `content` or `from` must be provided (except when using `from: "explicit"` with `content`).
+- `from="latest"` fetches the latest version bound to the task; otherwise `from` must be a 64‑char hex SHA‑256 hash.
 
 ## Behavior
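The `variables` parameter in the table above renders `{{variable}}` tokens into the prompt content. As a sketch of that substitution semantics only — the SDK's actual renderer may behave differently, e.g. around missing keys or escaping — the idea is:

```python
import re

def render(content: str, variables: dict) -> str:
    # Sketch of {{variable}} substitution as described in the reference table.
    # Assumption: unknown tokens are left intact; zeroeval's renderer may differ.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        content,
    )

print(render("You are a helpful assistant for {{product}}.", {"product": "Acme"}))
# → You are a helpful assistant for Acme.
```

With the real SDK you would not call a renderer yourself; passing `variables={...}` to `ze.prompt()` performs the substitution.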

@@ -34,19 +38,21 @@ OpenAI integration: when `prompt_version_id` is present, the SDK will automatica
 
 ## Return Value
 
-- `str`: Decorated prompt content ready to pass into LLM clients.
+- **Python**: `str` - Decorated prompt content ready to pass into LLM clients.
+- **TypeScript**: `Promise<string>` - Async function returning decorated prompt content.
 
 ## Errors
 
-| Error | When |
-| ---------------------- | ---- |
-| `ValueError` | Both `content` and `from_` provided, or neither; invalid `from_` (not `"latest"` or 64‑char hex) |
-| `PromptRequestError` | `from_="latest"` but no versions exist for `name` |
-| `PromptNotFoundError` | `from_` is a hash that does not exist for `name` |
+| Python | TypeScript | When |
+| --- | --- | --- |
+| `ValueError` | `Error` | Both `content` and `from` provided (except explicit), or neither; invalid `from` value |
+| `PromptRequestError` | `PromptRequestError` | `from="latest"` but no versions exist for `name` |
+| `PromptNotFoundError` | `PromptNotFoundError` | `from` is a hash that does not exist for `name` |
 
 ## Examples
 
-```python
+<CodeGroup>
+```python Python
 import zeroeval as ze
 
 # Create/ensure a version by content
@@ -62,5 +68,22 @@ system = ze.prompt(name="support-triage", from_="latest")
 # Fetch a specific version by content hash
 system = ze.prompt(name="support-triage", from_="c6a7...deadbeef...0123")
 ```
+```typescript TypeScript
+import * as ze from 'zeroeval';
+
+// Create/ensure a version by content
+let system = await ze.prompt({
+  name: "support-triage",
+  content: "You are a helpful assistant for {{product}}.",
+  variables: { product: "Acme" },
+});
+
+// Fetch the latest version for this task
+system = await ze.prompt({ name: "support-triage", from: "latest" });
+
+// Fetch a specific version by content hash
+system = await ze.prompt({ name: "support-triage", from: "c6a7...deadbeef...0123" });
+```
+</CodeGroup>
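The `from` parameter documented above accepts a 64-char lowercase SHA-256 content hash. Assuming the hash is computed over the raw UTF-8 prompt content — an assumption; the SDK may canonicalize the content before hashing — the expected shape can be produced with the standard library:

```python
import hashlib

content = "You are a helpful customer support agent."
# Assumption: hash of the raw UTF-8 content; zeroeval may normalize first.
version_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()

print(len(version_hash))  # 64
print(version_hash == version_hash.lower())  # True - lowercase hex, as the reference requires
```

This is the hash format `ze.prompt(name=..., from_=...)` validates against when the value is not `"latest"` or `"explicit"`.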
