`autotune/prompts/prompts.mdx` (+81 −13)
@@ -17,9 +17,10 @@ The easiest way to provide feedback is through the ZeroEval dashboard. Navigate
### Feedback through the SDK
- For programmatic feedback submission, use the Python SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.
+ For programmatic feedback submission, use the Python or TypeScript SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.
- | `prompt_slug` | `str` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
- | `completion_id` | `str` | Yes | The UUID of the completion to provide feedback on |
- | `thumbs_up` | `bool` | Yes | `True` for positive feedback, `False` for negative feedback |
- | `reason` | `str` | No | Optional explanation of why you gave this feedback |
- | `expected_output` | `str` | No | Optional description of what the expected output should be |
- | `metadata` | `dict` | No | Optional additional metadata to attach to the feedback |
+ | Python | TypeScript | Type | Required | Description |
+ | --- | --- | --- | --- | --- |
+ | `prompt_slug` | `promptSlug` | `str`/`string` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
+ | `completion_id` | `completionId` | `str`/`string` | Yes | The UUID of the completion to provide feedback on |
+ | `thumbs_up` | `thumbsUp` | `bool`/`boolean` | Yes | `True`/`true` for positive, `False`/`false` for negative |
+ | `reason` | `reason` | `str`/`string` | No | Optional explanation of why you gave this feedback |
+ | `expected_output` | `expectedOutput` | `str`/`string` | No | Optional description of what the expected output should be |
+ | `metadata` | `metadata` | `dict`/`object` | No | Optional additional metadata to attach to the feedback |
<Note>
The `completion_id` is automatically tracked when you use `ze.prompt()` with automatic tracing enabled. You can access it from the OpenAI response object's `id` field, or retrieve it from your traces in the dashboard.
</Note>
#### Complete example with feedback
- ```python
+ <CodeGroup>
+ ```python Python
import zeroeval as ze
from openai import OpenAI
@@ -90,19 +107,70 @@ ze.send_feedback(
    expected_output=None if is_good_response else "Should include direct link: https://app.example.com/reset"
)
```
+ ```typescript TypeScript
+ import * as ze from 'zeroeval';
+ import { OpenAI } from 'openai';
+
+ ze.init();
+ const client = ze.wrap(new OpenAI());
+
+ // Define your prompt - ZeroEval will automatically use the latest optimized
+ // version from your dashboard if one exists, falling back to this content
+ const systemPrompt = await ze.prompt({
+   name: "support-bot",
+   content: "You are a helpful customer support agent."
…
+   reason: isGoodResponse ? "Clear step-by-step instructions" : "Missing link to reset page",
+   expectedOutput: isGoodResponse ? undefined : "Should include direct link: https://app.example.com/reset"
+ });
+ ```
+ </CodeGroup>
<Note>
**Auto-optimization**: When you use `ze.prompt()` with `content`, ZeroEval automatically fetches the latest optimized version from your dashboard if one exists. Your `content` serves as a fallback for initial setup. This means your prompts improve automatically as you tune them, without any code changes.
- If you need to test the hardcoded content specifically (e.g., for debugging or A/B testing), use `from_="explicit"`:
- ```python
+ If you need to test the hardcoded content specifically (e.g., for debugging or A/B testing), use `from_="explicit"` (Python) or `from: "explicit"` (TypeScript):
+
+ <CodeGroup>
+ ```python Python
# Bypass auto-optimization and always use this exact content
prompt = ze.prompt(
    name="support-bot",
    from_="explicit",
    content="You are a helpful customer support agent."
)
```
+ ```typescript TypeScript
+ // Bypass auto-optimization and always use this exact content
+ const prompt = await ze.prompt({
+   name: "support-bot",
+   from: "explicit",
+   content: "You are a helpful customer support agent."
+ });
+ ```
+ </CodeGroup>
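The parameter table in this diff pairs each Python snake_case name with its TypeScript camelCase equivalent. As a quick illustration of that naming convention only (a hypothetical helper, not part of either SDK):

```python
# Hypothetical helper showing the snake_case -> camelCase pairing from the
# parameter table above; not part of the ZeroEval SDK itself.
def snake_to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

# The Python-side feedback fields documented in the table.
python_fields = [
    "prompt_slug", "completion_id", "thumbs_up",
    "reason", "expected_output", "metadata",
]

mapping = {field: snake_to_camel(field) for field in python_fields}
print(mapping["thumbs_up"])        # thumbsUp
print(mapping["expected_output"])  # expectedOutput
```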
`autotune/reference.mdx` (+39 −16)
@@ -5,20 +5,24 @@ description: "Parameters and configuration for ze.prompt"
`ze.prompt` creates or fetches versioned prompts from the Prompt Library and returns decorated content for downstream LLM calls.
+ <Info>
+ **TypeScript differences**: In TypeScript, `ze.prompt()` is an async function that returns `Promise<string>`. Parameters use camelCase and are passed as an options object: `ze.prompt({ name: "...", content: "..." })`.
+ </Info>
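The auto-optimization note above describes a resolution order: `ze.prompt` prefers the latest optimized version from the dashboard, falls back to the hardcoded `content`, and skips optimization entirely in explicit mode. A rough sketch of that logic under a simplified signature (not the SDK's actual implementation):

```python
from typing import Optional

# Hypothetical sketch of the resolution behavior described in the docs:
# by default, prefer the dashboard's optimized version and fall back to the
# hardcoded content; from_="explicit" always uses the hardcoded content.
def resolve_prompt(content: str, optimized: Optional[str], from_: str = "auto") -> str:
    if from_ == "explicit":
        return content  # bypass auto-optimization entirely
    return optimized if optimized is not None else content

print(resolve_prompt("You are a helpful agent.", "Tuned prompt v3"))
# -> "Tuned prompt v3"
print(resolve_prompt("You are a helpful agent.", None))
# -> "You are a helpful agent."
```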