Commit 553bf72
updates to submitting feedback
1 parent 13b3e82 commit 553bf72

1 file changed: autotune/tuning/prompts.mdx (+187 −0)
@@ -7,6 +7,189 @@ description: "Use feedback on production traces to generate and validate better

ZeroEval derives prompt optimization suggestions directly from feedback on your production traces. By capturing preference and correctness signals, we provide concrete prompt edits you can test and apply to your agents.

## Submitting Feedback

Feedback is the foundation of prompt optimization: it teaches ZeroEval what good and bad outputs look like for your specific use case. You can submit feedback on completions through the ZeroEval dashboard, the Python SDK, or the public API.

### Feedback through the dashboard

The easiest way to provide feedback is through the ZeroEval dashboard. Navigate to your task's "Suggestions" tab, review incoming completions, and give thumbs up/down ratings with optional reasons and expected outputs.
### Feedback through the SDK

For programmatic feedback submission, use the Python SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.

```python
import zeroeval as ze

ze.init()

# Send feedback for a specific completion
ze.send_feedback(
    prompt_slug="support-bot",
    completion_id="550e8400-e29b-41d4-a716-446655440000",
    thumbs_up=False,
    reason="Response was too verbose",
    expected_output="A concise 2-3 sentence response",
)
```
#### Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `prompt_slug` | `str` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
| `completion_id` | `str` | Yes | The UUID of the completion to give feedback on |
| `thumbs_up` | `bool` | Yes | `True` for positive feedback, `False` for negative feedback |
| `reason` | `str` | No | Explanation of why you gave this feedback |
| `expected_output` | `str` | No | Description of what the expected output should be |
| `metadata` | `dict` | No | Additional metadata to attach to the feedback |
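Because `thumbs_up` is a boolean, graded scores from an automated evaluator need to be thresholded before submission. A minimal sketch of that mapping (the helper and the 0.7 threshold are illustrative, not part of the SDK):

```python
def score_to_feedback(score: float, threshold: float = 0.7) -> dict:
    """Map a numeric evaluator score onto `send_feedback` keyword arguments.

    The 0.7 threshold is an illustrative choice; tune it for your evaluator.
    """
    thumbs_up = score >= threshold
    return {
        "thumbs_up": thumbs_up,
        "reason": None if thumbs_up else f"Evaluator score {score:.2f} below threshold",
        "metadata": {"evaluation_score": score},
    }

# Illustrative usage (completion_id elided):
# ze.send_feedback(prompt_slug="support-bot", completion_id=..., **score_to_feedback(0.45))
```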
<Note>
The `completion_id` is automatically tracked when you use `ze.prompt()` with automatic tracing enabled. You can access it from the OpenAI response object's `id` field, or retrieve it from your traces in the dashboard.
</Note>
#### Complete example with feedback

```python
import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

# Define your prompt
system_prompt = ze.prompt(
    name="support-bot",
    content="You are a helpful customer support agent."
)

# Make a completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

# Get the completion ID and text
completion_id = response.id
completion_text = response.choices[0].message.content

# Evaluate the response with your own logic (manual review, an LLM judge,
# heuristics, etc.); this placeholder stands in for that step.
def evaluate_response(text: str) -> bool:
    return "reset" in text.lower()

is_good_response = evaluate_response(completion_text)

# Send feedback based on the evaluation
ze.send_feedback(
    prompt_slug="support-bot",
    completion_id=completion_id,
    thumbs_up=is_good_response,
    reason="Clear step-by-step instructions" if is_good_response else "Missing link to reset page",
    expected_output=None if is_good_response else "Should include direct link: https://app.example.com/reset"
)
```
### Feedback through the API

For integration with non-Python systems, or for direct API access, you can submit feedback using the public HTTP API.

#### Endpoint

```
POST /v1/prompts/{prompt_slug}/completions/{completion_id}/feedback
```
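Both path parameters are required. A small sketch of how the path is assembled (the helper name is hypothetical; the base URL is taken from the cURL example in this section):

```python
# Illustrative helper (not part of any SDK) for building the feedback URL.
BASE_URL = "https://api.zeroeval.com"

def feedback_url(prompt_slug: str, completion_id: str) -> str:
    # Two path parameters: the prompt's slug and the completion's UUID.
    return f"{BASE_URL}/v1/prompts/{prompt_slug}/completions/{completion_id}/feedback"

url = feedback_url("support-bot", "550e8400-e29b-41d4-a716-446655440000")
```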
#### Authentication

Requires API key authentication via the `Authorization` header:

```
Authorization: Bearer YOUR_API_KEY
```
#### Request body

```json
{
  "thumbs_up": false,
  "reason": "Response was inaccurate",
  "expected_output": "The correct answer should mention X, Y, and Z",
  "metadata": {
    "evaluated_by": "automated_system",
    "evaluation_score": 0.45
  }
}
```
#### Response

```json
{
  "id": "fb123e45-67f8-90ab-cdef-1234567890ab",
  "completion_id": "550e8400-e29b-41d4-a716-446655440000",
  "prompt_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "prompt_version_id": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "project_id": "c3d4e5f6-a7b8-9012-cdef-123456789012",
  "thumbs_up": false,
  "reason": "Response was inaccurate",
  "expected_output": "The correct answer should mention X, Y, and Z",
  "metadata": {
    "evaluated_by": "automated_system",
    "evaluation_score": 0.45
  },
  "created_by": "user_id",
  "created_at": "2025-11-22T10:30:00Z",
  "updated_at": "2025-11-22T10:30:00Z"
}
```
#### Example with cURL

```bash
curl -X POST https://api.zeroeval.com/v1/prompts/support-bot/completions/550e8400-e29b-41d4-a716-446655440000/feedback \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "thumbs_up": false,
    "reason": "Response was too vague",
    "expected_output": "Should provide specific steps",
    "metadata": {
      "user_satisfaction": "low"
    }
  }'
```
#### Example with JavaScript/TypeScript

```typescript
const response = await fetch(
  'https://api.zeroeval.com/v1/prompts/support-bot/completions/550e8400-e29b-41d4-a716-446655440000/feedback',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      thumbs_up: false,
      reason: 'Response was too vague',
      expected_output: 'Should provide specific steps',
      metadata: {
        user_satisfaction: 'low'
      }
    })
  }
);

// Surface HTTP errors instead of silently parsing an error body
if (!response.ok) {
  throw new Error(`Feedback submission failed: ${response.status}`);
}

const feedback = await response.json();
console.log('Feedback submitted:', feedback);
```
<Warning>
If feedback already exists for the same completion from the same user, it will be updated with the new values. This allows you to correct or refine feedback as needed.
</Warning>
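The update-on-resubmit behavior can be pictured as a store keyed by completion and user. A toy model (not ZeroEval code; the IDs are illustrative) of those semantics:

```python
# Toy model of the upsert behavior described above: feedback is keyed by
# (completion_id, user_id), so a second submission for the same key
# overwrites the earlier values rather than creating a duplicate record.
feedback_store: dict[tuple[str, str], dict] = {}

def submit_feedback(completion_id: str, user_id: str, **values) -> dict:
    key = (completion_id, user_id)
    record = feedback_store.setdefault(key, {})
    record.update(values)  # later submissions replace earlier field values
    return record

submit_feedback("550e8400", "alice", thumbs_up=False, reason="Too vague")
# Correcting the earlier rating reuses the same key, so the record is updated.
final = submit_feedback("550e8400", "alice", thumbs_up=True, reason="Fixed after review")
```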
## Prompt optimizations from feedback

Once you've collected a good amount of feedback on a task's incoming traffic, you can generate prompt optimizations from that feedback by clicking the "Optimize Prompt" button in the task's "Suggestions" tab.
@@ -16,3 +199,7 @@ Once you've given a good amount of feedback on the incoming traffic for a given

Once you've generated a new prompt, you can test it with various models and see how it performs against the feedback you've already given.

<video src="/videos/auto-tuning-model-leaderboard.mp4" alt="Model leaderboard" controls muted playsInline loop preload="metadata" />

## Integration with span feedback

When you submit feedback for a completion that has an associated span (from automatic tracing), the feedback is automatically mirrored to that span. This connects feedback to tuning datasets and advanced optimization workflows, ensuring it is available wherever you need it.
