ZeroEval derives prompt optimization suggestions directly from feedback on your production traces. By capturing preference and correctness signals, it produces concrete prompt edits that you can test and apply to your agents.
## Submitting Feedback
Feedback is the foundation of prompt optimization. You can submit feedback for completions through the ZeroEval dashboard, the Python SDK, or the public API. Feedback helps ZeroEval understand what good and bad outputs look like for your specific use case.
### Feedback through the dashboard
The easiest way to provide feedback is through the ZeroEval dashboard. Navigate to your task's "Suggestions" tab, review incoming completions, and provide thumbs up/down feedback with optional reasons and expected outputs.
### Feedback through the SDK
For programmatic feedback submission, use the Python SDK. This is useful when you have automated evaluation systems or want to collect feedback from your application in production.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `prompt_slug` | `str` | Yes | The slug/name of your prompt (same as used in `ze.prompt()`) |
| `completion_id` | `str` | Yes | The UUID of the completion to provide feedback on |
| `thumbs_up` | `bool` | Yes | `True` for positive feedback, `False` for negative feedback |
| `reason` | `str` | No | Optional explanation of why you gave this feedback |
| `expected_output` | `str` | No | Optional description of what the expected output should be |
| `metadata` | `dict` | No | Optional additional metadata to attach to the feedback |
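The required/optional split in the table above can be sketched as a small payload builder. This is an illustrative helper, not part of the ZeroEval SDK (the actual feedback function is not shown in this excerpt); it only demonstrates which fields must always be present and which are included on demand:

```python
from typing import Optional


def build_feedback_payload(
    prompt_slug: str,
    completion_id: str,
    thumbs_up: bool,
    reason: Optional[str] = None,
    expected_output: Optional[str] = None,
    metadata: Optional[dict] = None,
) -> dict:
    """Assemble feedback fields into one payload, mirroring the table:
    the first three fields are required, the rest are attached only
    when provided. Illustrative only, not a ZeroEval SDK function."""
    if not prompt_slug or not completion_id:
        raise ValueError("prompt_slug and completion_id are required")
    payload = {
        "prompt_slug": prompt_slug,
        "completion_id": completion_id,
        "thumbs_up": thumbs_up,
    }
    if reason is not None:
        payload["reason"] = reason
    if expected_output is not None:
        payload["expected_output"] = expected_output
    if metadata is not None:
        payload["metadata"] = metadata
    return payload
```

Keeping optional fields out of the payload when they are unset (rather than sending `None`) keeps the submitted feedback record minimal.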
<Note>
The `completion_id` is automatically tracked when you use `ze.prompt()` with automatic tracing enabled. You can access it from the OpenAI response object's `id` field, or retrieve it from your traces in the dashboard.
</Note>
#### Complete example with feedback
```python
import zeroeval as ze
from openai import OpenAI

ze.init()
client = OpenAI()

# Define your prompt
system_prompt = ze.prompt(
    name="support-bot",
    content="You are a helpful customer support agent."
)

# Make a completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"}
    ]
)

# Per the note above, the completion id is available on the response object
completion_id = response.id
```
<Warning>
If feedback already exists for the same completion from the same user, it will be updated with the new values. This allows you to correct or refine feedback as needed.
</Warning>
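The update-on-resubmit behavior described above can be modeled as an upsert keyed by completion and user. The sketch below is illustrative only, not ZeroEval's implementation:

```python
# Illustrative sketch: resubmitting feedback behaves like an upsert keyed
# by (completion_id, user_id), so a later submission replaces the earlier
# one instead of creating a second record. Not ZeroEval internals.
feedback_store: dict = {}


def upsert_feedback(completion_id: str, user_id: str, thumbs_up: bool,
                    reason: str = None) -> dict:
    key = (completion_id, user_id)
    record = {"thumbs_up": thumbs_up, "reason": reason}
    feedback_store[key] = record  # overwrite any earlier feedback
    return record


# First submission, then a correction by the same user:
upsert_feedback("cmpl-123", "user-1", thumbs_up=False, reason="wrong tone")
upsert_feedback("cmpl-123", "user-1", thumbs_up=True, reason="fine on review")
```

After both calls, the store still holds exactly one record for that completion/user pair, with the corrected values.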
## Prompt optimizations from feedback
Once you've collected enough feedback on the incoming traffic for a given task, you can generate prompt optimizations from that feedback by clicking the "Optimize Prompt" button in the task's "Suggestions" tab.
Once you've generated a new prompt, you can test it with various models and see how it performs against the feedback you've already given.
When you submit feedback for a completion that has an associated span (from automatic tracing), the feedback is automatically mirrored to the span. This enables integration with tuning datasets and advanced optimization workflows, ensuring your feedback is available wherever you need it.
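Conceptually, mirroring attaches the same feedback record to both the completion and its tracing span. The following is an illustrative sketch with hypothetical data structures, not ZeroEval internals:

```python
# Illustrative sketch of feedback mirroring: one feedback record is written
# to the completion and copied to its associated span, so tuning datasets
# can read it from either place. Hypothetical structures, not ZeroEval code.
completion = {"id": "cmpl-123", "span_id": "span-abc", "feedback": None}
spans = {"span-abc": {"feedback": None}}


def record_feedback(completion: dict, spans: dict, feedback: dict) -> None:
    completion["feedback"] = feedback
    span_id = completion.get("span_id")
    if span_id and span_id in spans:  # mirror only when a span exists
        spans[span_id]["feedback"] = feedback


record_feedback(completion, spans, {"thumbs_up": True})
```

Completions without an associated span simply keep the feedback on the completion record alone.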