ZeroEval derives prompt optimization suggestions directly from feedback on your production traces. By capturing preferences and corrections, we provide concrete prompt edits you can test and use for your agents.
`feedback/introduction.mdx`
ZeroEval supports two kinds of feedback:
- **Human feedback** -- thumbs-up/down, star ratings, corrections, and expected outputs submitted by users or reviewers
- **AI feedback** -- automated evaluations from calibrated judges that score outputs against criteria you define
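To make the human-feedback side concrete, feedback of this kind is typically submitted as a small structured payload attached to a completion. The function and field names below are illustrative assumptions, not ZeroEval's actual SDK or API schema -- consult the Feedback API reference for the real shape:

```python
import json


def build_feedback_payload(completion_id, rating=None, thumbs_up=None,
                           correction=None, expected_output=None):
    """Assemble a human-feedback payload for a completion.

    All field names here are hypothetical -- the real Feedback API
    schema may differ.
    """
    payload = {"completion_id": completion_id}
    if thumbs_up is not None:
        payload["thumbs_up"] = thumbs_up
    if rating is not None:
        # Assume a 1-5 star scale, as suggested by "star ratings" above.
        if not 1 <= rating <= 5:
            raise ValueError("star rating must be between 1 and 5")
        payload["rating"] = rating
    if correction is not None:
        payload["correction"] = correction
    if expected_output is not None:
        payload["expected_output"] = expected_output
    return payload


# A reviewer thumbs-down with a correction attached:
body = build_feedback_payload(
    "cmpl_123",
    thumbs_up=False,
    correction="The refund window is 30 days, not 14.",
)
print(json.dumps(body, indent=2))
```

Corrections and expected outputs are the highest-signal fields for prompt optimization, since they state what the output should have been rather than only that it was wrong.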
Both feed into the same system. Feedback attached to completions powers [prompt optimization](/autotune/introduction). You can also retrieve unified feedback -- combining human reviews and judge evaluations -- for any span, trace, or session via the [Feedback API](/feedback/api-reference#unified-entity-feedback).
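To sketch what "unified" retrieval means, the snippet below merges human reviews and judge evaluations for one entity into a single chronological stream. The record shapes are assumptions for illustration, not the documented response format of the Feedback API:

```python
def unify_feedback(human_reviews, judge_evals):
    """Merge human and AI feedback into one chronological list,
    tagging each item with its source.

    Illustrative only -- see the Feedback API reference for the
    real unified-feedback response shape.
    """
    items = [{"source": "human", **r} for r in human_reviews]
    items += [{"source": "judge", **e} for e in judge_evals]
    # ISO 8601 timestamps sort correctly as strings.
    return sorted(items, key=lambda item: item["created_at"])


unified = unify_feedback(
    [{"created_at": "2024-05-02T10:00:00Z", "thumbs_up": True}],
    [{"created_at": "2024-05-01T09:00:00Z", "judge": "helpfulness", "score": 0.8}],
)
print([item["source"] for item in unified])  # ['judge', 'human']
```

The point of a unified view is that a reviewer's thumbs-down and a judge's low helpfulness score for the same trace show up side by side instead of in two separate tools.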
## How feedback flows
criteria.
</Step>
<Step title="Quality becomes measurable">
Feedback appears on spans, traces, and completions in the console. Filter by
thumbs-up rate, judge scores, or tags to find patterns.
</Step>
<Step title="Improvements are driven by data">
Use feedback to optimize prompts, compare models, calibrate judges, and