---
title: "Submitting Feedback"
description: "Programmatically submit feedback for judge evaluations via SDK"
---

## Overview

When calibrating judges, you can submit feedback programmatically using the SDK.
This is useful for:

- Bulk feedback submission from automated pipelines
- Integration with custom review workflows
- Syncing feedback from external labeling tools

## Important: Using the Correct IDs

Judge evaluations involve two related spans:

| ID | Description |
|---|---|
| **Source Span ID** | The original LLM call that was evaluated |
| **Judge Call Span ID** | The span created when the judge ran its evaluation |

When submitting feedback, always include the `judge_id` parameter to ensure the
feedback is correctly associated with the judge evaluation.

## Python SDK

### From the UI (Recommended)

The easiest way to get the correct IDs is from the Judge Evaluation modal:

1. Open a judge evaluation in the dashboard
2. Expand the "SDK Integration" section
3. Click "Copy" to copy the pre-filled Python code
4. Paste and customize the generated code

### Manual Submission

```python
from zeroeval import ZeroEval

client = ZeroEval()

# Submit feedback for a judge evaluation
client.send_feedback(
    prompt_slug="your-judge-task-slug",  # The task/prompt associated with the judge
    completion_id="span-id-here",        # The span ID from the evaluation
    thumbs_up=True,                      # True = correct, False = incorrect
    reason="Optional explanation",
    judge_id="automation-id-here",       # Required for judge feedback
)
```

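`send_feedback` makes a network request, so when calling it from an automated pipeline you may want to guard it so a single failed submission does not abort the run. A minimal sketch, assuming the client raises a standard Python exception on failure (the exact exception type depends on your SDK version):

```python
try:
    client.send_feedback(
        prompt_slug="your-judge-task-slug",
        completion_id="span-id-here",
        thumbs_up=True,
        reason="Optional explanation",
        judge_id="automation-id-here",
    )
except Exception as exc:  # replace with the SDK's specific exception if one is exposed
    # Log and move on rather than failing the whole pipeline
    print(f"Feedback submission failed: {exc}")
```
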
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt_slug` | str | Yes | The task slug associated with the judge |
| `completion_id` | str | Yes | The span ID being evaluated |
| `thumbs_up` | bool | Yes | `True` if the judge was correct, `False` if it was wrong |
| `reason` | str | No | Explanation of the feedback |
| `judge_id` | str | Yes* | The judge automation ID (*required for judge feedback) |

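For example, to flag an evaluation where the judge got it wrong, pass `thumbs_up=False` along with a `reason` describing the miss (the IDs below are placeholders):

```python
client.send_feedback(
    prompt_slug="your-judge-task-slug",
    completion_id="span-id-here",
    thumbs_up=False,  # the judge's verdict was incorrect
    reason="Judge flagged a hallucination, but the answer was grounded in the provided context",
    judge_id="automation-id-here",
)
```
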
## REST API

```bash
curl -X POST "https://api.zeroeval.com/v1/prompts/{task_slug}/completions/{span_id}/feedback" \
  -H "Authorization: Bearer $ZEROEVAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "thumbs_up": true,
    "reason": "Judge correctly identified the issue",
    "judge_id": "automation-uuid-here"
  }'
```

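The same endpoint can be called from any HTTP client. A minimal Python sketch using the `requests` library, assuming `ZEROEVAL_API_KEY` is set in the environment and substituting placeholder values for `{task_slug}` and `{span_id}`:

```python
import os

import requests

task_slug = "your-judge-task-slug"
span_id = "span-id-here"

response = requests.post(
    f"https://api.zeroeval.com/v1/prompts/{task_slug}/completions/{span_id}/feedback",
    headers={
        "Authorization": f"Bearer {os.environ['ZEROEVAL_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "thumbs_up": True,
        "reason": "Judge correctly identified the issue",
        "judge_id": "automation-uuid-here",
    },
)
response.raise_for_status()  # surfaces 4xx/5xx errors from the API
```
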
## Finding Your IDs

| ID | Where to Find It |
|---|---|
| **Task Slug** | In the judge settings, or in the URL when editing the judge's prompt |
| **Span ID** | In the evaluation modal, or via the `get_judge_evaluations()` response |
| **Judge ID** | In the URL when viewing a judge (`/judges/{judge_id}`) |

## Bulk Feedback Submission

To submit feedback on multiple evaluations, iterate over the results of `get_judge_evaluations()`:

```python
from zeroeval import ZeroEval

client = ZeroEval()

# Get evaluations to review
evaluations = client.get_judge_evaluations(
    project_id="your-project-id",
    judge_id="your-judge-id",
    limit=100,
)

# Submit feedback for each evaluation
for evaluation in evaluations["evaluations"]:
    # Your logic to determine whether the judge's verdict was correct
    is_correct = your_review_logic(evaluation)

    client.send_feedback(
        prompt_slug="your-judge-task-slug",
        completion_id=evaluation["span_id"],
        thumbs_up=is_correct,
        reason="Automated review",
        judge_id="your-judge-id",
    )
```

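One concrete way to implement `your_review_logic` is to sync verdicts from an external labeling tool: if reviewers record, for each span ID, whether the judge's verdict was correct, the loop above reduces to a dictionary lookup. A sketch that continues from the block above, assuming you have exported those labels as a `{span_id: bool}` mapping (the `human_labels` name and its contents are hypothetical):

```python
# Exported from your labeling tool: span ID -> "was the judge correct?"
human_labels: dict[str, bool] = {
    "span-id-1": True,
    "span-id-2": False,
}

for evaluation in evaluations["evaluations"]:
    span_id = evaluation["span_id"]
    if span_id not in human_labels:
        continue  # no human verdict recorded for this evaluation yet

    client.send_feedback(
        prompt_slug="your-judge-task-slug",
        completion_id=span_id,
        thumbs_up=human_labels[span_id],
        reason="Synced from external labeling tool",
        judge_id="your-judge-id",
    )
```
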

## Related

- [Pulling Evaluations](/judges/pull-evaluations) - Retrieve judge evaluations programmatically
- [Judge Setup](/judges/setup) - Configure and deploy judges