Commit 2325fb5

Fix broken 'evaluate models' link to existing page (/judges/introduction) [bug: 77f47714-f7b0-4204-8804-5c357be07b49]
1 parent: 9cf7891

File tree

1 file changed (+1 −1 lines changed)


autotune/introduction.mdx

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ ZeroEval Prompts gives you version control for prompts with a single function ca
 
 - **Version history** -- every prompt change creates a new version you can compare and roll back to
 - **Production visibility** -- see exactly which prompt version is running, how often it's called, and what it produces
-- **Feedback loop** -- attach thumbs-up/down feedback to completions, then use it to [optimize prompts](/autotune/prompts/prompts) and [evaluate models](/autotune/prompts/models)
+- **Feedback loop** -- attach thumbs-up/down feedback to completions, then use it to [optimize prompts](/autotune/prompts/prompts) and [evaluate models](/judges/introduction)
 - **One-click deployments** -- push a winning prompt or model to production without redeploying your app
 
 ## How it works
