llm-agent-evaluator

A framework for evaluating and giving structured feedback to LLM-based agents (such as OpenAI's GPT models). It provides a feedback-loop mechanism to analyze, critique, and improve LLM performance through real-time or post-hoc feedback, expressed either as structured evaluation metrics or as natural language.
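
To make the idea concrete, here is a minimal sketch of what such a feedback loop might look like. The class and function names below (`FeedbackLoop`, `Evaluation`, the toy agent and evaluator) are illustrative assumptions, not the project's actual API: an agent produces a response, an evaluator scores and critiques it, and the critique is fed back to the agent for another attempt until it passes or the round limit is reached.

```python
# Illustrative sketch only -- the names here are assumptions,
# not the actual llm-agent-evaluator API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Evaluation:
    """Structured feedback for a single agent response."""
    scores: Dict[str, float]   # e.g. {"relevance": 0.8, "accuracy": 0.6}
    critique: str               # natural-language feedback
    passed: bool                # did the response meet the acceptance threshold?


@dataclass
class FeedbackLoop:
    """Run an agent, score its output, and feed the critique back for a retry."""
    agent: Callable[[str], str]                   # prompt -> response
    evaluator: Callable[[str, str], Evaluation]   # (prompt, response) -> Evaluation
    max_rounds: int = 3
    history: List[Evaluation] = field(default_factory=list)

    def run(self, prompt: str) -> str:
        response = self.agent(prompt)
        for _ in range(self.max_rounds):
            evaluation = self.evaluator(prompt, response)
            self.history.append(evaluation)
            if evaluation.passed:
                break
            # Post-hoc feedback: append the critique and ask the agent to revise.
            revised_prompt = (
                f"{prompt}\n\nPrevious answer:\n{response}\n\n"
                f"Feedback: {evaluation.critique}\nPlease revise your answer."
            )
            response = self.agent(revised_prompt)
        return response


if __name__ == "__main__":
    # Toy agent and keyword-based evaluator, standing in for an LLM and a metric.
    def agent(prompt: str) -> str:
        return "Paris is the capital of France." if "capital" in prompt else "I am not sure."

    def evaluator(prompt: str, response: str) -> Evaluation:
        score = 1.0 if "Paris" in response else 0.0
        return Evaluation({"accuracy": score}, "Mention the city by name.", score >= 1.0)

    loop = FeedbackLoop(agent=agent, evaluator=evaluator)
    print(loop.run("What is the capital of France?"))
```

In practice the toy `agent` and `evaluator` callables would be replaced by a real LLM call and by the framework's structured metrics or natural-language critic; the loop structure itself is the part this sketch is meant to illustrate.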
