Annotations
To improve your LLM application iteratively, it's vital to collect feedback, annotate data during human review, and establish an evaluation pipeline so that you can monitor your application. In Phoenix, this type of feedback is captured in the form of annotations.
Phoenix gives you the ability to annotate traces with feedback from the UI, your application, or wherever you would like to perform evaluation. Phoenix's annotation model is simple yet powerful: given an entity such as a collected span, you can assign it a label and/or a score.
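As a minimal sketch of this model, the example below attaches labels and scores to collected spans from code, assuming the arize-phoenix package, a running Phoenix server, and the SpanEvaluations API for logging evaluation results; the span IDs and the "correctness" evaluation name are placeholders for illustration.

```python
# A minimal sketch, assuming arize-phoenix is installed and a Phoenix
# server is already collecting traces.
import pandas as pd
import phoenix as px
from phoenix.trace import SpanEvaluations

# Each row attaches a label and/or score to a collected span,
# keyed by the span's ID.
eval_df = pd.DataFrame(
    {
        "label": ["correct", "incorrect"],
        "score": [1.0, 0.0],
    },
    # Hypothetical span IDs; use the IDs of your own collected spans.
    index=pd.Index(["span-id-1", "span-id-2"], name="context.span_id"),
)

# Log the annotations to Phoenix under an evaluation name.
px.Client().log_evaluations(
    SpanEvaluations(eval_name="correctness", dataframe=eval_df)
)
```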
Navigate to the Feedback tab in this demo trace to see how LLM-based evaluations appear in Phoenix.
Learn more about the underlying concepts in Concepts: Annotations.
Configure Annotation Configs to guide human annotations.
Learn how to run evals in Running Evals on Traces.
Learn how to log annotations via the client from your app or in a notebook.