Improving LLM-as-judge evaluators using human feedback (beta)


Reliable LLM-as-a-judge evaluators are critical for making informed decisions about your AI applications (e.g., prompt, model, or architecture changes). But getting your evaluator prompt just right can be challenging, and a misaligned prompt undermines the trustworthiness of your evaluations.

This guide will walk you through aligning your LLM-as-a-judge evaluator using human feedback, ultimately improving your evaluator's quality to help you build reliable AI applications.

How it works​

LangSmith's Align Evaluator feature has a series of steps that help you align your LLM-as-a-judge evaluator with human expert feedback:

  1. Select one or more experiments that contain outputs from your application.
  2. Add the selected experiments to an annotation queue where a human expert can label the data.
  3. Test your LLM-as-a-judge evaluator prompt against the labeled examples. Check the cases where your evaluator result is not aligned with the labeled data. This indicates areas where your evaluator prompt needs improvement.
  4. Refine and repeat to improve evaluator alignment. Update your LLM-as-a-judge evaluator prompt and test again.
Offline evaluations only

Currently, this feature is only supported for evaluations over a dataset (offline evaluations). Support for online evaluators is planned for the near future.

Prerequisites​

In order to get started, you need a dataset with at least one experiment.

Collecting a representative dataset

Datasets should be representative of inputs and outputs you expect to see in production.

While you don’t need to cover every possible scenario, it’s important to include examples across the full range of expected use cases. For example, if you're building a sports bot that answers questions about baseball, basketball, and football, your dataset should include at least one labeled example from each sport.

  • You can upload or create a dataset using the SDK or the UI
  • You can run an experiment using the SDK or in the Playground (see the sketch below)

Experiment
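For reference, a minimal sketch of both steps using the LangSmith Python SDK might look like the following. The dataset name, example content, and target function are illustrative, and the exact SDK surface may vary by version:

```python
from langsmith import Client, evaluate

client = Client()

# Create a dataset that is representative of production traffic
# (the name and examples below are placeholders).
dataset = client.create_dataset("sports-bot-qa")
client.create_examples(
    inputs=[
        {"question": "Who won the 2016 World Series?"},
        {"question": "How long is an NBA game?"},
    ],
    outputs=[
        {"answer": "The Chicago Cubs."},
        {"answer": "48 minutes of regulation play."},
    ],
    dataset_id=dataset.id,
)

# Target function wrapping your application; replace the stub with real app logic.
def sports_bot(inputs: dict) -> dict:
    return {"answer": f"(stubbed answer for: {inputs['question']})"}

# Run an experiment over the dataset; its outputs can then be sent for human labeling.
evaluate(
    sports_bot,
    data="sports-bot-qa",
    experiment_prefix="sports-bot-baseline",
)
```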

Getting started​

Create a new evaluator using labeled data:​

  1. Navigate to the Datasets and Experiments tab
  2. Select a dataset
  3. Click on +Evaluator
  4. Choose Create from labeled data
  5. Name your feedback key. This name should reflect what you're evaluating (e.g., correctness, hallucination). The name of an evaluator cannot be changed after it is created.

Align an existing evaluator using labeled data:​

  1. Navigate to the Datasets and Experiments tab
  2. Select a dataset
  3. Go to the Evaluators tab
  4. Click on the evaluator you want to align
  5. In the "Align Evaluator with experiment data" box, click Select Experiments
note

This feature is only supported for boolean evaluators created after July 10, 2025.

1. Select experiments​

Select one or more experiments to send for human labeling. This will add all runs from the selected experiments to an annotation queue.

Add to evaluator queue

To add any new experiments to an existing annotation queue, head to the Evaluators tab, select the evaluator you are aligning and click Add to Queue.

2. Label examples​

Label examples in the annotation queue by adding a feedback score. Once you've labeled an example, click Add to Reference Dataset.

Labeling examples

If you have a large number of examples in your experiments, you don't need to label every example to get started. We recommend starting with at least 20 examples; you can always add more later. The examples you label should be diverse (balanced between 0 and 1 labels) and representative (reflective of real use cases) so that you build a well-rounded evaluator prompt.
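As you label, it can help to keep an eye on how balanced your 0 and 1 labels are. A rough sketch of checking this with the SDK is shown below; the reference dataset name and the "score" output key are assumptions, so adjust them to match where your labels are actually stored:

```python
from collections import Counter
from langsmith import Client

client = Client()

# Assumed reference dataset name and label key; verify these against your setup.
label_counts = Counter()
for example in client.list_examples(dataset_name="correctness-reference"):
    label = (example.outputs or {}).get("score")
    if label is not None:
        label_counts[label] += 1

print(label_counts)  # e.g. Counter({1: 14, 0: 6}) -> consider labeling more negatives
```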

3. Test your evaluator prompt against the labeled examples​

Once you have labeled examples, the next step is to iterate on your evaluator prompt so that it matches the labeled data as closely as possible. This iteration is done in the Evaluator Playground.

To open the Evaluator Playground, click the View evaluator button at the top right of the evaluator queue. This takes you to the detail page of the evaluator you are aligning. From there, click the Evaluator Playground button.

Evaluator Playground

In the evaluator playground you can create or edit your evaluator prompt and click Start Alignment to run it over the set of labeled examples that you created in Step 2. After running your evaluator, you'll see how its generated scores compare to your human labels. The alignment score is the percentage of examples where the evaluator's judgment matches that of the human expert.

Evaluator Playground
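To make the metric concrete: the alignment score is the fraction of labeled examples where the evaluator agrees with the human label, expressed as a percentage. A minimal illustration with made-up labels:

```python
# Human expert labels and evaluator scores for the same set of labeled examples
# (values are made up for illustration).
human_labels = [1, 1, 0, 1, 0, 0, 1, 1]
evaluator_scores = [1, 0, 0, 1, 0, 1, 1, 1]

matches = sum(h == e for h, e in zip(human_labels, evaluator_scores))
alignment_score = 100 * matches / len(human_labels)
print(f"Alignment: {alignment_score:.0f}%")  # Alignment: 75%
```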

4. Repeat to improve evaluator alignment​

Iterate by updating your prompt and testing again to improve evaluator alignment.

tip

Updates to your evaluator prompt are not saved by default. We recommend saving your evaluator prompt regularly, especially after you see your alignment score improve.

The evaluator playground will show the alignment score for the most recently saved version of your evaluator prompt for comparison when you're iterating on your prompt.

Improving the alignment score of your evaluator isn't an exact science, but there are a few strategies that tend to help.

Tips for improving evaluator alignment​

1. Investigate misaligned examples

Digging into misaligned examples and trying to group them into common failure modes is a great first step for improving your evaluator alignment.

Once you have identified the common failure modes, add instructions to your evaluator prompt so the LLM knows about them. For example, you could explain that "MFA" stands for "multi-factor authentication" if you notice the LLM misreading that acronym. Or you could state that "a good response will always contain at least 3 potential hotels to book" if it is confused about what good and bad mean in your evaluator's context.
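For example, a correctness evaluator prompt might accumulate clarifying instructions like these as failure modes are discovered (the wording below is illustrative, not a prescribed template):

```python
# Illustrative evaluator prompt with failure-mode instructions appended;
# adapt the criteria and variables to your own application.
EVALUATOR_PROMPT = """\
You are grading whether the assistant's response is correct.

Additional instructions learned from misaligned examples:
- "MFA" stands for "multi-factor authentication"; do not treat it as an unknown term.
- A good response always contains at least 3 potential hotels to book.

Question: {question}
Response: {response}

Return 1 if the response is correct, otherwise return 0.
"""
```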

2. Inspect the reasoning behind the LLM score

To understand why the LLM scored an example the way it did, you can enable reasoning for your LLM-as-a-judge evaluator. Reasoning reveals the LLM's thought process and can help you identify common failure modes to incorporate into your evaluator prompt.

In order to see the reasoning in the evaluator playground, hover over the LLM score.

Enable reasoning

This will show the reasoning behind the LLM's score in the evaluator playground.

3. Add more labeled examples and validate performance

To avoid overfitting to the labeled examples, it's important to add more labeled examples and test performance, especially if you started off with a small number of examples.
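One way to sanity-check for overfitting is to compare agreement on the examples you iterated against with agreement on newly labeled, held-out examples. The sketch below uses made-up labels purely to illustrate the idea; in practice the evaluator playground computes the alignment score for you:

```python
# (human_label, evaluator_score) pairs: the first group was used while iterating
# on the prompt, the second group is newly labeled and held out for validation.
tuned_on = [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1)]
held_out = [(1, 1), (0, 1), (1, 1), (0, 0), (1, 0), (0, 0)]

def alignment(pairs):
    return 100 * sum(h == e for h, e in pairs) / len(pairs)

print(f"Tuned-on examples: {alignment(tuned_on):.0f}%")  # 100%
print(f"Held-out examples: {alignment(held_out):.0f}%")  # 67%
# A large gap suggests the prompt is overfitting to the examples you iterated on.
```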

