
Run pairwise evaluations


LangSmith supports evaluating existing experiments in a comparative manner. This allows you to use automatic evaluators (especially LLM-based evaluators) to score the outputs from multiple experiments against each other, rather than being confined to evaluating outputs one at a time. Think LMSYS Chatbot Arena - this is the same concept! To do this, use the evaluate_comparative / evaluateComparative function with two existing experiments.

If you haven't already created experiments to compare, check out our quick start or our how-to guide to get started with evaluations.

Use the evaluate_comparative function


Pairwise evaluations currently require langsmith SDK Python version >=0.1.55 or JS version >=0.1.24.

At its simplest, the evaluate_comparative / evaluateComparative function takes the following arguments:

  • experiments: A list of the two existing experiments you would like to evaluate against each other. These can be UUIDs or experiment names.
  • evaluators: A list of the pairwise evaluators that you would like to attach to this evaluation. See the section below for how to define these.

Along with these, you can also pass in the following optional args:

  • randomize_order / randomizeOrder: An optional boolean indicating whether the order of the outputs should be randomized for each evaluation. This is a strategy for minimizing positional bias in your prompt: often, the LLM will be biased towards one of the responses based on the order. This should mainly be addressed via prompt engineering, but randomization is another optional mitigation. Defaults to False.
  • experiment_prefix / experimentPrefix: A prefix to be attached to the beginning of the pairwise experiment name. Defaults to None.
  • description: A description of the pairwise experiment. Defaults to None.
  • max_concurrency / maxConcurrency: The maximum number of concurrent evaluations to run. Defaults to 5.
  • client: The LangSmith client to use. Defaults to None.
  • metadata: Metadata to attach to your pairwise experiment. Defaults to None.
  • load_nested / loadNested: Whether to load all child runs for the experiments. When False, only the root trace will be passed to your evaluator. Defaults to False.
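To make the shapes of these optional arguments concrete, here is a sketch that collects them into a plain dict; the experiment names, evaluator name, and metadata values are placeholders, and the call itself is shown commented out since it requires existing experiments and API credentials.

```python
# Sketch of the optional keyword arguments to evaluate_comparative.
# All values below are illustrative placeholders.
optional_args = {
    "randomize_order": True,        # shuffle output order to reduce positional bias
    "experiment_prefix": "pairwise",
    "description": "Compare prompt v1 vs prompt v2",
    "max_concurrency": 5,
    "metadata": {"owner": "eval-team"},
    "load_nested": False,           # pass only the root trace to the evaluator
}

# evaluate_comparative(
#     ["my-experiment-name-1", "my-experiment-name-2"],
#     evaluators=[my_pairwise_evaluator],
#     **optional_args,
# )
```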

Configure inputs and outputs for pairwise evaluators

Inputs: A list of Runs and a single Example. This is exactly the same as a normal evaluator, except with a list of Runs instead of a single Run. The list of runs will have a length of two. You can access the inputs and outputs with runs[0].inputs, runs[0].outputs, runs[1].inputs, runs[1].outputs, example.inputs, and example.outputs.

Output: Your evaluator should return a dictionary with two keys:

  • key, which represents the feedback key that will be logged
  • scores, which is a mapping from run ID to score for that run. We strongly encourage using 0 and 1 as the score values, where 1 is better. You may also set both to 0 to represent "both equally bad" or both to 1 for "both equally good".

Note that you should choose a feedback key that is distinct from standard feedback keys on your runs. We recommend prefixing pairwise feedback keys with pairwise_ or ranked_.
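As a minimal illustration of this output shape, the toy evaluator below prefers the shorter of two outputs (a heuristic chosen purely for the example; length_preference and the pairwise_conciseness key are hypothetical names, and plain strings plus fresh UUIDs stand in for real Run objects):

```python
import uuid

def length_preference(output_a: str, output_b: str, id_a, id_b):
    """Toy pairwise scorer: prefer the shorter answer; a tie scores both as 1."""
    if len(output_a) < len(output_b):
        scores = {id_a: 1, id_b: 0}
    elif len(output_b) < len(output_a):
        scores = {id_a: 0, id_b: 1}
    else:
        scores = {id_a: 1, id_b: 1}  # both equally good
    # Prefix the key so it stays distinct from standard feedback keys
    return {"key": "pairwise_conciseness", "scores": scores}

id_a, id_b = uuid.uuid4(), uuid.uuid4()
result = length_preference("Short answer.", "A much longer answer.", id_a, id_b)
```

A real pairwise evaluator would compute scores from runs[0].outputs and runs[1].outputs and key them by runs[0].id and runs[1].id, but the returned dictionary has exactly this structure.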

Compare two experiments with LLM-based pairwise evaluators

The following example uses a prompt which asks the LLM to decide which is better between two AI assistant responses. It uses structured output to parse the AI's response: 0, 1, or 2.

Optional LangChain Usage

In the Python example below, we pull this structured prompt from the LangChain Hub and use it with a LangChain LLM wrapper.

Usage of LangChain is totally optional. To illustrate this point, the TypeScript example below uses the OpenAI API directly.

from langsmith.evaluation import evaluate_comparative
from langchain import hub
from langchain_openai import ChatOpenAI
from langsmith.schemas import Run, Example

prompt = hub.pull("langchain-ai/pairwise-evaluation-2")

def evaluate_pairwise(runs: list[Run], example: Example):
    scores = {}

    # Create the model to run your evaluator
    model = ChatOpenAI(model_name="gpt-4")

    runnable = prompt | model
    response = runnable.invoke({
        "question": example.inputs["question"],
        "answer_a": runs[0].outputs["output"] if runs[0].outputs is not None else "N/A",
        "answer_b": runs[1].outputs["output"] if runs[1].outputs is not None else "N/A",
    })
    score = response["Preference"]
    if score == 1:
        scores[runs[0].id] = 1
        scores[runs[1].id] = 0
    elif score == 2:
        scores[runs[0].id] = 0
        scores[runs[1].id] = 1
    else:
        scores[runs[0].id] = 0
        scores[runs[1].id] = 0
    return {"key": "ranked_preference", "scores": scores}

evaluate_comparative(
    # Replace the following array with the names or IDs of your experiments
    ["my-experiment-name-1", "my-experiment-name-2"],
    evaluators=[evaluate_pairwise],
)

View pairwise experiments

Navigate to the "Pairwise Experiments" tab from the dataset page:

Pairwise Experiments Tab

Click on a pairwise experiment that you would like to inspect, and you will be brought to the Comparison View:

Pairwise Comparison View

You may filter to runs where the first experiment was better or vice versa by clicking the thumbs up/thumbs down buttons in the table header:

Pairwise Filtering
