
RAG evaluation with RAGAS

Ragas is a popular framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.

This notebook shows how you can integrate their excellent RAG metrics into LangSmith to evaluate your RAG app.

For this example, we will grade a simple RAG application based on the following metrics. Most metrics use an LLM as a judge in some capacity:

Labeled generator metrics:

  • answer_correctness - Is the response correct, based on the ground_truth response.

Reference-free generator metrics:

  • faithfulness - Proportion of claims in the response that are grounded in the retrieved context (entailment-based).

Reference-free retriever metrics:

  • context_relevancy - Proportion of retrieved sentences that are "relevant" to the user question.

Labeled retriever metrics:

  • context_recall - Proportion of the ground truth answer that can be attributed to the docs.
  • context_precision - Are the relevant docs (according to the ground truth) ranked near the top? Each retrieved document is scored as useful or not for deducing the ground truth, precision @ K is computed at each useful position K, and the values are averaged over the number of useful docs (see the numeric sketch after this list).
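
To make the context_precision computation concrete, here is a small numeric sketch of the underlying "mean precision @ K" calculation. This is only an illustration of the idea, not RAGAS's implementation.

from typing import List


def mean_precision_at_k(verdicts: List[int]) -> float:
    # `verdicts` holds a 0/1 usefulness verdict for each retrieved doc, in rank order.
    # Take precision@K at every position K where the doc was useful,
    # then average over the number of useful docs.
    hits, precisions = 0, []
    for k, useful in enumerate(verdicts, start=1):
        if useful:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(precisions), 1)


# Useful docs ranked 1st and 3rd out of four retrieved docs:
mean_precision_at_k([1, 0, 1, 0])  # (1/1 + 2/3) / 2 ≈ 0.83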

In reality, you likely won't need to apply all of these metrics at the same time, but each metric can shed a bit of light on the different aspects of your retriever and generator setup.

Prerequisites

Install recent versions of the required dependencies, and configure your environment with your LangSmith and OpenAI API keys.

%%capture --no-stderr
%pip install -U langsmith ragas numpy openai
import getpass
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LANGCHAIN_API_KEY")
os.environ["OPENAI_API_KEY"] = getpass.getpass("OPENAI_API_KEY")

Dataset

Clone the BaseCamp Q&A dataset to your organization to get started. The dataset was created by synthetically generating questions over documents scraped from the 37signals employee handbook.

import langsmith

client = langsmith.Client()
dataset_url = (
    "https://smith.langchain.com/public/56fe54cd-b7d7-4d3b-aaa0-88d7a2d30931/d"
)
dataset_name = "BaseCamp Q&A"
client.clone_public_dataset(dataset_url)

Define your pipeline

First, download the source docs. We've saved the raw markdown files in a zipfile to make this easier.

import io
import os
import zipfile

import requests

# Fetch the source documents
url = "https://storage.googleapis.com/benchmarks-artifacts/basecamp-data/basecamp-data.zip"

response = requests.get(url)

with io.BytesIO(response.content) as zipped_file:
    with zipfile.ZipFile(zipped_file, "r") as zip_ref:
        zip_ref.extractall()

data_dir = os.path.join(os.getcwd(), "data")
docs = []
for filename in os.listdir(data_dir):
    if filename.endswith(".md"):
        with open(os.path.join(data_dir, filename), "r") as file:
            docs.append({"file": filename, "content": file.read()})

Next, create the retriever. For our purposes, a simple in-memory vectorstore retriever will suffice.

from typing import List

import numpy as np
import openai
from langsmith import traceable


class VectorStoreRetriever:
    def __init__(self, docs: list, vectors: list, oai_client):
        self._arr = np.array(vectors)
        self._docs = docs
        self._client = oai_client

    @classmethod
    async def from_docs(cls, docs, oai_client):
        embeddings = await oai_client.embeddings.create(
            model="text-embedding-3-small", input=[doc["content"] for doc in docs]
        )
        vectors = [emb.embedding for emb in embeddings.data]
        return cls(docs, vectors, oai_client)

    @traceable
    async def query(self, query: str, k: int = 5) -> List[dict]:
        embed = await self._client.embeddings.create(
            model="text-embedding-3-small", input=[query]
        )
        # "@" is just a matrix multiplication in python
        scores = np.array(embed.data[0].embedding) @ self._arr.T
        top_k_idx = np.argpartition(scores, -k)[-k:]
        top_k_idx_sorted = top_k_idx[np.argsort(-scores[top_k_idx])]
        return [
            {**self._docs[idx], "similarity": scores[idx]} for idx in top_k_idx_sorted
        ]
from langsmith import traceable
from langsmith.wrappers import wrap_openai


class NaiveRagBot:
    def __init__(self, retriever, model: str = "gpt-4-turbo-preview"):
        self._retriever = retriever
        # Wrapping the client instruments the LLM
        # and is completely optional
        self._client = wrap_openai(openai.AsyncClient())
        self._model = model

    @traceable
    async def get_answer(self, question: str):
        similar = await self._retriever.query(question)
        response = await self._client.chat.completions.create(
            model=self._model,
            messages=[
                {
                    "role": "system",
                    "content": "You are a helpful AI assistant."
                    " Use the following docs to help answer the user's question.\n\n"
                    f"## Docs\n\n{similar}",
                },
                {"role": "user", "content": question},
            ],
        )

        # The RAGAS evaluators expect the "answer" and "contexts"
        # keys to work properly. If your pipeline does not return these values,
        # you should wrap it in a function that provides them.
        return {
            "answer": response.choices[0].message.content,
            "contexts": [str(doc) for doc in similar],
        }
retriever = await VectorStoreRetriever.from_docs(docs, openai.AsyncClient())
rag_bot = NaiveRagBot(retriever)
response = await rag_bot.get_answer("How much time off do we get?")
response["answer"][:150]
'According to the provided documents, at 37signals, employees are entitled to various forms of time off, including:\n\n1. **Paid Time Off (Vacation Time)'
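
The evaluators below look for "answer" and "contexts" keys in the pipeline's output, as noted in the comment inside get_answer. If your existing chain returns different keys, a thin adapter is enough. The sketch below is hypothetical: my_existing_chain and its output keys are assumptions standing in for whatever your pipeline already returns.

from langsmith import traceable


# Hypothetical adapter: suppose your pipeline returns {"output": <text>, "docs": [...]}.
# Re-map those fields to the "answer" and "contexts" keys the RAGAS evaluators expect.
@traceable
async def rag_pipeline(question: str) -> dict:
    result = await my_existing_chain(question)  # assumed to exist in your codebase
    return {
        "answer": result["output"],
        "contexts": [str(doc) for doc in result["docs"]],
    }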

Evaluate

Ragas provides a number of different metrics that you can use to evaluate each component of your RAG pipeline. You can see the entire list in the Ragas docs; we will select a few useful ones below.

To use a RAGAS metric in LangSmith, wrap it in an EvaluatorChain.

from langchain.smith import RunEvalConfig
from ragas.integrations.langchain import EvaluatorChain
from ragas.metrics import (
    answer_correctness,
    answer_relevancy,
    context_precision,
    context_recall,
    context_relevancy,
    faithfulness,
)

# Wrap the RAGAS metrics to use in LangChain
evaluators = [
    EvaluatorChain(metric)
    for metric in [
        answer_correctness,
        answer_relevancy,
        context_precision,
        context_recall,
        faithfulness,
    ]
]
eval_config = RunEvalConfig(custom_evaluators=evaluators)
results = await client.arun_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=rag_bot.get_answer,
    evaluation=eval_config,
)
View the evaluation results for project 'back-bibliography-2' at:
https://smith.langchain.com/o/ebbaf2eb-769b-4505-aca2-d11de10372a4/datasets/e468392e-e369-4066-99d4-dd05e186e992/compare?selectedSessions=bf005eaa-498d-47f7-a752-4bd260000c23

View all tests for Dataset BaseCamp Q&A at:
https://smith.langchain.com/o/ebbaf2eb-769b-4505-aca2-d11de10372a4/datasets/e468392e-e369-4066-99d4-dd05e186e992
[------------------------------------------------->] 21/21

/Users/wfh/code/lc/langchain/libs/core/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(

Invalid JSON response. Expected dictionary with key 'question'
Invalid response format. Expected a list of dictionaries with keys 'verdict'
Invalid JSON response. Expected dictionary with key 'Attributed'

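The "Invalid JSON response" warnings come from the RAGAS judge prompts occasionally returning output the metric cannot parse; the affected examples may simply be left without a score for that metric. Besides browsing the links above, you can pull the logged feedback back down with the LangSmith client and average it per metric. The snippet below is a rough sketch rather than part of the original walkthrough; substitute the project name printed for your own run.

from collections import defaultdict
from statistics import mean

project_name = "back-bibliography-2"  # replace with the project name printed above

run_ids = [run.id for run in client.list_runs(project_name=project_name)]
scores = defaultdict(list)
for fb in client.list_feedback(run_ids=run_ids):
    if fb.score is not None:
        scores[fb.key].append(fb.score)

for metric, values in sorted(scores.items()):
    print(f"{metric}: {mean(values):.3f}")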