LangSmith
Introduction
LangSmith is a platform for building production-grade LLM applications.
It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework and seamlessly integrates with LangChain, the go-to open source framework for building with LLMs.
LangSmith is developed by LangChain, the company behind the open source LangChain framework.
LangSmith is currently in private beta. If you'd like to be added to the waitlist, please sign up for an account here.
LangSmith is an early product in beta. If you run into any urgent issues or bugs, please reach out to us at support@langchain.dev.
We appreciate your patience.
Quick Start
If following along with code is more your thing, we've set up a Jupyter notebook at this link to help you get started with LangSmith.
LangChain
If you already use LangChain, you can connect to LangSmith in a few steps:
- Create a LangSmith account using one of the supported login methods.
- Create an API Key by navigating to the settings page.
- Install the latest version of LangChain for your target environment and programming language.
- Configure the runtime environment:
  - Replace "<your-api-key>" with the API key generated in step 2.
  - Replace "<your-openai-api-key>" with an OpenAI API key.
- Run the example code below.
```shell
# pip
pip install -U langchain

# yarn
yarn add langchain

# npm
npm install -S langchain

# pnpm
pnpm add langchain
```
Shell:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"

# The below examples use the OpenAI API, so you will need an OpenAI API key:
export OPENAI_API_KEY=<your-openai-api-key>
```
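If you are working in a notebook or another environment where exporting shell variables is inconvenient, the same configuration can be set from Python before any LangChain imports run. A minimal sketch using `os.environ` (the variable names match the shell exports above; the values are placeholders you must replace):

```python
import os

# Same variables as the shell exports; values here are placeholders.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "<your-project>"  # optional; defaults to "default"
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```

Set these before creating any models or chains so the tracer picks them up at import time.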
Python:

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
llm.invoke("Hello, world!")
```
TypeScript:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

const llm = new ChatOpenAI();
await llm.invoke("Hello, world!");
```
For environments where process.env is not defined, initialize by explicitly passing keys:
```typescript
import { Client } from "langsmith";
import { LangChainTracer } from "langchain/callbacks";
import { ChatOpenAI } from "langchain/chat_models/openai";

const client = new Client({
  apiUrl: "https://api.smith.langchain.com",
  apiKey: "YOUR_API_KEY",
});

const tracer = new LangChainTracer({
  projectName: "YOUR_PROJECT_NAME",
  client,
});

const model = new ChatOpenAI({
  openAIApiKey: "YOUR_OPENAI_API_KEY",
});

await model.invoke("Hello, world!", { callbacks: [tracer] });
```
Without LangChain
If you don't want to use LangChain in your LLM application, you can get started with LangSmith in just a few steps:
- Create a LangSmith account using one of the supported login methods.
- Create an API Key by navigating to the settings page.
- Install the LangSmith SDK for your target environment.
- Configure the runtime environment:
  - Replace "<your-api-key>" with the API key generated in step 2.
  - Replace "<your-openai-api-key>" with an OpenAI API key.
- Run the example code below. As long as the environment is correctly configured, the runs in the example will be traced to your project.
```shell
# pip
pip install -U langsmith

# yarn
yarn add langsmith

# npm
npm install -S langsmith

# pnpm
pnpm add langsmith
```
Shell:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"

# The below examples use the OpenAI API, so you will need an OpenAI API key:
export OPENAI_API_KEY=<your-openai-api-key>
```
Python:

```python
import datetime
from typing import Any

import openai
from openai.openai_object import OpenAIObject
from langsmith.run_helpers import traceable


@traceable(run_type="llm", name="openai.ChatCompletion.create")
def my_chat_model(*args: Any, **kwargs: Any) -> OpenAIObject:
    return openai.ChatCompletion.create(*args, **kwargs)


@traceable(run_type="tool")
def my_tool(tool_input: str) -> str:
    return tool_input.upper()


@traceable(run_type="chain")
def my_chain(prompt: str) -> str:
    messages = [
        {
            "role": "system",
            "content": "You are an AI Assistant. The time is "
            + str(datetime.datetime.now()),
        },
        {"role": "user", "content": prompt},
    ]
    return my_chat_model(model="gpt-3.5-turbo", messages=messages)


@traceable(run_type="chain")
def my_chat_bot(text: str) -> str:
    generated = my_chain(text)
    if "meeting" in generated:
        return my_tool(generated)
    else:
        return generated


my_chat_bot("Summarize this morning's meetings.")
# See an example run at: https://smith.langchain.com/public/3e853ad8-77ce-404d-ad4c-05726851ad0f/r
```
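Conceptually, `@traceable` wraps a function so that each call's name, inputs, and outputs are captured as a run. A dependency-free sketch of that idea, for intuition only (this is a hypothetical simplification, not the real SDK implementation, which also tracks parent/child relationships and posts runs to LangSmith):

```python
import functools

# Illustrative stand-in for a trace store; the real SDK posts runs to LangSmith.
RECORDED_RUNS = []


def traceable_sketch(run_type: str):
    """Hypothetical, simplified version of @traceable for illustration."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            run = {
                "name": fn.__name__,
                "run_type": run_type,
                "inputs": {"args": args, "kwargs": kwargs},
            }
            run["outputs"] = fn(*args, **kwargs)  # record the return value
            RECORDED_RUNS.append(run)
            return run["outputs"]
        return wrapper
    return decorator


@traceable_sketch("tool")
def shout(text: str) -> str:
    return text.upper()


shout("hello")  # returns "HELLO" and records one "tool" run
```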
Python (Run Tree):

```python
from langsmith.run_trees import RunTree

parent_run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    serialized={},
)

child_llm_run = parent_run.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={
        "prompts": [
            "You are an AI Assistant. Summarize this morning's meetings."
        ]
    },
)
child_llm_run.end(outputs={"generations": ["Summary of the meeting..."]})

parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})

res = parent_run.post(exclude_child_runs=False)
res.result()
```
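The Run Tree API above builds an explicit tree of runs: a parent chain run, child runs created under it, and `end()` calls that attach outputs before the whole tree is posted in one batch. A dependency-free sketch of that structure, for intuition only (a hypothetical simplification, not the real `RunTree` class):

```python
from dataclasses import dataclass, field


@dataclass
class RunSketch:
    """Hypothetical, simplified stand-in for RunTree, for illustration."""
    name: str
    run_type: str
    inputs: dict
    outputs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def create_child(self, name: str, run_type: str, inputs: dict) -> "RunSketch":
        child = RunSketch(name=name, run_type=run_type, inputs=inputs)
        self.children.append(child)
        return child

    def end(self, outputs: dict) -> None:
        self.outputs = outputs


parent = RunSketch(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
)
child = parent.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={"prompts": ["You are an AI Assistant. Summarize this morning's meetings."]},
)
child.end({"generations": ["Summary of the meeting..."]})
parent.end({"output": ["The meeting notes are as follows:..."]})
# parent now holds the whole nested run tree; the real SDK posts it as a batch.
```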
TypeScript:

```typescript
import { RunTree, RunTreeConfig } from "langsmith";

const parentRunConfig: RunTreeConfig = {
  name: "My Chat Bot",
  run_type: "chain",
  inputs: {
    text: "Summarize this morning's meetings.",
  },
  serialized: {},
};

const parentRun = new RunTree(parentRunConfig);

const childLlmRun = await parentRun.createChild({
  name: "My Proprietary LLM",
  run_type: "llm",
  inputs: {
    prompts: [
      "You are an AI Assistant. Summarize this morning's meetings.",
    ],
  },
});

await childLlmRun.end({
  outputs: {
    generations: [
      "Summary of the meeting...",
    ],
  },
});

await parentRun.end({
  outputs: {
    output: ["The meeting notes are as follows:..."],
  },
});

// false means post all nested runs as a batch
// (don't exclude child runs)
await parentRun.postRun(false);
```
Congratulations! Your first run is now visible in LangSmith. Navigate to the projects page to view your trace.
Next Steps
Read the LangSmith Overview to learn more about what LangSmith has to offer.
To learn how to best take advantage of the visualization and replay capabilities within LangSmith, check out the tracing documentation.
To start testing and evaluating your LLMs, chains, and agents, check out the testing & evaluation quickstart and related documentation.
Additional Resources
- LangSmith Cookbook: a collection of tutorials and end-to-end walkthroughs using LangSmith.
- LangChain Python Docs: documentation for the Python LangChain library.
- LangChain Python API Reference: documentation covering the core APIs of LangChain.
- LangChain JS Docs: documentation for the TypeScript LangChain library.
- Discord: join us on our Discord to discuss all things LangChain!
If you're interested in enterprise support and LangSmith access for larger teams, fill out this form to speak with sales.