Tracing Quick Start
You can get started with LangSmith tracing using the Python SDK, the TypeScript SDK, LangChain, or the API. The following sections provide a quick start for each of these options.
First, create an API key by navigating to the settings page, then follow the instructions below:
- Python SDK
- TypeScript SDK
- LangChain
- API
1. Install the LangSmith library
Start by installing the Python library.
- Shell
pip install langsmith
2. Configure your environment
- Shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
# The examples below use the OpenAI API, though it's not required in general
export OPENAI_API_KEY=<your-openai-api-key>
3. Log a trace
We provide multiple ways to log traces to LangSmith. Below, we'll highlight how to use our simple @traceable decorator. See more in the Integrations section.
import openai
from langsmith.wrappers import wrap_openai
from langsmith import traceable

# Auto-trace LLM calls in-context
client = wrap_openai(openai.Client())

@traceable  # Auto-trace this function
def pipeline(user_input: str):
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-4o-mini"
    )
    return result.choices[0].message.content

pipeline("Hello, world!")
# Out: Hello there! How can I assist you today?
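Conceptually, a tracing decorator like @traceable records a function's inputs, outputs, and timing, and ships that run record to LangSmith. Here is a minimal stdlib-only sketch of the idea, not the real implementation (the actual decorator also handles nesting, errors, and async):

```python
import functools
import time

def trace_sketch(fn):
    """Toy stand-in for @traceable: record inputs, outputs, and timing."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        run = {
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "start_time": time.time(),
        }
        run["outputs"] = fn(*args, **kwargs)
        run["end_time"] = time.time()
        # A real tracer would POST `run` to the LangSmith API here;
        # this sketch just prints it.
        print(run["name"], run["outputs"])
        return run["outputs"]
    return wrapper

@trace_sketch
def shout(text: str) -> str:
    return text.upper()

shout("hello")  # prints: shout HELLO
```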
4. View the trace
By default, the trace will be logged to the project with the name default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code is public and can be viewed here.
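If you would rather set the destination project from code than from your shell profile, you can set the LANGCHAIN_PROJECT environment variable before any traced code runs. A small sketch (the project name here is a hypothetical example):

```python
import os

# Route traces to a named project instead of "default".
# This must run before any traced calls are made.
os.environ["LANGCHAIN_PROJECT"] = "my-quickstart-project"

print(os.environ["LANGCHAIN_PROJECT"])
```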
1. Install the LangSmith library
Start by installing the TypeScript library.
- npm: npm install langsmith
- yarn: yarn add langsmith
- pnpm: pnpm add langsmith
- bun: bun add langsmith
2. Configure your environment
- Shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
# The examples below use the OpenAI API, though it's not required in general
export OPENAI_API_KEY=<your-openai-api-key>
3. Log a Trace
We provide multiple ways to log traces to LangSmith. Below, we'll highlight how to use our simple traceable higher-order function (HOF). See more in the Integrations section.
import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

// Auto-trace LLM calls in-context
const client = wrapOpenAI(new OpenAI());

// Auto-trace this function
const pipeline = traceable(async (user_input) => {
  const result = await client.chat.completions.create({
    messages: [{ role: "user", content: user_input }],
    model: "gpt-4o-mini",
  });
  return result.choices[0].message.content;
});

await pipeline("Hello, world!");
// Out: Hello there! How can I assist you today?
4. View the trace
By default, the trace will be logged to the project with the name default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code is public and can be viewed here.
1. Install or upgrade LangChain
- pip: pip install langchain_openai langchain_core
- yarn: yarn add @langchain/openai @langchain/core
- npm: npm install @langchain/openai @langchain/core
- pnpm: pnpm add @langchain/openai @langchain/core
2. Configure your environment
- Shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
# The examples below use the OpenAI API, though it's not required in general
export OPENAI_API_KEY=<your-openai-api-key>
3. Log a trace
No extra code is needed to log a trace to LangSmith. Just run your LangChain code as you normally would.
- Python
- TypeScript
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Please respond to the user's request only based on the given context."),
    ("user", "Question: {question}\nContext: {context}")
])
model = ChatOpenAI(model="gpt-4o-mini")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

question = "Can you summarize this morning's meetings?"
context = "During this morning's meeting, we solved all world conflict."
chain.invoke({"question": question, "context": context})
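The prompt | model | output_parser pipeline works because LangChain runnables overload the | operator to compose steps. A rough stdlib-only sketch of that composition pattern (the Step class is a hypothetical stand-in, not LangChain's actual Runnable):

```python
class Step:
    """Toy stand-in for a LangChain runnable: a callable that composes with |."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # left | right runs left first, then feeds its output to right
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

fill = Step(lambda d: f"Question: {d['question']}")
upper = Step(lambda s: s.upper())

chain = fill | upper
chain.invoke({"question": "hi"})  # returns "QUESTION: HI"
```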
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Please respond to the user's request only based on the given context."],
  ["user", "Question: {question}\nContext: {context}"],
]);
const model = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const question = "Can you summarize this morning's meetings?";
const context = "During this morning's meeting, we solved all world conflict.";
await chain.invoke({ question: question, context: context });
4. View the trace
By default, the trace will be logged to the project with the name default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code is public and can be viewed here.
1. Log a trace
Log a trace using the LangSmith API. Here, we'll show you how to use the requests library in Python to log a trace, but you can use any HTTP client in any language.
import openai
import requests
from datetime import datetime
from uuid import uuid4

def post_run(run_id, name, run_type, inputs, parent_id=None):
    """Function to post a new run to the API."""
    data = {
        "id": run_id.hex,
        "name": name,
        "run_type": run_type,
        "inputs": inputs,
        "start_time": datetime.utcnow().isoformat(),
    }
    if parent_id:
        data["parent_run_id"] = parent_id.hex
    requests.post(
        "https://api.smith.langchain.com/runs",
        json=data,
        headers=headers
    )

def patch_run(run_id, outputs):
    """Function to patch a run with outputs."""
    requests.patch(
        f"https://api.smith.langchain.com/runs/{run_id}",
        json={
            "outputs": outputs,
            "end_time": datetime.utcnow().isoformat(),
        },
        headers=headers,
    )

# Send your API key in the request headers
headers = {"x-api-key": "<YOUR API KEY>"}

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."

messages = [
    {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
    {"role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create parent run
parent_run_id = uuid4()
post_run(parent_run_id, "Chat Pipeline", "chain", {"question": question})

# Create child run
child_run_id = uuid4()
post_run(child_run_id, "OpenAI Call", "llm", {"messages": messages}, parent_run_id)

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

# End runs
patch_run(child_run_id, chat_completion.dict())
patch_run(parent_run_id, {"answer": chat_completion.choices[0].message.content})
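The snippet above fires requests without checking their responses. When debugging, it can help to separate payload construction from posting so each piece can be inspected; a small sketch under that assumption (build_run_payload is a hypothetical helper, not part of any SDK):

```python
from datetime import datetime, timezone
from uuid import uuid4

def build_run_payload(run_id, name, run_type, inputs, parent_id=None):
    """Build the JSON body for POST /runs without sending it."""
    payload = {
        "id": run_id.hex,
        "name": name,
        "run_type": run_type,
        "inputs": inputs,
        "start_time": datetime.now(timezone.utc).isoformat(),
    }
    if parent_id is not None:
        payload["parent_run_id"] = parent_id.hex
    return payload

parent_id = uuid4()
parent = build_run_payload(parent_id, "Chat Pipeline", "chain", {"question": "hi"})
child = build_run_payload(uuid4(), "OpenAI Call", "llm", {"messages": []}, parent_id)

# When actually posting, surface 4xx/5xx errors instead of failing silently:
# resp = requests.post("https://api.smith.langchain.com/runs", json=parent, headers=headers)
# resp.raise_for_status()
```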
2. View the trace
By default, the trace will be logged to the project with the name default. You can change the project you log to by following the instructions here. An example of a trace logged using the above code is public and can be viewed here.