Annotate code for tracing
If you've decided you no longer want to trace your runs, you can remove the `LANGSMITH_TRACING` environment variable. Note that this does not affect the `RunTree` objects or API users, as these are meant to be low-level and not affected by the tracing toggle.
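For context, a run created directly through the low-level `RunTree` API is posted explicitly rather than gated on the tracing toggle. A minimal sketch, assuming the `RunTree` class exported by the Python SDK; the run name and values are illustrative:

```python
from langsmith import RunTree

# Minimal sketch: RunTree is the low-level API, so runs are posted
# explicitly rather than gated on LANGSMITH_TRACING.
run = RunTree(
    name="My Chain",                  # illustrative name
    run_type="chain",
    inputs={"question": "hello"},     # illustrative input
)
run.post()                            # create the run in LangSmith
run.end(outputs={"answer": "hi"})     # record outputs and end time
run.patch()                           # push the update
```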
There are several ways to log traces to LangSmith.
If you are using LangChain (either Python or JS/TS), you can skip this section and go directly to the LangChain-specific instructions.
Use `@traceable` / `traceable`
LangSmith makes it easy to log traces with minimal changes to your existing code, using the `@traceable` decorator in Python and the `traceable` function in TypeScript.
The `LANGSMITH_TRACING` environment variable must be set to `'true'` for traces to be logged to LangSmith, even when using `@traceable` or `traceable`. This allows you to toggle tracing on and off without changing your code.
Additionally, you will need to set the `LANGSMITH_API_KEY` environment variable to your API key (see Setup for more information).
By default, traces are logged to a project named `default`. To log traces to a different project, see this section.
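For example, if you are working in a notebook or script and prefer not to export shell variables, you can set both variables from Python before running any traced code. A minimal sketch; the key value is a placeholder:

```python
import os

# Set these before running any traced code; the API key is a placeholder.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"
```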
- Python
- TypeScript
The `@traceable` decorator is a simple way to log traces from the LangSmith Python SDK. Simply decorate any function with `@traceable`.
from langsmith import traceable
from openai import Client

openai = Client()

@traceable
def format_prompt(subject):
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": f"What's a good name for a store that sells {subject}?"
        }
    ]

@traceable(run_type="llm")
def invoke_llm(messages):
    return openai.chat.completions.create(
        messages=messages, model="gpt-4o-mini", temperature=0
    )

@traceable
def parse_output(response):
    return response.choices[0].message.content

@traceable
def run_pipeline():
    messages = format_prompt("colorful socks")
    response = invoke_llm(messages)
    return parse_output(response)

run_pipeline()
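Beyond `run_type`, the decorator also accepts arguments such as `name`, `tags`, and `metadata` if you want more control over how a run appears in LangSmith. A minimal sketch; the values shown are illustrative:

```python
from langsmith import traceable

@traceable(
    run_type="chain",
    name="Generate Store Name Prompt",   # display name for the run
    tags=["demo"],                        # illustrative tag
    metadata={"version": "1"},            # illustrative metadata
)
def generate_prompt(subject: str) -> str:
    return f"What's a good name for a store that sells {subject}?"
```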
The `traceable` function is a simple way to log traces from the LangSmith TypeScript SDK. Simply wrap any function with `traceable`.
Note that when wrapping a sync function with `traceable` (e.g. `formatPrompt` in the example below), you should use the `await` keyword when calling it to ensure the trace is logged correctly.
import { traceable } from "langsmith/traceable";
import OpenAI from "openai";

const openai = new OpenAI();

const formatPrompt = traceable(
  (subject: string) => {
    return [
      {
        role: "system" as const,
        content: "You are a helpful assistant.",
      },
      {
        role: "user" as const,
        content: `What's a good name for a store that sells ${subject}?`,
      },
    ];
  },
  { name: "formatPrompt" }
);

const invokeLLM = traceable(
  async ({ messages }: { messages: { role: string; content: string }[] }) => {
    return openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: messages,
      temperature: 0,
    });
  },
  { run_type: "llm", name: "invokeLLM" }
);

const parseOutput = traceable(
  (response: any) => {
    return response.choices[0].message.content;
  },
  { name: "parseOutput" }
);

const runPipeline = traceable(
  async () => {
    const messages = await formatPrompt("colorful socks");
    const response = await invokeLLM({ messages });
    return parseOutput(response);
  },
  { name: "runPipeline" }
);

await runPipeline();
Use the `trace` context manager (Python only)
In Python, you can use the `trace` context manager to log traces to LangSmith. This is useful in situations where:
- You want to log traces for a specific block of code.
- You want control over the inputs, outputs, and other attributes of the trace.
- It is not feasible to use a decorator or wrapper.
- Any or all of the above.
The context manager integrates seamlessly with the `traceable` decorator and the `wrap_openai` wrapper, so you can use them together in the same application.
import openai
import langsmith as ls
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@ls.traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

app_inputs = {"input": "Can you summarize this morning's meetings?"}

with ls.trace("Chat Pipeline", "chain", project_name="my_test", inputs=app_inputs) as rt:
    output = chat_pipeline("Can you summarize this morning's meetings?")
    rt.end(outputs={"output": output})
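To control other attributes of the run, `trace` also accepts optional arguments such as `metadata` and `tags`. A minimal sketch, reusing the `chat_pipeline` defined above; the tag and metadata values are illustrative:

```python
import langsmith as ls

question = "Can you summarize this morning's meetings?"

with ls.trace(
    "Chat Pipeline",
    "chain",
    inputs={"input": question},
    tags=["experiment"],               # illustrative tag
    metadata={"variant": "baseline"},  # illustrative metadata
) as rt:
    output = chat_pipeline(question)
    rt.end(outputs={"output": output})
```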
Wrap the OpenAI client
The `wrap_openai`/`wrapOpenAI` methods in Python/TypeScript allow you to wrap your OpenAI client in order to automatically log traces -- no decorator or function wrapping required! Using the wrapper ensures that messages, including tool calls and multimodal content blocks, will be rendered nicely in LangSmith.
Also note that the wrapper works seamlessly with the `@traceable` decorator or `traceable` function, and you can use both in the same application.
The `LANGSMITH_TRACING` environment variable must be set to `'true'` for traces to be logged to LangSmith, even when using `wrap_openai` or `wrapOpenAI`. This allows you to toggle tracing on and off without changing your code.
Additionally, you will need to set the `LANGSMITH_API_KEY` environment variable to your API key (see Setup for more information).
By default, traces are logged to a project named `default`. To log traces to a different project, see this section.
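One common way to route traces to a different project is the `LANGSMITH_PROJECT` environment variable, which applies to all traces logged from the process. A minimal sketch; the project name is a placeholder:

```python
import os

# Send traces to a named project instead of "default"; the name is a placeholder.
os.environ["LANGSMITH_PROJECT"] = "my-project"
```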
- Python
- TypeScript
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

chat_pipeline("Can you summarize this morning's meetings?")
import OpenAI from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

const client = wrapOpenAI(new OpenAI());

const myTool = traceable(async (question: string) => {
  return "During this morning's meeting, we solved all world conflict.";
}, { name: "Retrieve Context", run_type: "tool" });

const chatPipeline = traceable(async (question: string) => {
  const context = await myTool(question);
  const messages = [
    {
      role: "system",
      content:
        "You are a helpful assistant. Please respond to the user's request only based on the given context.",
    },
    { role: "user", content: `Question: ${question} Context: ${context}` },
  ];
  const chatCompletion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: messages,
  });
  return chatCompletion.choices[0].message.content;
}, { name: "Chat Pipeline" });

await chatPipeline("Can you summarize this morning's meetings?");