Annotate code for tracing

There are several ways to log traces to LangSmith.

Use @traceable / traceable

LangSmith makes it easy to log traces with minimal changes to your existing code using the @traceable decorator in Python and the traceable function in TypeScript.

note

The LANGCHAIN_TRACING_V2 environment variable must be set to 'true' in order for traces to be logged to LangSmith, even when using @traceable or traceable. This allows you to toggle tracing on and off without changing your code.

Additionally, you will need to set the LANGCHAIN_API_KEY environment variable to your API key (see Setup for more information).

By default, the traces will be logged to a project named default. To log traces to a different project, see this section.
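
For example, you can set these variables from Python before any traced code runs (they are more commonly exported in your shell or deployment environment); the project name below is only illustrative:

import os

# Enable tracing and authenticate with LangSmith
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
# Optional: log traces to a project other than "default"
os.environ["LANGCHAIN_PROJECT"] = "my-project"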

The @traceable decorator is a straightforward way to log traces from the LangSmith Python SDK: simply decorate any function with @traceable.

from langsmith import traceable
from openai import Client

openai = Client()

@traceable
def format_prompt(subject):
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": f"What's a good name for a store that sells {subject}?"
        }
    ]

@traceable(run_type="llm")
def invoke_llm(messages):
    return openai.chat.completions.create(
        messages=messages, model="gpt-3.5-turbo", temperature=0
    )

@traceable
def parse_output(response):
    return response.choices[0].message.content

@traceable
def run_pipeline():
    messages = format_prompt("colorful socks")
    response = invoke_llm(messages)
    return parse_output(response)

run_pipeline()

Wrap the OpenAI client

The wrap_openai/wrapOpenAI methods in Python/TypeScript allow you to wrap your OpenAI client in order to automatically log traces -- no decorator or function wrapping required! The wrapper works seamlessly with the @traceable decorator or traceable function and you can use both in the same application.

Tool calls are automatically rendered

note

The LANGCHAIN_TRACING_V2 environment variable must be set to 'true' in order for traces to be logged to LangSmith, even when using wrap_openai or wrapOpenAI. This allows you to toggle tracing on and off without changing your code.

Additionally, you will need to set the LANGCHAIN_API_KEY environment variable to your API key (see Setup for more information).

By default, the traces will be logged to a project named default. To log traces to a different project, see this section.

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
        { "role": "user", "content": f"Question: {question}\nContext: {context}"}
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    return chat_completion.choices[0].message.content

chat_pipeline("Can you summarize this morning's meetings?")

Use the RunTree API

Another, more explicit way to log traces to LangSmith is via the RunTree API. This API gives you more control over your tracing: you can manually create runs and child runs to assemble your trace. You still need to set your LANGCHAIN_API_KEY, but LANGCHAIN_TRACING_V2 is not necessary for this method.

import openai
from langsmith.run_trees import RunTree

# This can be a user input to your app
question = "Can you summarize this morning's meetings?"

# Create a top-level run
pipeline = RunTree(
    name="Chat Pipeline",
    run_type="chain",
    inputs={"question": question}
)

# This can be retrieved in a retrieval step
context = "During this morning's meeting, we solved all world conflict."
messages = [
    { "role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context." },
    { "role": "user", "content": f"Question: {question}\nContext: {context}"}
]

# Create a child run
child_llm_run = pipeline.create_child(
    name="OpenAI Call",
    run_type="llm",
    inputs={"messages": messages},
)

# Generate a completion
client = openai.Client()
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages
)

# End the runs and log them
child_llm_run.end(outputs=chat_completion)
child_llm_run.post()
pipeline.end(outputs={"answer": chat_completion.choices[0].message.content})
pipeline.post()