How to customize attributes of traces
Oftentimes, you will want to customize various attributes of the traces you log to LangSmith.
Logging to a specific project
As mentioned in the Concepts section, LangSmith uses the concept of a Project to group traces. If left unspecified, the tracer project is set to `default`. You can set the `LANGCHAIN_PROJECT` environment variable to configure a custom project name for an entire application run. This should be done before executing your program.

```shell
export LANGCHAIN_PROJECT="My Project"
```
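If you prefer to configure this from within your program (for example, in a notebook where exporting a shell variable is inconvenient), a minimal sketch is to set the same environment variable in code before any traced calls run. This is an equivalent of the shell export above, not a separate LangSmith API:

```python
import os

# Must run before any traced code executes; tracers read
# LANGCHAIN_PROJECT from the environment when runs are created.
os.environ["LANGCHAIN_PROJECT"] = "My Project"
```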
Changing the destination project at runtime
When global environment variables are too broad, you can also set the project name at program runtime. This is useful when you want to log traces to different projects within the same application.
- Python
- TypeScript
- LangChain (Python)
- LangChain (JS)
```python
import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGCHAIN_TRACING_V2 environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
)
def call_openai(messages: list[dict], model: str = "gpt-3.5-turbo") -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

# You can specify the project via the project_name parameter
call_openai(
    messages,
    langsmith_extra={"project_name": "My Project"},
)

# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable decorated functions, and logs traces to LangSmith automatically
from langsmith import wrappers

wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    langsmith_extra={"project_name": "My Project"},
)

# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    project_name="My Project",
)
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()
```
```typescript
import OpenAI from "openai";
import { RunTree } from "langsmith";

const client = new OpenAI();

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
];

// Create a RunTree object
// You can set the project name using the project_name parameter
const rt = new RunTree({
  run_type: "llm",
  name: "OpenAI Call RunTree",
  inputs: { messages },
  project_name: "My Project",
});
await rt.postRun();

const chatCompletion = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: messages,
});

// End and submit the run
await rt.end(chatCompletion);
await rt.patchRun();
```
```python
# You can set the project name for a specific tracer instance:
from langchain.callbacks.tracers import LangChainTracer

tracer = LangChainTracer(project_name="My Project")
chain.invoke(
    {"query": "How many people live in canada as of 2023?"},
    config={"callbacks": [tracer]},
)

# LangChain Python also supports a context manager for tracing a specific block of code.
# You can set the project name using the project_name parameter.
from langchain_core.tracers.context import tracing_v2_enabled

with tracing_v2_enabled(project_name="My Project"):
    chain.invoke({"query": "How many people live in canada as of 2023?"})
```
```typescript
// You can set the project name for a specific tracer instance:
import { LangChainTracer } from "langchain/callbacks";

const tracer = new LangChainTracer({ projectName: "My Project" });
await chain.invoke(
  { query: "How many people live in canada as of 2023?" },
  { callbacks: [tracer] }
);
```