
How to customize attributes of traces

Oftentimes, you will want to customize various attributes of the traces you log to LangSmith.

Logging to a specific project

As mentioned in the Concepts section, LangSmith uses the concept of a Project to group traces. If left unspecified, the project is set to "default". You can set the LANGCHAIN_PROJECT environment variable to configure a custom project name for an entire application run. This should be done before executing your program.

export LANGCHAIN_PROJECT="My Project"
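
You can also set it from within your program, as long as it happens before any traced code runs. A minimal sketch in Python (the project name here is just an example):

import os

# Must be set before any runs are created in this process
os.environ["LANGCHAIN_PROJECT"] = "My Project"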

Changing the destination project at runtime

When global environment variables are too broad, you can also set the project name at program runtime. This is useful when you want to log traces to different projects within the same application.

import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Use the @traceable decorator with the 'project_name' parameter to log traces to LangSmith
# Ensure that the LANGCHAIN_TRACING_V2 environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
)
def call_openai(
    messages: list[dict], model: str = "gpt-3.5-turbo"
) -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

# You can specify the project via the project_name key of langsmith_extra
call_openai(
    messages,
    langsmith_extra={"project_name": "My Project"},
)

# The wrapped OpenAI client accepts all the same langsmith_extra parameters
# as @traceable-decorated functions, and logs traces to LangSmith automatically
from langsmith import wrappers

wrapped_client = wrappers.wrap_openai(client)
wrapped_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    langsmith_extra={"project_name": "My Project"},
)


# Alternatively, create a RunTree object
# You can set the project name using the project_name parameter
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    project_name="My Project",
)
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()

Adding metadata and tags to traces

LangSmith supports sending arbitrary metadata and tags along with traces. This is useful for associating additional information with a trace, such as the environment in which it was executed, or the user who initiated it. For more information on metadata and tags, see the Concepts page. For information on how to query traces and runs by metadata and tags, see the Querying Traces page.

import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Use the @traceable decorator with tags and metadata
# Ensure that the LANGCHAIN_TRACING_V2 environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
    tags=["my-tag"],
    metadata={"my-key": "my-value"},
)
def call_openai(
    messages: list[dict], model: str = "gpt-3.5-turbo"
) -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

call_openai(
    messages,
    # You can provide tags and metadata at invocation time
    # via the langsmith_extra parameter
    langsmith_extra={"tags": ["my-other-tag"], "metadata": {"my-other-key": "my-value"}},
)

# Alternatively, you can create a RunTree object with tags and metadata
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
    tags=["my-tag"],
    extra={"metadata": {"my-key": "my-value"}},
)
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()

Customizing the run name

When you create a run, you can specify a name for it. The name identifies the run in LangSmith, can be used to filter and group runs, and appears as the run's title in the LangSmith UI.

import openai
from langsmith import traceable
from langsmith.run_trees import RunTree

client = openai.Client()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Use the @traceable decorator with the name parameter
# Ensure that the LANGCHAIN_TRACING_V2 environment variable is set for @traceable to work
@traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
)
def call_openai(
    messages: list[dict], model: str = "gpt-3.5-turbo"
) -> str:
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

call_openai(messages)

# Alternatively, use the name parameter of the RunTree object
rt = RunTree(
    run_type="llm",
    name="OpenAI Call RunTree",
    inputs={"messages": messages},
)
chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
# End and submit the run
rt.end(outputs=chat_completion)
rt.post()

For more examples with LangChain, check out the recipe on customizing run names.
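
For instance, with LangChain's Python API you can set the run_name key of a runnable's config (a minimal sketch, assuming the langchain-openai package is installed):

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("user", "{question}")])
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

# The run_name config key sets the name of the top-level run in LangSmith
chain.invoke(
    {"question": "Hello!"},
    config={"run_name": "My Named Chain"},
)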

Updating a run

The following fields can be updated when patching a run with the SDK or API; see the sketch after this list.

  • end_time: datetime.datetime
  • error: str | None
  • outputs: dict | None
  • events: list[dict] | None
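
For example, with the Python SDK you can patch a run via Client.update_run. A minimal sketch (the run_id value is a placeholder; in practice you would capture it when the run is created, e.g. from RunTree.id):

import datetime

from langsmith import Client

client = Client()

run_id = "..."  # placeholder: use the ID of an existing run

client.update_run(
    run_id,
    outputs={"answer": "Hello there!"},
    end_time=datetime.datetime.now(datetime.timezone.utc),
)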

Masking inputs and outputs

In some situations, you may need to hide the inputs and outputs of your traces for privacy or security reasons. LangSmith provides a way to filter the inputs and outputs of your traces before they are sent to the LangSmith backend, so our servers never see the original values.

If you want to completely hide the inputs and outputs of your traces, you can set the following environment variables when running your application:

LANGCHAIN_HIDE_INPUTS=true
LANGCHAIN_HIDE_OUTPUTS=true

This works for both the LangSmith SDK and LangChain.

You can also customize and override this behavior for a given Client instance. This can be done by setting the hide_inputs and hide_outputs parameters on the Client object.

import openai
from langsmith import Client
from langsmith.wrappers import wrap_openai

openai_client = wrap_openai(openai.Client())
langsmith_client = Client(
    hide_inputs=lambda inputs: {}, hide_outputs=lambda outputs: {}
)

# The linked run will keep its metadata, but its inputs and outputs will be hidden
openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    langsmith_extra={"client": langsmith_client},
)

# This call uses the default client, so its inputs and outputs are not hidden
openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
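
Because hide_inputs and hide_outputs accept any callable mapping a dict to a dict, you can also redact selectively instead of dropping everything. A minimal sketch (the redact_messages helper and the fields it scrubs are illustrative, not part of the SDK):

from langsmith import Client

def redact_messages(data: dict) -> dict:
    # Illustrative: keep the message structure and roles, mask the content
    masked = dict(data)
    if "messages" in masked:
        masked["messages"] = [
            {**m, "content": "[REDACTED]"} for m in masked["messages"]
        ]
    return masked

langsmith_client = Client(
    hide_inputs=redact_messages,
    hide_outputs=lambda outputs: {},  # still drop outputs entirely
)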
