
LangChain-specific guides

This section contains guides that are specific to LangChain users (both Python and JavaScript).

Grouping runs from multi-turn interactions

When using LangChain, you may want to group runs from multi-turn interactions together.

With chatbots, copilots, and other common LLM design patterns, users frequently interact with your model over multiple turns. Each invocation of your model is logged as a separate trace, but you can group these traces together using metadata (see how to add metadata to a run above for more information). Below is a minimal example with LangChain, but the same idea applies when using the LangSmith SDK or API.

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful AI."),
            MessagesPlaceholder(variable_name="chat_history"),
            ("user", "{message}"),
        ]
    )
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

conversation_id = "101e8e66-9c68-4858-a1b4-3b0e3c51a933"

chat_history = []
message = HumanMessage(content="Hi there")
response = ""
for chunk in chain.stream(
    {
        "message": message,
        "chat_history": chat_history,
    },
    config={"metadata": {"conversation_id": conversation_id}},
):
    print(chunk, end="")
    response += chunk
print()
chat_history.extend(
    [
        message,
        AIMessage(content=response),
    ]
)

# ... Next message comes in
next_message = HumanMessage(content="I don't need much assistance, actually.")
response = ""  # reset the accumulator for the new turn
for chunk in chain.stream(
    {
        "message": next_message,
        "chat_history": chat_history,
    },
    config={"metadata": {"conversation_id": conversation_id}},
):
    print(chunk, end="")
    response += chunk

To view all the traces from that conversation in LangSmith, you can query the project using a metadata filter:

has(metadata, '{"conversation_id":"101e8e66-9c68-4858-a1b4-3b0e3c51a933"}')

This will return all traces with the specified conversation ID.
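The same filter can also be applied programmatically with the LangSmith SDK. Below is a minimal sketch, assuming the traces were logged to a project named "default" (substitute your own project name):

from langsmith import Client

client = Client()

# List every run in the project that carries this conversation's metadata.
runs = client.list_runs(
    project_name="default",
    filter="""has(metadata, '{"conversation_id":"101e8e66-9c68-4858-a1b4-3b0e3c51a933"}')""",
)
for run in runs:
    print(run.id, run.name)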

Getting a run ID from a LangChain call

In TypeScript, the run ID is returned in the call response under the __run key. In Python, we recommend using the run collector callback. Below is an example:

from langchain import chat_models, prompts, callbacks

chain = (
    prompts.ChatPromptTemplate.from_template("Say hi to {name}")
    | chat_models.ChatAnthropic()
)
with callbacks.collect_runs() as cb:
    result = chain.invoke({"name": "Clara"})
    run_id = cb.traced_runs[0].id
print(run_id)

For older versions of LangChain (<0.0.276), you can instruct the chain to return the run ID by specifying the `include_run_info=True` parameter to the call function:

from langchain.chat_models import ChatAnthropic
from langchain.chains import LLMChain

chain = LLMChain.from_string(ChatAnthropic(), "Say hi to {name}")
response = chain("Clara", include_run_info=True)
run_id = response["__run"].run_id
print(run_id)

For Python LLMs and chat models, the run information is returned automatically when calling the `generate()` method. Example:

from langchain.chat_models import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

chat_model = ChatAnthropic()

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a cat"),
        ("human", "Hi"),
    ]
)
res = chat_model.generate(messages=[prompt.format_messages()])
print(res.run[0].run_id)

Or, for LLMs:

from langchain.llms import OpenAI

openai = OpenAI()
res = openai.generate(["You are a good bot"])
print(res.run[0].run_id)

If using the API or SDK, you can send the run ID as a parameter to RunTree or the traceable decorator.
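For example, with the LangSmith Python SDK you can pre-generate a run ID and pass it in at call time. Below is a minimal sketch (my_chain is a hypothetical function; this assumes the langsmith_extra keyword accepted by traceable-decorated functions):

import uuid

from langsmith import traceable

@traceable
def my_chain(text: str) -> str:
    # ... your application logic ...
    return text.upper()

# Pre-generate the run ID so you can reference the trace later
# (e.g., to attach feedback) without inspecting the response.
run_id = uuid.uuid4()
my_chain("Hello, world!", langsmith_extra={"run_id": run_id})
print(run_id)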

Tracing without environment variables

Some situations don't permit the use of environment variables or don't expose process.env. This is mostly pertinent when running LangChain apps in certain JavaScript runtime environments. To add tracing in these situations, you can manually create the LangChainTracer callback and pass it to the chain, LLM, or other LangChain component, either when initializing or in the call itself. This is the same tactic used for changing the tracer project within a program.

Example:

from langchain.callbacks import LangChainTracer
from langchain_openai import ChatOpenAI
from langsmith import Client

callbacks = [
    LangChainTracer(
        project_name="YOUR_PROJECT_NAME_HERE",
        client=Client(
            api_url="https://api.smith.langchain.com",
            api_key="YOUR_API_KEY_HERE",
        ),
    )
]

llm = ChatOpenAI()
llm.invoke("Hello, world!", config={"callbacks": callbacks})

This tactic is also useful when you have multiple chains running in a shared environment but want to log their run traces to different projects, as sketched below.
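For example, you might attach a different tracer to each invocation so the traces land in separate projects. A minimal sketch (the project names are placeholders; the tracers fall back to the API key in your environment unless you also pass a Client as above):

from langchain.callbacks import LangChainTracer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

# Each invocation's trace is routed to a different project.
support_tracer = LangChainTracer(project_name="support-bot")
billing_tracer = LangChainTracer(project_name="billing-bot")

llm.invoke("Hello from the support flow!", config={"callbacks": [support_tracer]})
llm.invoke("Hello from the billing flow!", config={"callbacks": [billing_tracer]})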

Ensuring all traces are submitted before exiting

In LangChain Python, LangSmith's tracing is done in a background thread to avoid obstructing your production application. This means that your process may end before all traces are successfully posted to LangSmith. This is especially relevant in serverless environments, where your VM may be terminated immediately once your chain or agent completes.

In LangChain JS, prior to @langchain/core version 0.3.0, the default was to block for a short period of time waiting for traces to finish, because serverless environments are more common there. Versions >=0.3.0 use the same default as Python. You can explicitly make callbacks synchronous by setting the LANGCHAIN_CALLBACKS_BACKGROUND environment variable to "false", or asynchronous by setting it to "true". You can also check out this guide for more options for awaiting backgrounded callbacks in serverless environments.

For both languages, LangChain exposes methods to wait for traces to be submitted before exiting your application. Below is an example:

from langchain_openai import ChatOpenAI
from langchain.callbacks.tracers.langchain import wait_for_all_tracers

llm = ChatOpenAI()
try:
    llm.invoke("Hello, World!")
finally:
    wait_for_all_tracers()
