
Trace without setting environment variables

As mentioned in other guides, the following environment variables let you configure whether tracing is enabled, the API endpoint, the API key, and the tracing project (a minimal example of setting them follows the list):

  • LANGCHAIN_TRACING_V2
  • LANGCHAIN_API_KEY
  • LANGCHAIN_ENDPOINT
  • LANGCHAIN_PROJECT
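
For reference, the standard environment-variable approach looks like the sketch below; the project name here is only a placeholder.

import os

# Standard approach: configure tracing through the process environment,
# e.g. in your shell or before the LangSmith client is created.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "YOUR_API_KEY"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_PROJECT"] = "my-project"  # placeholder project name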

In some environments, it is not possible to set environment variables. In these cases, you can set the tracing configuration programmatically.

The recommended way to do this in Python is to use the trace context manager and pass in a configured langsmith.Client object.

import openai
from langsmith import Client, trace, traceable
from langsmith.wrappers import wrap_openai

langsmith_client = Client(
    api_key="YOUR_API_KEY",  # This can be retrieved from a secrets manager
    api_url="https://api.smith.langchain.com",  # Update appropriately for self-hosted installations
)

# Wrap the OpenAI client so LLM calls made through it are traced automatically.
client = wrap_openai(openai.Client())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
        {"role": "user", "content": f"Question: {question}\nContext: {context}"},
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    return chat_completion.choices[0].message.content

app_inputs = {"input": "Can you summarize this morning's meetings?"}

# Pass the configured client to the trace context manager so the run is sent
# to the correct endpoint and project without relying on environment variables.
with trace("Chat Pipeline", "chain", project_name="test-no-env", inputs=app_inputs, client=langsmith_client) as rt:
    output = chat_pipeline("Can you summarize this morning's meetings?")
    rt.end(outputs={"output": output})
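
If you prefer to decorate the top-level function with traceable rather than wrap the call site in a context manager, a similar configuration can be supplied at call time. The sketch below assumes the langsmith_extra keyword accepts "client" and "project_name" overrides; verify this against your SDK version before relying on it.

from langsmith import Client, traceable

langsmith_client = Client(
    api_key="YOUR_API_KEY",
    api_url="https://api.smith.langchain.com",
)

@traceable(run_type="chain", name="Chat Pipeline")
def chat_pipeline(question: str) -> str:
    # ... same pipeline body as in the example above ...
    return "placeholder"

# Assumption: langsmith_extra is consumed by the traceable wrapper and is not
# forwarded to chat_pipeline itself.
chat_pipeline(
    "Can you summarize this morning's meetings?",
    langsmith_extra={"client": langsmith_client, "project_name": "test-no-env"},
)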
