
Python Integrations

LangSmith allows you to log traces in various ways.

LangChain

To log traces with LangChain, all you need to do is set two environment variables.

export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_TRACING_V2=true

After that, you can use LangChain as you normally would and all traces will get logged to LangSmith!
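For example, a minimal sketch, assuming the langchain-openai package is installed and OPENAI_API_KEY is set:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# With the environment variables above set, this call is traced to LangSmith.
llm.invoke("Say this is a test")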

OpenAI SDK

We provide a convenient wrapper for the OpenAI SDK.

To use it, you first need to set your LangSmith API key.

export LANGCHAIN_API_KEY=<your-api-key>

Next, you will need to install the LangSmith SDK:

pip install -U langsmith

After that, you can wrap the OpenAI client:

from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())

You can now use the OpenAI client as you normally would, and everything will be logged to LangSmith!

client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
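
The wrapper also works with the asynchronous client. A brief sketch, assuming wrap_openai accepts AsyncOpenAI the same way it accepts the synchronous client:

import asyncio

from openai import AsyncOpenAI
from langsmith import wrappers

# Assumption: wrap_openai handles the async client like the sync one.
async_client = wrappers.wrap_openai(AsyncOpenAI())

async def main():
    await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
    )

asyncio.run(main())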

Oftentimes, you will use the OpenAI client inside other functions. You can get nested traces by using the wrapped client and decorating those functions with @traceable. See the documentation on the @traceable decorator for more details on how to use it.

from langsmith import traceable

@traceable
def my_function(text: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Say {text}"}],
    )

my_function("hello world")
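
Traceable functions can also call each other, and each call appears as a child run of its caller. A short sketch building on my_function above (my_pipeline is a hypothetical name):

@traceable
def my_pipeline(text: str) -> str:
    # my_function's run (and the OpenAI call inside it) is nested
    # under this function's run in LangSmith.
    response = my_function(text)
    return response.choices[0].message.content

my_pipeline("hello world")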

Instructor

We provide a convenient integration with Instructor.

To use it, you first need to set your LangSmith API key.

export LANGCHAIN_API_KEY=<your-api-key>

Next, you will need to install the LangSmith SDK:

pip install -U langsmith

After that, you can wrap the OpenAI client:

from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())

After this, you can patch the wrapped OpenAI client using instructor:

import instructor

client = instructor.patch(client)

You can now use Instructor as you normally would, and everything will be logged to LangSmith!

from pydantic import BaseModel


class UserDetail(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ],
)
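
Because response_model is set, the call returns a validated UserDetail instance rather than a raw completion, so you can access the parsed fields directly:

print(user.name)  # "Jason"
print(user.age)   # 25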

Oftentimes, you will use Instructor inside other functions. You can get nested traces by using the wrapped client and decorating those functions with @traceable. See the documentation on the @traceable decorator for more details on how to use it.

from langsmith import traceable

# You can customize the run name with the `name` keyword argument
@traceable(name="Extract User Details")
def my_function(text: str) -> UserDetail:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserDetail,
        messages=[
            {"role": "user", "content": f"Extract {text}"},
        ],
    )


my_function("Jason is 25 years old")
