Trace with the Vercel AI SDK (JS/TS only)

You can use LangSmith to trace runs from the Vercel AI SDK using OpenTelemetry (OTEL). This guide will walk through an example.

note

OpenTelemetry support is only available in langsmith JS SDK version >=0.3.37. If you are on an older version of langsmith, see this page.

OpenTelemetry support is currently experimental.

0. Installation

Install the Vercel AI SDK and the required OTEL packages. The code snippets below use the SDK's OpenAI integration, but you can use any other supported model provider as well.

npm install ai @ai-sdk/openai zod
npm install @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/context-async-hooks

1. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
export OTEL_ENABLED=true

# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>
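If you prefer to configure these values in code (for example, in a standalone script) rather than in your shell, here is a minimal sketch that sets the same variables via process.env; the values are placeholders, and they must be set before any langsmith imports run:

// Set these before importing langsmith so tracing picks them up.
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_API_KEY = "<your-api-key>";
process.env.OTEL_ENABLED = "true";
process.env.OPENAI_API_KEY = "<your-openai-api-key>";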

2. Log a trace

Node.js

To start tracing, you will need to import and call the initializeOTEL method at the start of your code:

import { initializeOTEL } from "langsmith/experimental/otel/setup";

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();

Afterwards, add the experimental_telemetry argument to your AI SDK calls that you want to trace.

info

Before your application shuts down, call await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown() to flush any remaining traces to LangSmith.

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

let result;

try {
  result = await generateText({
    model: openai("gpt-4.1-nano"),
    prompt: "Write a vegetarian lasagna recipe for 4 people.",
    experimental_telemetry: {
      isEnabled: true,
    },
  });
} finally {
  await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown();
}

You should see a trace in your LangSmith dashboard like this one.
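The example above flushes traces in a finally block. In a long-running service where that is not practical, one option is to flush when the process receives a shutdown signal. This is a minimal sketch, assuming a Node.js process and the DEFAULT_LANGSMITH_SPAN_PROCESSOR returned by initializeOTEL:

import { initializeOTEL } from "langsmith/experimental/otel/setup";

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();

// Flush any buffered spans to LangSmith before the process exits.
process.on("SIGTERM", async () => {
  await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown();
  process.exit(0);
});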

You can also trace runs with tool calls:

import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

await generateText({
  model: openai("gpt-4.1-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders and where are they? My user ID is 123",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      parameters: z.object({ userId: z.string() }),
      execute: async ({ userId }) =>
        `User ${userId} has the following orders: 1`,
    }),
    viewTrackingInformation: tool({
      description: "view tracking information for a specific order",
      parameters: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) =>
        `Here is the tracking information for ${orderId}`,
    }),
  },
  experimental_telemetry: {
    isEnabled: true,
  },
  maxSteps: 10,
});

This results in a trace like this one.
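The experimental_telemetry flag works the same way with the AI SDK's streaming APIs. Here is a minimal sketch using streamText (the model and prompt are illustrative); the trace is recorded once the stream completes:

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await streamText({
  model: openai("gpt-4.1-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: {
    isEnabled: true,
  },
});

// Consume the stream so the call completes and the trace is exported.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}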

With traceable

You can wrap traceable calls around or within AI SDK tool calls. If you do, we recommend initializing a LangSmith client instance, passing it into each traceable, and then calling client.awaitPendingTraceBatches() to ensure all traces flush. In that case, you do not need to manually call shutdown() or forceFlush() on the DEFAULT_LANGSMITH_SPAN_PROCESSOR. Here's an example:

import { initializeOTEL } from "langsmith/experimental/otel/setup";

initializeOTEL();

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const client = new Client();

const wrappedText = traceable(
  async (content: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      messages: [{ role: "user", content }],
      tools: {
        listOrders: tool({
          description: "list all orders",
          parameters: z.object({ userId: z.string() }),
          execute: async ({ userId }) => {
            const getOrderNumber = traceable(
              async () => {
                return "1234";
              },
              { name: "getOrderNumber" }
            );
            const orderNumber = await getOrderNumber();
            return `User ${userId} has the following order: ${orderNumber}`;
          },
        }),
      },
      experimental_telemetry: {
        isEnabled: true,
      },
      maxSteps: 10,
    });

    return { text };
  },
  { name: "parentTraceable", client }
);

let result;

try {
  result = await wrappedText("What are my orders?");
} finally {
  await client.awaitPendingTraceBatches();
}

The resulting trace will look like this.

Next.js

The default OTEL setup for Next.js will trace all routes, including those that do not contain LLM traces. We instead suggest manually instrumenting specific routes by creating and passing a tracer as shown below:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { traceable } from "langsmith/traceable";

import { initializeOTEL } from "langsmith/experimental/otel/setup";
import { LangSmithOTLPTraceExporter } from "langsmith/experimental/otel/exporter";
import {
  BatchSpanProcessor,
  BasicTracerProvider,
} from "@opentelemetry/sdk-trace-base";
import { AsyncHooksContextManager } from "@opentelemetry/context-async-hooks";
import { context } from "@opentelemetry/api";

import { after } from "next/server";

const exporter = new LangSmithOTLPTraceExporter();
const processor = new BatchSpanProcessor(exporter);
const contextManager = new AsyncHooksContextManager();

contextManager.enable();
context.setGlobalContextManager(contextManager);

const provider = new BasicTracerProvider({
  spanProcessors: [processor],
});

// Pass instantiated provider and context manager to LangSmith
initializeOTEL({
  globalTracerProvider: provider,
  globalContextManager: contextManager,
});

const tracer = provider.getTracer("ai-sdk-telemetry");

export async function GET() {
  after(async () => {
    await provider.shutdown();
  });

  const wrappedText = traceable(
    async (content: string) => {
      const { text } = await generateText({
        model: openai("gpt-4.1-nano"),
        messages: [{ role: "user", content }],
        experimental_telemetry: {
          isEnabled: true,
          tracer,
        },
        maxSteps: 10,
      });

      return { text };
    },
    { name: "parentTraceable", tracer }
  );

  const { text } = await wrappedText("Why is the sky blue?");

  return new Response(text);
}

Sentry

If you're using Sentry, you can attach the LangSmith trace exporter to Sentry's default OpenTelemetry instrumentation as shown in the example below.

caution

At the time of writing, Sentry only supports OTEL v1 packages. LangSmith supports both v1 and v2, but you must install OTEL v1 packages for this instrumentation to work.

npm install @opentelemetry/sdk-trace-base@1.30.1 @opentelemetry/exporter-trace-otlp-proto@0.57.2 @opentelemetry/context-async-hooks@1.30.1

import { initializeOTEL } from "langsmith/experimental/otel/setup";

import { LangSmithOTLPTraceExporter } from "langsmith/experimental/otel/exporter";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";

import { traceable } from "langsmith/traceable";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import * as Sentry from "@sentry/node";
import { Client } from "langsmith";

const exporter = new LangSmithOTLPTraceExporter();

const spanProcessor = new BatchSpanProcessor(exporter);

const sentry = Sentry.init({
  dsn: "...",
  tracesSampleRate: 1.0,
  openTelemetrySpanProcessors: [spanProcessor],
});

initializeOTEL({
  globalTracerProvider: sentry?.traceProvider,
});

const wrappedText = traceable(
  async (content: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      messages: [{ role: "user", content }],
      experimental_telemetry: {
        isEnabled: true,
      },
      maxSteps: 10,
    });

    return { text };
  },
  { name: "parentTraceable" }
);

let result;

try {
  result = await wrappedText("What color is the sky?");
} finally {
  await sentry?.traceProvider?.shutdown();
}

Add other metadata

You can add other metadata to your traces to help organize and filter them in the LangSmith UI:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

await generateText({
  model: openai("gpt-4.1-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: {
    isEnabled: true,
    metadata: { userId: "123", language: "english" },
  },
});

Metadata will be visible in your LangSmith dashboard and can be used to filter and search for specific traces. Note that the AI SDK propagates metadata to internal child spans as well.
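If you are wrapping AI SDK calls in traceable as shown above, you can also attach metadata to the wrapping traceable itself via its metadata option. A minimal sketch (the function name and metadata values are illustrative):

import { traceable } from "langsmith/traceable";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const wrappedGenerate = traceable(
  async (prompt: string) => {
    return generateText({
      model: openai("gpt-4.1-nano"),
      prompt,
      experimental_telemetry: { isEnabled: true },
    });
  },
  // Metadata here is attached to the parent run created by traceable.
  { name: "wrappedGenerate", metadata: { userId: "123", language: "english" } }
);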

Customize run name

You can customize the run name by passing a metadata key named ls_run_name into experimental_telemetry.

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      ls_run_name: "my-custom-run-name",
    },
  },
});
