How to integrate with the Node.js AI SDK #518
There are no clear docs about this: https://v5.ai-sdk.dev/providers/observability/langwatch#configuration

If I declare the code below, how will the AI SDK know that it has to use the `LangWatchExporter`?

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { LangWatchExporter } from 'langwatch';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const sdk = new NodeSDK({
  traceExporter: new LangWatchExporter({
    apiKey: process.env.LANGWATCH_API_KEY,
  }),
  // ...
});

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt:
    'Explain why a chicken would make a terrible astronaut, be creative and humorous about it.',
  experimental_telemetry: {
    isEnabled: true,
    // optional metadata
    metadata: {
      userId: 'myuser-123',
      threadId: 'mythread-123',
    },
  },
});
```
Replies: 1 comment 1 reply
Important: The new SDK and docs are now live. Head over to the docs to get started.

Hey there, thank you so much for bringing this to our attention!

To answer your question: OpenTelemetry is structured around a process-wide singleton. When you set up the `NodeSDK`, the trace exporter you specify is automatically used by any spans/traces created globally, including those initiated by the Vercel AI SDK. This configuration is designed to work smoothly out of the box, so the AI SDK does not need to be told about the `LangWatchExporter` explicitly.

Here's a sneak peek at some exciting updates we'll be rolling out in the next few days:

- Library update: GitHub - Pull Request #500
- Docs update: GitHub - Pull Request #58

Here's how the API is shaping up:

```typescript
import { setup } from "langwatch/node";
import { getLangWatchTracer } from "langwatch";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Initialize LangWatch
await setup();

const tracer = getLangWatchTracer("chatbot-service");

async function generateMessage() {
  return await tracer.withActiveSpan("generate-message", async (span) => {
    // Input/output will automatically be recorded by the span created by Vercel AI
    const response = await generateText({
      model: openai("gpt-4.1-mini"),
      prompt: "Write a synopsis of the Taylor Swift Eras Tour.",
      experimental_telemetry: { isEnabled: true },
    });

    // Record some custom metrics about usage
    span.setMetrics({
      promptTokens: response.usage?.promptTokens || 0,
      completionTokens: response.usage?.completionTokens || 0,
      cost: response.usage?.cost || 0,
    });

    return response;
  });
}

await generateMessage();
```

We're in the final stages of refining our new TypeScript SDK, built atop OpenTelemetry. This update aims to offer native support and a sleek API that makes integrating observability straightforward. You'll have the option to let us handle the heavy lifting or customize your OpenTelemetry setup according to your needs.

Stay tuned! We'll keep you posted once everything's officially launched and ready for your use. We can't wait for you to try it out 😊.
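The singleton behaviour described in the reply above is the crux of the answer: the AI SDK never needs a direct reference to your exporter, because it asks OpenTelemetry's process-wide registry for a tracer. Here is a dependency-free sketch of that pattern in plain TypeScript; all class and method names are illustrative stand-ins, not the real `@opentelemetry/api` types:

```typescript
// Illustrative sketch of the global-singleton pattern OpenTelemetry uses.
// These names are hypothetical stand-ins, not the real OTel API.

interface SpanExporter {
  export(spanName: string): void;
}

class CollectingExporter implements SpanExporter {
  exported: string[] = [];
  export(spanName: string): void {
    this.exported.push(spanName);
  }
}

// The "singleton": one process-wide registry, analogous to the global
// tracer provider that NodeSDK registers when it starts.
class GlobalTracerRegistry {
  private static exporter: SpanExporter | null = null;
  static register(exporter: SpanExporter): void {
    GlobalTracerRegistry.exporter = exporter;
  }
  static startSpan(name: string): void {
    // Every library in the process ends up here, whether it is your own
    // code or a third-party SDK like Vercel's AI SDK.
    GlobalTracerRegistry.exporter?.export(name);
  }
}

// "NodeSDK setup": register the exporter once at startup.
const exporter = new CollectingExporter();
GlobalTracerRegistry.register(exporter);

// "Vercel AI SDK": never sees the exporter directly, but its spans still
// flow through it, because it records via the global registry.
function thirdPartyLibraryCall(): void {
  GlobalTracerRegistry.startSpan("ai.generateText");
}

thirdPartyLibraryCall();
console.log(exporter.exported); // the third-party span reached our exporter
```

This is why the question's `NodeSDK({ traceExporter: new LangWatchExporter(...) })` setup works without any explicit wiring to `generateText`: enabling `experimental_telemetry` makes the AI SDK record spans through the globally registered provider.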