docs(vercel): nested executions within parent trace #1014

Merged 2 commits on Nov 26, 2024
56 changes: 51 additions & 5 deletions pages/docs/integrations/vercel-ai-sdk.mdx
@@ -174,21 +174,63 @@ We created a sample repository ([langfuse/langfuse-vercel-ai-nextjs-example](htt

## Customization

### Group multiple executions in one trace

You can create a Langfuse trace and pass its ID to AI SDK calls to group multiple execution spans under one trace. The name passed in `functionId` becomes the root span name of the respective execution.

```typescript
import { randomUUID } from "crypto";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();
const parentTraceId = randomUUID();

// Create the parent trace that the executions below are grouped under
langfuse.trace({
  id: parentTraceId,
  name: "holiday-traditions",
});

for (let i = 0; i < 3; i++) {
  const result = await generateText({
    model: openai("gpt-3.5-turbo"),
    maxTokens: 50,
    prompt: "Invent a new holiday and describe its traditions.",
    experimental_telemetry: {
      isEnabled: true,
      functionId: `holiday-tradition-${i}`,
      metadata: {
        langfuseTraceId: parentTraceId,
        langfuseUpdateParent: false, // Do not update the parent trace with execution results
      },
    },
  });

  console.log(result.text);
}

await langfuse.flushAsync();
await sdk.shutdown(); // `sdk` is the OpenTelemetry NodeSDK instance from your telemetry setup
```

The resulting trace hierarchy will be:

![Vercel nested trace in Langfuse UI](/images/docs/vercel-nested-trace.png)

### Disable Tracking of Input/Output

By default, the exporter captures the input and output of each request. You can disable this behavior by setting the `recordInputs` and `recordOutputs` options to `false`.
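For example, a minimal sketch, assuming these flags are passed via the AI SDK's `experimental_telemetry` settings (the model and prompt here are placeholders):

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Invent a new holiday and describe its traditions.",
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // omit the prompt from the exported spans
    recordOutputs: false, // omit the completion from the exported spans
  },
});
```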

### Link Langfuse prompts to traces

You can link Langfuse prompts to Vercel AI SDK generations by setting the `langfusePrompt` property in the `metadata` field:

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

const fetchedPrompt = await langfuse.getPrompt("my-prompt");

const result = await generateText({
model: openai("gpt-4o"),
@@ -243,6 +285,10 @@ new LangfuseExporter({ debug: true });
- set `skipOpenTelemetrySetup: true` in `Sentry.init` (see the sketch after this list)
- follow Sentry's docs on how to manually set up Sentry with OTEL
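A minimal sketch of the first option, assuming Sentry's Node SDK; the DSN is a placeholder:

```typescript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Leave OpenTelemetry setup to your own NodeSDK + LangfuseExporter configuration
  skipOpenTelemetrySetup: true,
});
```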

### Short-lived environments

In short-lived environments such as Vercel Cloud Functions or AWS Lambdas, you may force an export and flush of spans after the function execution and before the environment freezes or shuts down by awaiting a call to the `forceFlush` method on the `LangfuseExporter` instance.
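A minimal sketch, assuming the handler has access to the exporter instance that was registered with your OpenTelemetry setup (the model and prompt are placeholders):

```typescript
import { LangfuseExporter } from "langfuse-vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const exporter = new LangfuseExporter();
// ... register `exporter` with your OpenTelemetry NodeSDK as usual ...

export async function handler() {
  const result = await generateText({
    model: openai("gpt-4o"),
    prompt: "Invent a new holiday and describe its traditions.",
    experimental_telemetry: { isEnabled: true },
  });

  // Export all pending spans before the environment is frozen or shut down
  await exporter.forceFlush();

  return result.text;
}
```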

## Learn more

See the [telemetry documentation](https://sdk.vercel.ai/docs/ai-sdk-core/telemetry) of the Vercel AI SDK for more information.
Binary file added public/images/docs/vercel-nested-trace.png