Claudette Patterns for TypeScript: A Guide to the AI SDK

TL;DR: Claudette gives Python developers an ergonomic way to work with Claude, featuring a stateful chat object, an automatic tool loop, and structured outputs. This guide shows how to recreate those same powerful patterns in TypeScript using the Vercel AI SDK.

Acknowledgement: Claudette is an Answer.AI project that teaches through literate notebooks. Credit to its maintainers for a clean, well‑explained design. (claudette.answer.ai)

Recreating Claudette's Core Features in TypeScript

| Pattern | Claudette (Python) | AI SDK (TypeScript) Implementation |
| --- | --- | --- |
| Multi-step Tools | Chat.toolloop() runs tool calls until a task is done. | Use generateText or streamText with a stopWhen condition. |
| Structured Output | Client.structured() returns a typed Python object. | Use generateObject with a Zod or JSON schema. |
| Prompt Caching | Helpers mark cacheable parts of a prompt. | Use providerOptions to enable caching with a TTL. |
| Server Tools | Wires up tools like Text Editor and Web Search. | Attach provider tools for Text Editor, Web Search, etc. |

1. Pattern: Automatic Multi-step Tool Use

A key feature in Claudette is the toolloop, which automatically executes tool calls and feeds the results back to the model until a task is complete.

You can build the same loop in the AI SDK by defining tools and using generateText or streamText with a stopWhen condition. The SDK keeps re-invoking the model with tool results until the model stops requesting tools or your stopWhen condition is met, which prevents runaway loops.

// pnpm add ai @ai-sdk/anthropic zod
import { streamText, tool, stepCountIs } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const add = tool({
  description: 'Add two integers',
  inputSchema: z.object({ a: z.number(), b: z.number() }),
  execute: async ({ a, b }) => a + b,
});

const result = await streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { add },
  stopWhen: stepCountIs(5), // Stop after 5 steps
  prompt: 'What is (12345 + 67890) * 2? Use tools and explain.',
});

for await (const chunk of result.textStream) process.stdout.write(chunk);
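
If you also want to inspect what happened in each round of the loop, much as you might trace Claudette's toolloop, the non-streaming generateText call returns a steps array alongside the final text. A minimal sketch, reusing the same add tool (the logging here is only illustrative):

// pnpm add ai @ai-sdk/anthropic zod
import { generateText, tool, stepCountIs } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

// Same add tool as in the streaming example above.
const add = tool({
  description: 'Add two integers',
  inputSchema: z.object({ a: z.number(), b: z.number() }),
  execute: async ({ a, b }) => a + b,
});

const { text, steps } = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { add },
  stopWhen: stepCountIs(5),
  prompt: 'What is (12345 + 67890) * 2? Use tools and explain.',
});

// Each step records the tool calls and tool results from one model round trip.
for (const step of steps) {
  console.log(step.toolCalls.map((call) => call.toolName), step.toolResults.length);
}
console.log(text);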

2. Pattern: Strongly Typed Structured Outputs

Claudette's structured() method is a convenient way to get typed Python objects from the model.

The AI SDK provides generateObject for the same purpose. You provide a Zod schema, and the SDK handles sending the schema to the model, validating the response, and returning a typed object.

// pnpm add ai @ai-sdk/openai zod
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const Person = z.object({
  first: z.string(),
  last: z.string(),
  birth_year: z.number(),
});

const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: Person,
  prompt: 'Extract data for Ada Lovelace.',
});

console.log(object); // Typed as { first: string; last: string; birth_year: number }
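
If you need a list of objects rather than a single one, generateObject also supports an array output strategy, where the schema describes one element. A brief sketch reusing the Person schema (the two-person prompt is only an illustration):

// pnpm add ai @ai-sdk/openai zod
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const Person = z.object({
  first: z.string(),
  last: z.string(),
  birth_year: z.number(),
});

// With output: 'array', the SDK validates the response as an array of Person.
const { object: people } = await generateObject({
  model: openai('gpt-4o-mini'),
  output: 'array',
  schema: Person,
  prompt: 'Extract data for Ada Lovelace and Alan Turing.',
});

for (const person of people) console.log(person.first, person.birth_year);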

3. Pattern: Effective Prompt Caching

Claudette's documentation highlights how to cache large, repeated prompt sections to save on costs.

In the AI SDK, you achieve this with providerOptions.anthropic.cacheControl, which marks parts of a message as cacheable. Remember that Anthropic enforces minimum token thresholds for caching (roughly 1,024 tokens on Sonnet and Opus models, more on Haiku), so it is most effective for large system prompts or RAG context. You can verify that caching succeeded by checking providerMetadata on the result.

// pnpm add ai @ai-sdk/anthropic
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  messages: [
    {
      role: 'system',
      content: 'Long, reusable instructions...',
      providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } },
    },
    { role: 'user', content: 'User-specific question...' },
  ],
});

console.log(result.providerMetadata?.anthropic?.cacheCreationInputTokens);
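
On a follow-up request that reuses the same cached prefix, the metadata should report a cache read rather than a cache write. A small sketch, assuming the system message above is resent unchanged:

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// Second call: identical cached system message, different user turn.
const followUp = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  messages: [
    {
      role: 'system',
      content: 'Long, reusable instructions...',
      providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } },
    },
    { role: 'user', content: 'Another user-specific question...' },
  ],
});

// A non-zero value here indicates the cached prefix was reused.
console.log(followUp.providerMetadata?.anthropic?.cacheReadInputTokens);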

4. Pattern: Using Anthropic's Server Tools

The AI SDK also provides access to Anthropic's server-side tools, like Text Editor and Web Search, which are explained in the Claudette notebooks.

Implementing the Text Editor

The Text Editor tool requires careful sandboxing. Your execute function is the safety boundary and must validate all paths and commands.

// app/api/edit/route.ts
// pnpm add ai @ai-sdk/anthropic
import { NextRequest } from 'next/server';
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import path from 'node:path';

const ROOT = path.resolve(process.cwd(), 'repo');
const safe = (p: string) => {
  const abs = path.resolve(ROOT, p);
  // Allow ROOT itself or paths inside it; a bare startsWith(ROOT) check would
  // also accept sibling directories such as 'repo-evil'.
  if (abs !== ROOT && !abs.startsWith(ROOT + path.sep)) {
    throw new Error('Path outside allowed root');
  }
  return abs;
};

const textEditor = anthropic.tools.textEditor_20250429({
  execute: async ({ command, path: p, ...args }) => {
    const abs = safe(p);
    // ... safe implementation for 'create', 'view', 'str_replace' ...
    return 'unsupported command';
  },
});

export async function POST(req: NextRequest) {
  const { prompt } = await req.json();
  const result = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    tools: { str_replace_based_edit_tool: textEditor },
    prompt,
  });
  return new Response(result.text);
}
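
For a sense of what the elided command handling could look like, here is a hypothetical sketch built on Node's fs/promises. The runEditorCommand helper and EditorArgs type are inventions for illustration; the command and argument names (view, create, str_replace, file_text, old_str, new_str) follow Anthropic's text editor tool schema, and you should harden this for your own repository layout.

import fs from 'node:fs/promises';

type EditorArgs = {
  command: string;
  path: string;
  file_text?: string;
  old_str?: string;
  new_str?: string;
};

// Hypothetical handler for the elided part of execute; `safe` is the path guard above.
async function runEditorCommand(
  { command, path: p, ...args }: EditorArgs,
  safe: (p: string) => string,
) {
  const abs = safe(p);
  switch (command) {
    case 'view':
      return await fs.readFile(abs, 'utf8');
    case 'create':
      await fs.writeFile(abs, args.file_text ?? '', 'utf8');
      return `Created ${p}`;
    case 'str_replace': {
      const current = await fs.readFile(abs, 'utf8');
      if (!args.old_str || !current.includes(args.old_str)) return 'old_str not found';
      await fs.writeFile(abs, current.replace(args.old_str, args.new_str ?? ''), 'utf8');
      return `Edited ${p}`;
    }
    default:
      return 'unsupported command';
  }
}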

Implementing Web Search

To use Web Search, enable it in your Anthropic Console and then attach the provider-defined tool in your code.

import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const webSearch = anthropic.tools.webSearch_20250305({ maxUses: 3 });

const result = await generateText({
  model: anthropic('claude-opus-4-1-20250805'),
  prompt: 'Summarise the latest TypeScript release notes.',
  tools: { web_search: webSearch },
});

console.log(result.text);