Combine Agentset search with an LLM to generate answers grounded in your documents. This pattern retrieves relevant context from your namespace and passes it to the model.

Basic RAG pattern

Search your namespace, format the results as context, and generate a response.
import { Agentset } from "agentset";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const agentset = new Agentset({
  apiKey: process.env.AGENTSET_API_KEY,
});

const ns = agentset.namespace("YOUR_NAMESPACE_ID");

const query = "What are the key findings?";

// Search for relevant context
const results = await ns.search(query);
const context = results.map((r) => r.text).join("\n\n");

// Generate a response
const { text } = await generateText({
  model: openai("gpt-5.1"),
  system: `Answer questions based on the following context:\n\n${context}`,
  prompt: query,
});

console.log(text);
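Search can return no usable hits, in which case the system prompt above would contain an empty context and the model may answer from its own knowledge instead. One way to guard against that is a small helper that assembles the context and signals when nothing was retrieved. This is a sketch, not part of the Agentset SDK; the `SearchHit` type only mirrors the `text` field used in the example above.

```typescript
// Minimal shape of a search hit, mirroring the `text` field used above.
type SearchHit = { text: string };

// Join retrieved chunks into one context block. Returns null when no
// chunk has content, so the caller can bail out before calling the model.
function buildContext(results: SearchHit[]): string | null {
  const chunks = results
    .map((r) => r.text)
    .filter((t) => t.trim().length > 0);
  return chunks.length > 0 ? chunks.join("\n\n") : null;
}

console.log(buildContext([{ text: "First finding." }, { text: "Second finding." }]));
console.log(buildContext([{ text: "   " }]));
```

With this in place, you can skip the `generateText` call (or return a fallback answer) whenever `buildContext` yields `null`.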

Filtering context

Narrow results to specific documents using metadata filters.
const results = await ns.search(query, {
  filter: {
    category: "technical",
    year: { $gte: 2024 },
  },
});
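If the filter depends on user input, it can help to build it from plain options rather than construct the object inline. The sketch below is a hypothetical convenience wrapper, not an SDK API; the field names (`category`, `year`) and the `$gte` operator follow the example above, so adjust them to match your own metadata schema.

```typescript
// Options a caller might supply; both fields are optional.
interface FilterOptions {
  category?: string;
  minYear?: number;
}

// Build a metadata filter object, omitting any unset fields so the
// search is only narrowed by the criteria actually provided.
function buildFilter({ category, minYear }: FilterOptions) {
  const filter: Record<string, unknown> = {};
  if (category !== undefined) filter.category = category;
  if (minYear !== undefined) filter.year = { $gte: minYear };
  return filter;
}

console.log(JSON.stringify(buildFilter({ category: "technical", minYear: 2024 })));
```

The result can then be passed as the `filter` option to `ns.search(query, { filter })`.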

Next steps