Making Content Sites Agent-Navigable
Expose your articles, search, and content relationships as typed commands — so any AI agent can navigate your site like a pro.
Why This Matters
News sites, blogs, documentation portals, and wikis are among the sites AI agents visit most. Yet much of their content is locked behind complex HTML layouts, paywalls, and dynamic rendering that make programmatic access unreliable.
Surf gives content publishers a clean, structured way to expose their content to AI agents while maintaining full control over access, rate limiting, and attribution.
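Structured access starts with discovery: an agent fetches a machine-readable manifest describing which commands a site offers before calling any of them. The sketch below shows a plausible shape for such a manifest, assuming it is served at /.well-known/surf.json as in the setup prompts later on this page — the field names here are illustrative, not Surf's documented schema.

```typescript
// Hypothetical shape of a Surf discovery manifest — the JSON an agent
// might fetch from /.well-known/surf.json before issuing commands.
// Field names are illustrative, not Surf's documented schema.
const manifest = {
  name: 'The Daily Surf',
  version: '1.0',
  commands: [
    { name: 'search', params: { query: 'string', limit: 'number' }, auth: 'none' },
    { name: 'article.get', params: { id: 'string', format: 'string' }, auth: 'none' },
    { name: 'article.related', params: { id: 'string', limit: 'number' }, auth: 'none' },
    { name: 'subscribe', params: { topic: 'string', webhookUrl: 'string' }, auth: 'required' },
  ],
}

// An agent can inspect capabilities before calling anything:
const commandNames = manifest.commands.map(c => c.name)
const needsAuth = (name: string) =>
  manifest.commands.find(c => c.name === name)?.auth === 'required'
```

This is what lets an agent know, up front, that search is open while subscribe requires credentials.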
Defining Content Commands
A typical content site needs four core commands: search, get article, get related content, and subscribe to updates.
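"Typed" means each command ships a params schema the server can validate and default before run() ever executes. A minimal sketch of what that validate-and-default pass could look like — hypothetical internals, shown only to make the idea concrete, not Surf's actual implementation:

```typescript
// Minimal validate-and-default pass over a Surf-style params schema.
// Hypothetical sketch — Surf's real internals may differ.
type ParamSpec = { type: 'string' | 'number'; required?: boolean; default?: unknown }

function validateParams(
  schema: Record<string, ParamSpec>,
  input: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, spec] of Object.entries(schema)) {
    // Missing values fall back to the declared default, if any
    const value = input[key] ?? spec.default
    if (value === undefined) {
      if (spec.required) throw new Error(`missing required param: ${key}`)
      continue
    }
    if (typeof value !== spec.type) throw new Error(`param ${key} must be ${spec.type}`)
    out[key] = value
  }
  return out
}
```

With a schema like the search command's below, `validateParams(schema, { query: 'AI' })` would fill in `limit: 10` and reject a call with no query at all.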
```ts
import { createSurf } from '@surfjs/core'

const surf = createSurf({
  name: 'The Daily Surf',
  commands: {
    // Full-text search across all content
    search: {
      description: 'Search articles by keyword',
      params: {
        query: { type: 'string', required: true },
        category: { type: 'string' },
        dateFrom: { type: 'string', description: 'ISO date' },
        dateTo: { type: 'string', description: 'ISO date' },
        limit: { type: 'number', default: 10 },
      },
      rateLimit: { windowMs: 60_000, maxRequests: 30, keyBy: 'ip' },
      hints: { idempotent: true, sideEffects: false },
      run: async ({ query, category, dateFrom, dateTo, limit }) => {
        const articles = await cms.search({ query, category, dateFrom, dateTo, limit })
        return {
          results: articles.map(a => ({
            id: a.id,
            title: a.title,
            excerpt: a.excerpt,
            publishedAt: a.publishedAt,
            category: a.category,
            author: a.author.name,
          })),
          total: articles.length,
        }
      },
    },
    // Article namespace
    article: {
      get: {
        description: 'Get full article by ID',
        params: {
          id: { type: 'string', required: true },
          format: { type: 'string', enum: ['html', 'markdown', 'text'], default: 'markdown' },
        },
        hints: { idempotent: true, sideEffects: false },
        run: async ({ id, format }) => {
          const article = await cms.getArticle(id)
          return {
            id: article.id,
            title: article.title,
            content: article.getContent(format),
            author: article.author,
            publishedAt: article.publishedAt,
            tags: article.tags,
            wordCount: article.wordCount,
          }
        },
      },
      related: {
        description: 'Get articles related to a given article',
        params: {
          id: { type: 'string', required: true },
          limit: { type: 'number', default: 5 },
        },
        hints: { idempotent: true, sideEffects: false },
        run: async ({ id, limit }) => cms.getRelated(id, { limit }),
      },
    },
    // Subscribe to topic updates
    subscribe: {
      description: 'Subscribe to new articles on a topic',
      auth: 'required',
      params: {
        topic: { type: 'string', required: true },
        webhookUrl: { type: 'string', required: true },
      },
      run: async ({ topic, webhookUrl }, ctx) => {
        const sub = await subscriptions.create({
          userId: ctx.claims?.userId,
          topic,
          webhookUrl,
        })
        return { subscriptionId: sub.id, status: 'active' }
      },
    },
  },
})
```

CMS Integration
Surf works with any CMS — WordPress, Sanity, Contentful, Strapi, or custom. The commands layer sits between your CMS and the agent, so you control what's exposed.
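Whatever the backend, the commands only depend on a small `cms` contract: search(), getArticle(), and getRelated(). A minimal in-memory version — illustrative only, with names assumed from the examples on this page — makes that contract explicit before wiring in a real CMS:

```typescript
// Minimal in-memory adapter satisfying the cms contract the commands use.
// Illustrative only — swap in any real CMS client with the same methods.
type Article = { id: string; title: string; tags: string[] }

const store: Article[] = [
  { id: 'a1', title: 'AI regulation in Europe', tags: ['ai', 'policy'] },
  { id: 'a2', title: 'Surfing the AI-native web', tags: ['ai', 'web'] },
  { id: 'a3', title: 'Weekend recipes', tags: ['food'] },
]

export const cms = {
  // Naive substring search over titles
  async search({ query, limit = 10 }: { query: string; limit?: number }) {
    return store
      .filter(a => a.title.toLowerCase().includes(query.toLowerCase()))
      .slice(0, limit)
  },
  async getArticle(id: string) {
    return store.find(a => a.id === id)
  },
  // "Related" = shares at least one tag, excluding the article itself
  async getRelated(id: string, { limit = 5 }: { limit?: number } = {}) {
    const article = await this.getArticle(id)
    if (!article) return []
    return store
      .filter(a => a.id !== id && a.tags.some(t => article.tags.includes(t)))
      .slice(0, limit)
  },
}
```

Any adapter exposing these three methods — backed by Sanity, WordPress, or a database — can be dropped in without touching the command definitions.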
```ts
// Example: Sanity CMS adapter
import { createClient } from '@sanity/client'

const sanity = createClient({
  projectId: process.env.SANITY_PROJECT_ID,
  dataset: 'production',
  useCdn: true,
})

export const cms = {
  async search({ query, category, limit }) {
    const filter = category
      ? `*[_type == "article" && category == $category && title match $query]`
      : `*[_type == "article" && title match $query]`
    return sanity.fetch(filter, { query: `*${query}*`, category })
      .then(docs => docs.slice(0, limit))
  },
  async getArticle(id) {
    return sanity.fetch('*[_type == "article" && _id == $id][0]', { id })
  },
  async getRelated(id, { limit }) {
    const article = await this.getArticle(id)
    return sanity.fetch(
      '*[_type == "article" && _id != $id && count(tags[@ in $tags]) > 0][0...$limit]',
      { id, tags: article.tags, limit }
    )
  },
}
```

Agent Experience
Once set up, an AI agent can navigate your content site with simple, typed calls:
```ts
import { SurfClient } from '@surfjs/client'

const client = await SurfClient.discover('https://dailysurf.com')

// Research a topic
const articles = await client.execute('search', {
  query: 'AI regulation Europe',
  dateFrom: '2025-01-01',
  limit: 5,
})

// Read the top article
const article = await client.execute('article.get', {
  id: articles.results[0].id,
  format: 'markdown',
})

// Find related reading
const related = await client.execute('article.related', {
  id: article.id,
  limit: 3,
})

// Subscribe to the topic (requires auth)
const authedClient = await SurfClient.discover('https://dailysurf.com', {
  auth: 'sk-my-api-key',
})
await authedClient.execute('subscribe', {
  topic: 'AI regulation',
  webhookUrl: 'https://my-agent.com/webhooks/articles',
})
```

Access Control & Attribution
Use Surf's auth and rate limiting to control how agents access your content. You can require API keys for full article access while keeping search open:
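The rateLimit option implies a fixed-window counter keyed by IP or auth identity. A minimal sketch of that bookkeeping — hypothetical, not Surf's built-in limiter — shows what a setting like `{ windowMs: 60_000, maxRequests: 10, keyBy: 'ip' }` amounts to:

```typescript
// Fixed-window rate limiter keyed by an arbitrary string (IP or auth id).
// Hypothetical sketch of the bookkeeping behind a rateLimit option.
function createRateLimiter(windowMs: number, maxRequests: number) {
  const windows = new Map<string, { start: number; count: number }>()
  return (key: string, now = Date.now()): boolean => {
    const w = windows.get(key)
    if (!w || now - w.start >= windowMs) {
      // First request in a fresh window for this key
      windows.set(key, { start: now, count: 1 })
      return true
    }
    if (w.count < maxRequests) {
      w.count++
      return true // allowed: still within the window's budget
    }
    return false // rejected: window exhausted
  }
}

const allowSearch = createRateLimiter(60_000, 10)
```

Each command can then gate requests with `allowSearch(clientIp)` before running; keying by auth identity instead of IP gives authenticated agents their own budget.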
```ts
// In your createSurf config:
commands: {
  // Free tier: search only (rate limited)
  search: {
    description: 'Search articles',
    rateLimit: { windowMs: 60_000, maxRequests: 10, keyBy: 'ip' },
    run: async (params) => { /* ... */ },
  },
  // Paid tier: full article access (auth required)
  article: {
    get: {
      auth: 'required',
      rateLimit: { windowMs: 60_000, maxRequests: 60, keyBy: 'auth' },
      run: async (params, ctx) => {
        // Track attribution
        await analytics.track('agent_read', {
          userId: ctx.claims?.userId,
          articleId: params.id,
        })
        return /* ... */
      },
    },
  },
}
```

Summary
Adding Surf to a content site is straightforward: define search and content retrieval commands in createSurf(), connect to your CMS, and optionally add auth for premium access. Agents get structured, reliable access to your content, and you maintain full control over what's exposed and who can access it.
Ask your agent
Copy these prompts into Claude, OpenClaw, or any AI agent
Make my content site agent-navigable with Surf.js. Install @surfjs/core and use createSurf() to define commands: search (query: string, returns articles), article.get (id: string, returns full content), article.related (id: string). Mount on Express with surf.middleware(). The manifest should be at /.well-known/surf.json.
Add Surf to my Next.js blog. Create /app/api/surf/route.ts using createSurf() with commands for search, article.get, article.related. Use @surfjs/core and its middleware().
Using @surfjs/client, discover http://localhost:3000, call search with query "typescript", then fetch the full content of the first result using article.get. Summarise what you find.
💡 Works with OpenClaw, Claude Code, Cursor, Codex, and any agent that can make HTTP requests.