Headless CMS + AI: The New Stack for Dynamic Content Experiences
Static content is dead. Users expect personalized experiences that adapt to their interests, behavior, and context. But traditional CMS architectures weren't built for AI-driven dynamic content.
Enter the modern stack: Directus headless CMS combined with Retrieval-Augmented Generation (RAG) and real-time personalization. This article covers the architecture, implementation patterns, and performance strategies that make intelligent content platforms possible.
The Problem with Traditional CMS
Traditional CMS couples content with presentation. When you need:
- Personalized content: "Show articles based on user's industry"
- Dynamic assembly: "Build a landing page from relevant components"
- AI-powered recommendations: "Recommend similar content"
You're fighting the system. Monolithic CMS architectures weren't designed for AI-driven experiences.
Headless CMS Benefits
Directus provides:
- Content as data: Structured, API-first access
- Flexible relationships: Link content in meaningful ways
- Access control: Granular permissions per role
- Real-time updates: Webhooks and subscriptions
Combined with AI, this enables true dynamic content.
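To make "content as data" concrete, the sketch below assembles a Directus-style query as a plain object before it ever touches the API. The field names (`industries`, `complexity_score`, `date_published`) mirror the schema used later in this article and are assumptions, not a fixed Directus contract.

```typescript
// Sketch: build a Directus query as plain data (hypothetical field names).
// The same object shape can be passed to the SDK's readByQuery().
interface ArticleQuery {
  fields: string[];
  filter: Record<string, unknown>;
  sort: string[];
  limit: number;
}

function buildArticleQuery(industry: string, maxComplexity = 0.7): ArticleQuery {
  return {
    fields: ['id', 'title', 'summary', 'topics'],
    filter: {
      _and: [
        { industries: { _contains: industry } },
        { complexity_score: { _lte: maxComplexity } },
      ],
    },
    sort: ['-date_published'],
    limit: 10,
  };
}
```

Because the query is just data, it can be composed, cached, or even generated by an AI layer before the request is made.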
Architecture Overview
┌─────────────────────────────────────────────────────────┐
│ Content Layer │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐ │
│ │ Articles │ │ Products │ │ FAQs │ │ Pages │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └───┬────┘ │
│ │ │ │ │ │
│ └─────────────┴─────────────┴────────────┘ │
│ Directus API │
└────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Intelligence Layer │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Vector │ │ Semantic │ │ Personal-│ │
│ │ Store │ │ Search │ │ ization │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ └─────────────┴─────────────┘ │
│ RAG Pipeline │
└────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Delivery Layer │
│ Nuxt.js / Next.js / Vue Frontend │
└─────────────────────────────────────────────────────────┘
Setting Up Directus for AI
Content Modeling
Structure content with AI in mind:
// Directus collection schema for AI-enhanced articles
{
  "collection": "articles",
  "fields": [
    { "field": "id", "type": "uuid" },
    { "field": "title", "type": "string" },
    { "field": "content", "type": "text" },
    { "field": "summary", "type": "text" },

    // AI-generated fields
    { "field": "embedding", "type": "json", "note": "Vector embedding" },
    { "field": "topics", "type": "json", "note": "AI-extracted topics" },
    { "field": "read_time", "type": "integer" },
    { "field": "complexity_score", "type": "float" },

    // Relationships
    { "field": "industries", "type": "m2m", "related": "industries" },
    { "field": "products", "type": "m2m", "related": "products" },

    // Personalization
    { "field": "target_audience", "type": "json" },
    { "field": "content_variants", "type": "json" }
  ]
}
Vector Storage
Store embeddings for semantic search:
// Embed content when an article is created or updated.
// Accepts either an article object or a raw query string, so the same
// model is used at indexing and at query time.
async function generateEmbedding(input) {
  const text = typeof input === 'string'
    ? input
    : `${input.title}\n\n${input.content}`;
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text
  });
  return response.data[0].embedding;
}

// Directus hook: attach the embedding before the item is saved
export default defineHook(({ filter }) => {
  filter('articles.items.create', async (payload) => {
    if (payload.content) {
      payload.embedding = await generateEmbedding(payload);
    }
    return payload;
  });
});
RAG: Retrieval-Augmented Generation
The RAG Pipeline
User Query → Retrieve Relevant Content → Generate Response
Implementation
// RAG service
class ContentRAG {
  constructor(
    private directus: DirectusClient,
    private vectorStore: VectorStore,
    private llm: LLMClient
  ) {}

  async answerQuestion(userQuery: string, context?: UserContext) {
    // Step 1: Retrieve relevant content
    const queryEmbedding = await this.llm.embed(userQuery);
    const relevantContent = await this.vectorStore.similaritySearch({
      vector: queryEmbedding,
      k: 5,
      filter: context ? this.buildFilter(context) : undefined
    });

    // Step 2: Generate an answer grounded in the retrieved content
    const response = await this.llm.chat({
      messages: [
        {
          role: "system",
          content: `You're a helpful assistant. Answer based on the provided content.
Available content:
${relevantContent.map(c => `- ${c.title}: ${c.summary}`).join('\n')}`
        },
        {
          role: "user",
          content: userQuery
        }
      ]
    });

    return {
      answer: response.content,
      sources: relevantContent,
      confidence: this.calculateConfidence(response, relevantContent)
    };
  }

  private buildFilter(context: UserContext) {
    // Filter content based on user context
    const filters = [];
    if (context.industry) {
      filters.push({ industries: { _contains: context.industry } });
    }
    if (context.subscriptionTier === 'basic') {
      filters.push({ complexity_score: { _lt: 0.5 } });
    }
    return filters.length > 0 ? { _and: filters } : undefined;
  }
}
Vector Search in Directus
// Custom endpoint for semantic search
export default defineEndpoint((router) => {
  router.post('/search', async (req, res) => {
    const { query, filters } = req.body;

    // Generate query embedding
    const embedding = await generateEmbedding(query);

    // pgvector expects a vector literal like '[0.1, 0.2, ...]',
    // so serialize the array and cast the parameter
    const vector = JSON.stringify(embedding);

    // Find similar content using pgvector cosine distance
    const results = await req.database.raw(`
      SELECT id, title, content, 1 - (embedding <=> ?::vector) AS similarity
      FROM articles
      WHERE 1 - (embedding <=> ?::vector) > 0.7
      ORDER BY similarity DESC
      LIMIT 10
    `, [vector, vector]);

    res.json(results.rows);
  });
});
Real-Time Personalization
User Behavior Tracking
// Track user interactions for personalization
interface UserBehaviorEvent {
  userId: string;
  sessionId: string;
  type: 'view' | 'click' | 'scroll' | 'search';
  contentId?: string;
  metadata: Record<string, any>;
  timestamp: Date;
}

// Store in Directus
async function trackEvent(event: UserBehaviorEvent) {
  await directus.items('user_behavior').createOne({
    user: event.userId,
    session: event.sessionId,
    event_type: event.type,
    content: event.contentId,
    metadata: event.metadata,
    timestamp: new Date()
  });
}

// Aggregate for user profile
async function buildUserProfile(userId: string) {
  const behaviors = await directus.items('user_behavior').readByQuery({
    filter: { user: { _eq: userId } },
    sort: ['-timestamp'],
    limit: 100
  });

  const interests = extractInterests(behaviors.data);
  const expertise = calculateExpertise(behaviors.data);

  return {
    interests,
    expertise,
    contentPreferences: inferPreferences(behaviors.data)
  };
}
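The profile helpers (`extractInterests` and friends) are left undefined above. A minimal sketch of topic-frequency extraction, assuming each behavior row carries a `topics` array in its metadata, might look like:

```typescript
// Sketch of extractInterests: rank topics by how often they appear in recent
// behavior events. Assumes each event's metadata includes a `topics` array;
// recency weighting and decay are deliberately omitted.
interface BehaviorRow {
  event_type: string;
  metadata: { topics?: string[] };
}

function extractInterests(behaviors: BehaviorRow[], topN = 5): string[] {
  const counts = new Map<string, number>();
  for (const b of behaviors) {
    for (const topic of b.metadata.topics ?? []) {
      counts.set(topic, (counts.get(topic) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([topic]) => topic);
}
```

In production you would likely weight events by type (a click or conversion says more than a view) and decay older events.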
Dynamic Content Assembly
// Assemble personalized content
async function getPersonalizedContent(userId: string, pageType: string) {
  const profile = await buildUserProfile(userId);

  // Get base page structure
  const page = await directus.items('dynamic_pages').readOne(pageType);

  // Select components based on profile
  const components = await Promise.all(
    page.components.map(async (component) => {
      if (component.type === 'article-grid') {
        return {
          ...component,
          articles: await selectArticles(profile, component.count)
        };
      }
      if (component.type === 'faq-section') {
        return {
          ...component,
          faqs: await selectFAQs(profile, component.count)
        };
      }
      return component;
    })
  );

  // Generate personalized copy
  const headline = await generateHeadline(page.baseHeadline, profile);

  return {
    ...page,
    components,
    headline
  };
}

async function selectArticles(profile: UserProfile, count: number) {
  // Semantic search for articles matching user interests
  const articles = await directus.items('articles').readByQuery({
    filter: {
      topics: { _intersects: profile.interests },
      complexity_score: { _lte: profile.expertise }
    },
    sort: ['-views'],
    limit: count
  });
  return articles.data;
}
Performance Optimization
Caching Strategy
// Multi-layer caching
const cache = {
  // L1: In-memory (hot content)
  memory: new Map(),
  // L2: Redis (personalized content)
  redis: new Redis(),
  // L3: CDN (static assets)
  cdn: new CDN()
};

async function getCachedContent(key: string, userId?: string) {
  const cacheKey = userId ? `${key}:user:${userId}` : key;

  // Check memory
  if (cache.memory.has(cacheKey)) {
    return cache.memory.get(cacheKey);
  }

  // Check Redis (values are stored as JSON strings)
  const cached = await cache.redis.get(cacheKey);
  if (cached) {
    const parsed = JSON.parse(cached);
    cache.memory.set(cacheKey, parsed);
    return parsed;
  }

  // Generate and cache for 5 minutes
  const content = await generateContent(key, userId);
  cache.memory.set(cacheKey, content);
  await cache.redis.setex(cacheKey, 300, JSON.stringify(content));
  return content;
}
Edge Caching for Personalization
// Serve personalized content from the edge
export default defineMiddleware(async (req, res, next) => {
  const userId = req.headers['x-user-id'];

  // Skip for logged-out users (cached at the CDN)
  if (!userId) {
    res.setHeader('Cache-Control', 'public, max-age=3600');
    return next();
  }

  // Serve cached personalized content
  const cached = await edgeCache.get(`personalized:${userId}:${req.url}`);
  if (cached) {
    return res.json(cached);
  }

  // Generate, cache for 60 seconds, and respond
  const content = await generatePersonalizedContent(userId, req.url);
  await edgeCache.set(`personalized:${userId}:${req.url}`, content, 60);
  res.json(content);
});
Database Optimization
-- Index for vector similarity search
CREATE INDEX idx_articles_embedding ON articles
USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
-- Index for personalization queries
CREATE INDEX idx_user_behavior_user_timestamp ON user_behavior(user_id, timestamp DESC);
-- Partition for large tables
CREATE TABLE user_behavior_2026 PARTITION OF user_behavior
FOR VALUES FROM ('2026-01-01') TO ('2027-01-01');
Content Generation Workflows
AI-Assisted Content Creation
// Generate content variants
async function generateVariants(articleId: string) {
  const article = await directus.items('articles').readOne(articleId);

  // Generate for different audiences
  const variants = [
    {
      audience: 'beginners',
      content: await simplifyContent(article.content, 'beginner')
    },
    {
      audience: 'technical',
      content: await expandTechnical(article.content)
    },
    {
      audience: 'executives',
      content: await businessSummary(article.content)
    }
  ];

  // Store variants
  await directus.items('articles').updateOne(articleId, {
    content_variants: variants
  });

  return variants;
}

// Auto-generate supporting content
async function enrichContent(articleId: string) {
  const article = await directus.items('articles').readOne(articleId);

  const [summary, keyPoints, related, metaDescription] = await Promise.all([
    generateSummary(article.content),        // Generate summary
    extractKeyPoints(article.content),       // Extract key points
    findRelated(article.id, article.topics), // Suggest related articles
    generateMetaDescription(article.content) // Generate meta description
  ]);

  await directus.items('articles').updateOne(articleId, {
    summary,
    key_points: keyPoints,
    related_articles: related,
    meta_description: metaDescription
  });
}
Monitoring and Analytics
Personalization Effectiveness
// Track personalization performance
async function trackRecommendation(event: RecommendationEvent) {
  await directus.items('recommendation_analytics').createOne({
    user: event.userId,
    content: event.contentId,
    recommendation_source: event.source,
    clicked: event.clicked,
    time_spent: event.timeSpent,
    converted: event.converted,
    timestamp: new Date()
  });
}

// Calculate metrics
async function getRecommendationMetrics() {
  const results = await directus.database.raw(`
    SELECT
      recommendation_source,
      COUNT(*) AS total,
      SUM(CASE WHEN clicked THEN 1 ELSE 0 END) AS clicks,
      SUM(CASE WHEN converted THEN 1 ELSE 0 END) AS conversions,
      AVG(time_spent) AS avg_time
    FROM recommendation_analytics
    WHERE timestamp > NOW() - INTERVAL '30 days'
    GROUP BY recommendation_source
  `);
  return results.rows;
}
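Raw aggregates become actionable once turned into rates. A small, hypothetical helper for deriving click-through and conversion rates from rows shaped like the SQL result above:

```typescript
// Sketch: derive CTR and conversion rate per recommendation source.
// Row shape mirrors the aggregate query above (an assumption, not an API).
interface MetricsRow {
  recommendation_source: string;
  total: number;
  clicks: number;
  conversions: number;
}

function summarizeMetrics(rows: MetricsRow[]) {
  return rows.map((r) => ({
    source: r.recommendation_source,
    ctr: r.total > 0 ? r.clicks / r.total : 0,
    conversionRate: r.clicks > 0 ? r.conversions / r.clicks : 0,
  }));
}
```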
Security Considerations
// Sanitize AI-generated content before storing or rendering
async function sanitizeContent(content: string) {
  // Strip obvious XSS vectors. Regex stripping is a first pass only —
  // run output through a real HTML sanitizer (e.g. DOMPurify) before rendering.
  const clean = content
    .replace(/<script[^>]*>[\s\S]*?<\/script>/gi, '')
    .replace(/javascript:/gi, '')
    .replace(/on\w+\s*=/gi, '');

  // Check for policy violations
  const moderation = await openai.moderations.create({ input: clean });
  if (moderation.results[0].flagged) {
    throw new Error('Content violates policy');
  }

  return clean;
}

// Rate limiting for AI generation
const rateLimiter = new RateLimiter({
  windowMs: 60000,
  maxRequests: 10,
  keyGenerator: (req) => req.user?.id || req.ip
});
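The `RateLimiter` class above is assumed rather than defined. A minimal fixed-window implementation with matching options could look like this sketch; real deployments usually back the counters with Redis so limits hold across instances:

```typescript
// Sketch: fixed-window rate limiter (hypothetical, per-process only).
// Counts reset when the window elapses; allow() returns false once
// maxRequests is exceeded within the current window.
class SimpleRateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private windowMs: number, private maxRequests: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.maxRequests;
  }
}
```

A middleware would call `allow(userId || ip)` per request and respond with HTTP 429 when it returns false.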
Conclusion
The combination of Directus headless CMS and AI unlocks truly dynamic content experiences:
- Content as structured data: Query, filter, and assemble programmatically
- Semantic search: Find content by meaning, not just keywords
- Real-time personalization: Adapt to user behavior and context
- AI-assisted creation: Generate and optimize content at scale
Start with a clear content model, implement RAG for your knowledge base, add personalization based on user behavior, and optimize for performance with strategic caching.
The future of content management isn't static pages — it's intelligent, adaptive experiences that feel personal to every visitor.
Ready to build your intelligent content platform? Contact Tropical Media at tropical-media.work.