
🌟 What This Workflow Does
This n8n workflow automates the entire content research process:
Pulls search data from your Google Search Console
Scrapes Google results for "People Also Ask" questions
Analyzes questions using OpenAI GPT to identify patterns
Clusters insights into coherent content topics
Saves to Notion for your content team
Result: In minutes, raw search data becomes a prioritized content roadmap your team can start executing immediately.
📊 Technical Architecture
Workflow Flow
Schedule Trigger (Weekly)
↓
Google Search Console Node (Fetch site queries, impressions, clicks, rankings)
↓
normalizeSearchConsole Function (Filter by volume, dedup, quality checks)
↓
SearchAPI Integration (Scrape live Google results)
↓
extractPAA Function (Parse FAQ questions from results)
↓
OpenAI GPT-4 Analysis (Cluster questions → topics)
↓
formatForNotion Function (Structure data for database)
↓
Notion Database Write (Save content ideas)
Core Components
Data Ingestion Layer
Google Search Console: Real keyword performance metrics
SearchAPI: Live SERP scraping with People Also Ask data
Built-in deduplication and filtering
Processing Layer
JavaScript function nodes for data transformation
Configurable filtering thresholds (impressions, position)
Fallback mechanisms for incomplete data
AI Analysis Layer
OpenAI GPT-4 for semantic clustering
Temperature: 0 (deterministic, consistent results)
Structured JSON output validation
Question parsing from multiple SERP elements (see the condensed check below)
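For reference, the question detector inside extractPAA boils down to this check (condensed from the full function code in the JSON at the end of this post; English and Bulgarian interrogatives are covered out of the box):

// Condensed from extractPAA: a string is treated as a question if it
// contains "?" or starts with an interrogative word (EN or BG)
function looksLikeQuestion(t) {
  return t.includes('?') ||
    /^(what|how|why|when|where|who|can|do|does|is|are|should|which|will|как|кога|какво|защо|къде|кой|може|дали)\b/i.test(t);
}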
Output Layer
Notion database integration
Structured properties for filtering and sorting
Ready-to-use content cluster format
🔌 Integrations Required
1. Google Search Console (Required)
Connection: OAuth 2.0
Scope: Read-only access to search performance data
Data: Queries, impressions, clicks, CTR, ranking position
2. SearchAPI.io (Required)
Connection: API Key authentication
Purpose: Scrape Google search results and "People Also Ask" sections
Features: Location/language targeting, pagination
Pricing: Free tier available (100 searches/month), paid plans for volume
3. OpenAI API (Required)
Model: GPT-4 (or gpt-3.5-turbo for cost savings)
Purpose: Semantic clustering of questions into content topics
Settings: Temperature 0 for deterministic output
Pricing: ~$0.03-0.10 per workflow execution
4. Notion (Required)
Connection: OAuth 2.0 or API token
Purpose: Store and organize content ideas
Features: Database page creation with properties
Setup: notion.so → Create integration → Share database with bot
5. n8n (Required)
Version: Self-hosted or cloud.n8n.io
Requirements: Node.js 16+ (for self-hosted)
Storage: Minimal (executions stored in database)
Scaling: Runs scheduled 1x/week, no concurrent load
💡 Use Cases
1. Content Gap Analysis 🎯
Scenario: Your SEO team wants to know what content to create next.
How it works:
Workflow pulls all queries ranking #3-10 (striking-distance keywords that are realistic to improve)
Extracts "People Also Ask" questions for those topics
Groups into content clusters by topic
Notion shows which topics have the most questions
Result: Content brief with 15+ article ideas, ranked by search demand
Perfect for: Blog strategy, pillar content planning, niche expansion
2. FAQ Page Generation 📝
Scenario: You need to create a comprehensive FAQ page for your product.
How it works:
Search your brand name + product terms
Capture all "People Also Ask" questions
AI groups similar questions together
Export structured FAQs with answers
Result: Organized FAQ outline ready for content writers
Perfect for: Support pages, product pages, category pages
3. Long-Form Content Outlining 📄
Scenario: Writing a 3,000-word guide on a topic your audience searches for.
How it works:
Pull questions for your target keyword
AI clusters them into natural sections (Introduction, Advanced, Troubleshooting, etc.)
Each cluster becomes an H2 heading with subsections
Questions suggest internal linking opportunities
Result: Content outline grounded in real search demand
Perfect for: Pillar pages, ultimate guides, how-to articles
4. Competitor Content Intelligence 🔍
Scenario: Analyze competitor domains to find content gaps.
How it works:
Run the workflow for competitor keywords
See what questions they're ranking for
Identify questions they DON'T answer (opportunity gap)
Create content around those gaps
Result: Competitive advantage with untapped content opportunities
Perfect for: Competitive positioning, market analysis, niche domination
5. Product FAQ & Support Optimization 🛠️
Scenario: Reduce support tickets by creating answer content.
How it works:
Search for your product category questions
Cluster by support ticket category
Create knowledge base articles for top clusters
Link from product pages to reduce support load
Result: Better UX, fewer support tickets, improved SEO
Perfect for: SaaS products, e-commerce, service providers
6. SEO Strategy Development 📈
Scenario: Build a data-driven content strategy for the quarter.
How it works:
Weekly workflow runs automatically
Accumulate content ideas in Notion over time
Filter by search volume, intent, difficulty
Identify content pillars and supporting articles
Plan quarterly roadmap
Result: Quarter-long content calendar based on real search data
Perfect for: Marketing agencies, in-house teams, content platforms
🗄️ Output Structure
Each item saved to Notion contains:
| Field | Type | Example |
|---|---|---|
| Name (title) | Text | "Starter & Feeding" |
| Keyword | Text | "how to make sourdough" |
| Description | Rich Text | "Guide to creating and maintaining a healthy starter" |
| Intent | Text | "informational" / "transactional" / "navigational" |
| Priority | Number | 1-5 (5 = highest) |
| Questions | Rich Text | "What is sourdough starter?..." |
| Article Title | Text | "Sourdough Starter: Create & Maintain" |
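Concretely, each cluster reaches the Notion node as an item shaped like this (illustrative values; the keys match the formatForNotion output in the workflow JSON below):

{
  "keyword": "how to make sourdough",
  "topic_title": "Starter & Feeding",
  "suggested_article_title": "Sourdough Starter: Create & Maintain",
  "short_description": "Guide to creating and maintaining a healthy starter",
  "intent": "informational",
  "priority": 4,
  "questions": "What is sourdough starter?\nHow often to feed starter?"
}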
🚀 Setup Requirements
Prerequisites (You Need These)
Google Search Console Access
Must be verified property owner
At least 90 days of historical data
Not required: API key (OAuth handles authentication)
SearchAPI.io Account
Free tier: 100 searches/month
Paid tier: $19+ for 10,000 searches/month
Create account: searchapi.io
Get API key from dashboard
OpenAI API Account
Requires paid account (minimum $5 credit)
API key from: platform.openai.com
Estimated cost: $0.03-0.10 per workflow run
Notion Workspace
Create blank database (or duplicate template)
Create internal integration/bot
Generate API token or set up OAuth
n8n Instance
Cloud: Sign up at cloud.n8n.io
Self-hosted: Docker or Node.js installation
Either option works; cloud is simpler for beginners
n8n Setup Steps
Step 1: Create Credentials
In n8n UI:
Credentials → New Credential → Select Service
✓ Google Search Console OAuth2
✓ OpenAI API
✓ SearchAPI
✓ Notion
Step 2: Import Workflow
Import → From File → Select JSON
Choose: PAA_Scraper_Content_Ideas_Generator_SANITIZED.json
Step 3: Configure Nodes
Each node with a ⚙️ icon needs configuration:
Query search analytics:
- Site URL: https://yourwebsite.com
- Page path: /blog/ (or / for entire site)
- Attach: Google Search Console credential
Search google:
- Attach: SearchAPI credential
- Location: Set to your target country
- Language: Set to your target language
Message a model:
- Model: gpt-4 (recommended) or gpt-3.5-turbo
- Temperature: 0 (for consistency)
- Attach: OpenAI credential
Create a database page:
- Database ID: From your Notion URL
- Attach: Notion credential
Step 4: Test
Click "Execute Workflow" button
Check execution logs for errors
Verify data appears in Notion
Once working, click "Activate" to schedule
Configuration Parameters
Data Filtering (in normalizeSearchConsole node)
MIN_IMPRESSIONS = 25 // Only queries with 25+ impressions in the reporting window
MAX_POSITION = 8 // Only queries ranking in top 8
DEDUPE = true // Remove duplicate keywords
Adjust these for your needs (or set them as environment variables; see below):
Blog content: MIN_IMPRESSIONS = 50
FAQ content: MIN_IMPRESSIONS = 10
Competitive analysis: MIN_IMPRESSIONS = 5
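Both thresholds can also be supplied as n8n environment variables; the shipped normalizeSearchConsole code resolves them like this:

const MIN_IMPRESSIONS = (process.env.MIN_IMPRESSIONS !== undefined) ? Number(process.env.MIN_IMPRESSIONS) : 25;
const MAX_POSITION = (process.env.MAX_POSITION !== undefined) ? Number(process.env.MAX_POSITION) : 8; // lower is better (1 is top)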
Schedule Settings
Default: Every Monday at 9 AM
Change in Schedule Trigger node:
cron: "0 9 * * 1" // Monday 9 AM
Other options:
"0 0 * * *" // Daily at midnight
"0 9 * * 1-5" // Weekdays at 9 AM
"0 */6 * * *" // Every 6 hours🎨 Customization Options
1. Customize the Clustering Prompt
In the "Message a model" node, edit the system prompt:
Default: Clusters questions into 3-5 topic groups
Custom Examples:
For Blog Content (detailed outlines):
"Task: Create detailed content outlines with H2 and H3 sections..."
// Adds: suggested_outline_structure, word_count_estimate
For FAQ (short answers):
"Task: Group questions for concise FAQ answers..."
// Adds: answer_length, difficulty_level
For Product Documentation:
"Task: Organize questions by product feature..."
// Adds: feature_category, use_case, priority_for_docs
2. Filter by Search Intent
Add filtering in the AI analysis to focus on specific intent types:
// In formatForNotion node, add this just before the final return:
const intentsToKeep = ["informational", "navigational"];
out = out.filter(item => intentsToKeep.includes(item.json.intent));
Intent Types:
Informational: "How to", "What is", "Why" (roughly 70% of searches)
Transactional: "Buy", "Price", "Reviews" (roughly 15%)
Navigational: "Brand + feature", "Site search" (roughly 15%)
3. Add Competitor Domain Analysis
Instead of your own GSC, analyze competitor keywords:
// In normalizeSearchConsole, replace with:
const competitorKeywords = [
"how to make sourdough",
"sourdough starter maintenance",
"sourdough troubleshooting"
];
// Then search for these keywords via SearchAPI
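A minimal sketch of that replacement, assuming the downstream nodes stay unchanged: the "Search google" node reads $json.query, so each seed keyword is emitted in the same item shape normalizeSearchConsole normally produces (GSC metrics stubbed out, since none exist for a competitor):

// Replace the body of normalizeSearchConsole with a hand-picked seed list
const competitorKeywords = [
  "how to make sourdough",
  "sourdough starter maintenance",
  "sourdough troubleshooting"
];

// One item per keyword, matching the fields the downstream nodes expect
return competitorKeywords.map(q => ({
  json: { query: q, clicks: 0, impressions: 0, ctr: 0, position: null, source: 'manual' }
}));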
In the "Search google" node:
// Current: Bulgarian (bg)
"languageSettings": { "hl": "bg" },
"locationSettings": { "gl": "bg" }
// Change to your target language:
"hl": "en", // English
"gl": "us", // United States
// Other options:
"hl": "es", "gl": "es" // Spanish
"hl": "de", "gl": "de" // German
"hl": "fr", "gl": "fr" // French6. Custom Notion Properties
Edit the "Create a database page" node to save additional fields:
Add Trend Data:
"trend_direction": "{{ ascending | descending }}",
"recent_momentum": "{{ positive | neutral | negative }}"Add Competition Data:
"competitor_coverage": "high | medium | low",
"gap_opportunity": "{{ percentage }}"Add Creation Metadata:
"content_type": "blog | faq | guide | tutorial",
"estimated_length": "{{ word_count }}",
"required_visuals": "{{ image_count }}"7. Extend with Slack Notifications
6. Extend with Slack Notifications
Add a Slack node after the Notion write to notify your team:
New node: Slack → Send Message
Message: "📝 {{ $json.topic_title }} - Priority {{ $json.priority }}"
Channel: #content-ideas
Frequency: Only send when priority >= 4
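That gate can be n8n's IF node, or a small Function node placed between the Notion write and Slack; a minimal sketch of the latter:

// Pass through only high-priority clusters; everything else is dropped
return items.filter(i => Number(i.json.priority) >= 4);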
7. Add Email Report Generation
Instead of just saving to Notion, email a weekly summary:
New node: Gmail → Send Email
To: [email protected] (your team's address)
Subject: "Weekly Content Ideas - {{ items_count }} new topics"
Body: "Priority topics this week: [list]"
Include: HTML table with top 10 ideas
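One way to assemble that body is a Function node placed before the Gmail node; a sketch under the assumption that your email node maps an emailBody field into the message (items_count and emailBody are hypothetical field names):

// Sort clusters by priority and render the top 10 as an HTML table
const top = items
  .map(i => i.json)
  .sort((a, b) => b.priority - a.priority)
  .slice(0, 10);

const rows = top
  .map(t => `<tr><td>${t.topic_title}</td><td>${t.priority}</td><td>${t.intent}</td></tr>`)
  .join('');

return [{
  json: {
    items_count: items.length,
    emailBody: `<table><tr><th>Topic</th><th>Priority</th><th>Intent</th></tr>${rows}</table>`
  }
}];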
🏆 Success Tips
✅ Start small: Run on a single page/topic first, expand later
✅ Monitor costs: Check SearchAPI and OpenAI usage monthly
✅ Iterate prompts: If clustering isn't right, tweak the AI prompt
✅ Establish a workflow: team reviews Notion → writers create → track progress
✅ Measure results: Track which content ideas get best engagement
✅ Optimize schedule: Run when your team reviews (e.g., Monday morning)
✅ Archive old ideas: Keep Notion clean by archiving completed tasks
Full JSON Workflow:
{
"name": "People Also Ask (PAA) Scraper + Content Ideas Generator",
"nodes": [
{
"parameters": {
"rule": {
"interval": [
{}
]
}
},
"name": "Schedule Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1,
"position": [
208,
-112
],
"id": "10d286d2-d473-4fb6-9be2-857e92f99890"
},
{
"parameters": {
"functionCode": "// normalizeSearchConsole - robust + dedupe + configurable filters\n// Edit the threshold values below or set them as environment variables in n8n\nconst MIN_IMPRESSIONS = (process.env.MIN_IMPRESSIONS !== undefined) ? Number(process.env.MIN_IMPRESSIONS) : 25;\nconst MAX_POSITION = (process.env.MAX_POSITION !== undefined) ? Number(process.env.MAX_POSITION) : 8; // lower is better (1 is top)\nconst DEDUPE = true; // set to false if you want duplicates preserved\n\n// Gather incoming jsons\nconst inputJsons = items.map(i => i.json);\nlet rows = [];\n\nif (inputJsons.length === 1 && Array.isArray(inputJsons[0].rows)) {\n // shape: { rows: [ { keys: [...], clicks, impressions, ctr, position }, ... ] }\n rows = inputJsons[0].rows.map(r => ({\n query: Array.isArray(r.keys) ? r.keys[0] : r.keys,\n clicks: r.clicks ?? 0,\n impressions: r.impressions ?? 0,\n ctr: r.ctr ?? 0,\n position: r.position ?? null\n }));\n} else {\n // shape: [ { query, clicks, impressions, ctr, position }, ... ] OR similar flattened items\n rows = inputJsons.map(r => ({\n query: r.query ?? (Array.isArray(r.keys) ? r.keys[0] : r.keys) ?? null,\n clicks: r.clicks ?? 0,\n impressions: r.impressions ?? 0,\n ctr: r.ctr ?? 0,\n position: r.position ?? null\n }));\n}\n\n// Normalize strings and filter out empty queries\nrows = rows.map(r => ({\n query: typeof r.query === 'string' ? r.query.trim() : (r.query == null ? '' : String(r.query).trim()),\n clicks: Number(r.clicks) || 0,\n impressions: Number(r.impressions) || 0,\n ctr: Number(r.ctr) || 0,\n position: (r.position === null || r.position === undefined) ? null : Number(r.position)\n})).filter(r => r.query.length > 0);\n\n// Apply dedupe\nif (DEDUPE) {\n const seen = new Set();\n rows = rows.filter(r => {\n const q = r.query.toLowerCase();\n if (seen.has(q)) return false;\n seen.add(q);\n return true;\n });\n}\n\n// Apply metric filters: keep only rows with impressions >= MIN_IMPRESSIONS and position <= MAX_POSITION (if position exists)\nrows = rows.filter(r => {\n if (r.impressions < MIN_IMPRESSIONS) return false;\n if (r.position !== null && !Number.isNaN(r.position) && r.position > MAX_POSITION) return false;\n return true;\n});\n\n// Build output items\nconst out = rows.map(r => ({\n json: {\n query: r.query,\n clicks: r.clicks,\n impressions: r.impressions,\n ctr: r.ctr,\n position: r.position,\n source: 'search-console'\n }\n}));\n\nreturn out;\n"
},
"name": "normalizeSearchConsole",
"type": "n8n-nodes-base.function",
"typeVersion": 1,
"position": [
656,
-112
],
"id": "10cf3e74-3213-45d9-8cc9-d00182516837",
"alwaysOutputData": true
},
{
"parameters": {
"functionCode": "const input = items[0].json || {};\nconst out = {...input};\n\n// Determine position (try multiple places)\nlet position = input.position ?? input.search_information?.position ?? input.current ?? null;\nif ((position === undefined || position === null) && Array.isArray(input.organic_results) && input.organic_results.length > 0) {\n const first = input.organic_results[0];\n position = (typeof first.position === 'number') ? first.position : (first.position ? Number(first.position) : 1);\n}\nif (position === undefined) position = null;\n\n// Extract questions (same logic you already have)\nconst candidates = [];\nfunction pushIfQuestion(text) {\n if (!text || typeof text !== 'string') return;\n const t = text.trim();\n if (t.length < 5) return;\n if (t.includes('?') || /^(what|how|why|when|where|who|can|do|does|is|are|should|which|will|как|кога|какво|защо|къде|кой|може|дали)\\b/i.test(t)) {\n candidates.push(t);\n return;\n }\n if (t.length < 80 && /^[\\w\\s\\p{L}]{3,80}$/u.test(t) && /\\b(как|кога|какво|защо|къде|кой|може|дали)\\b/i.test(t)) {\n candidates.push(t);\n }\n}\n\nif (Array.isArray(input.related_searches)) {\n for (const r of input.related_searches) {\n if (typeof r === 'string') pushIfQuestion(r);\n else if (r && r.query) pushIfQuestion(r.query);\n else if (r && r.title) pushIfQuestion(r.title);\n }\n}\nif (input.answer_box) {\n if (typeof input.answer_box === 'object') {\n pushIfQuestion(input.answer_box.question || input.answer_box.title || input.answer_box.snippet);\n } else if (typeof input.answer_box === 'string') pushIfQuestion(input.answer_box);\n}\nif (input.ai_overview) {\n const ao = input.ai_overview;\n if (Array.isArray(ao)) {\n for (const a of ao) pushIfQuestion(typeof a === 'string' ? a : (a.title || a.snippet || a.text));\n } else pushIfQuestion(ao.title || ao.snippet || ao.text || ao);\n}\nif (Array.isArray(input.organic_results)) {\n for (const res of input.organic_results) {\n if (res.title) pushIfQuestion(res.title);\n if (res.snippet) pushIfQuestion(res.snippet);\n }\n}\n\nlet uniq = Array.from(new Set(candidates.map(s => s.trim())));\nif (uniq.length === 0 && Array.isArray(input.organic_results)) {\n for (const res of input.organic_results.slice(0, 10)) {\n if (res.title && res.title.length < 160) uniq.push(res.title.trim());\n if (uniq.length >= 10) break;\n }\n}\nconst questions = uniq.slice(0, 50);\nif (questions.length === 0 && (input.search_parameters?.q || input.q || input.query_displayed)) {\n const seed = input.search_parameters?.q || input.q || input.query_displayed;\n questions.push(`What does \"${seed}\" mean?`);\n}\n\nconst resultItem = {\n json: {\n ...out,\n position,\n questions\n }\n};\n\nreturn [ resultItem ];"
},
"name": "extractPAA",
"type": "n8n-nodes-base.function",
"typeVersion": 1,
"position": [
1104,
-176
],
"id": "270b3985-1724-4c85-ba45-fb392ce23573"
},
{
"parameters": {
"functionCode": "/* formatForNotion — robust JSON extraction + keyword fallback */\n\nfunction extractJSONFromText(text) {\n const firstBracket = Math.min(\n ...['[','{'].map(ch => {\n const idx = text.indexOf(ch);\n return idx >= 0 ? idx : Infinity;\n })\n );\n if (firstBracket === Infinity) throw new Error(\"No JSON bracket found in text. Preview:\\n\" + (text||'').slice(0,1000));\n const openChar = text[firstBracket];\n const closeChar = openChar === '[' ? ']' : '}';\n const lastBracket = text.lastIndexOf(closeChar);\n if (lastBracket <= firstBracket) throw new Error(\"No matching close bracket in text. Preview:\\n\" + (text||'').slice(0,1000));\n const candidate = text.slice(firstBracket, lastBracket + 1);\n return JSON.parse(candidate);\n}\n\nfunction getRawTextFromItem(itemJson) {\n if (!itemJson) return '';\n // openai-like shapes\n if (itemJson.choices && Array.isArray(itemJson.choices) && itemJson.choices[0]) {\n const ch = itemJson.choices[0];\n if (ch.message && ch.message.content) {\n if (typeof ch.message.content === 'string') return ch.message.content;\n if (Array.isArray(ch.message.content)) {\n for (const seg of ch.message.content) if (typeof seg === 'string') return seg;\n for (const seg of ch.message.content) if (seg && typeof seg.text === 'string') return seg.text;\n }\n }\n if (typeof ch.text === 'string') return ch.text;\n // fallback stringify\n try { return JSON.stringify(ch); } catch(e){}\n }\n // other shape: output[0].content[0].text\n if (itemJson.output && Array.isArray(itemJson.output) && itemJson.output[0]) {\n const o0 = itemJson.output[0];\n if (o0.content && Array.isArray(o0.content) && o0.content[0]) {\n const c0 = o0.content[0];\n if (typeof c0.text === 'string') return c0.text;\n if (typeof c0 === 'string') return c0;\n }\n if (typeof o0.content === 'string') return o0.content;\n }\n // fallback: if any property looks like JSON string, return it\n for (const k of Object.keys(itemJson)) {\n const v = itemJson[k];\n if (typeof v === 'string' && (v.trim().startsWith('[') || v.trim().startsWith('{'))) return v;\n }\n // last resort\n try { return JSON.stringify(itemJson); } catch(e){ return ''; }\n}\n\n// -------------- begin processing -------------------\nconst itemJson = items[0] && items[0].json ? 
items[0].json : {};\nconst rawText = getRawTextFromItem(itemJson);\n\n// parse model output\nlet parsed;\ntry {\n parsed = extractJSONFromText(String(rawText));\n} catch (err) {\n const preview = {\n availableKeys: Object.keys(itemJson || {}).slice(0,50),\n rawPreview: String(rawText).slice(0,1200)\n };\n throw new Error('Failed to extract JSON from model output: ' + err.message + '\\nPreview keys: ' + JSON.stringify(preview));\n}\n\n// normalize clusters\nlet clusters;\nif (Array.isArray(parsed)) clusters = parsed;\nelse if (parsed && Array.isArray(parsed.clusters)) clusters = parsed.clusters;\nelse if (parsed && typeof parsed === 'object') clusters = [parsed];\nelse throw new Error('Parsed JSON not array/object: ' + typeof parsed);\n\n// try to discover keyword from parsed or from other places\nlet parsedKeyword = null;\n// if model returned top-level keyword or clusters contain keyword\nif (parsed && typeof parsed === 'object') {\n if (parsed.keyword && typeof parsed.keyword === 'string') parsedKeyword = parsed.keyword;\n else if (Array.isArray(parsed.clusters) && parsed.clusters.length > 0 && parsed.clusters[0].keyword) parsedKeyword = parsed.clusters[0].keyword;\n}\nif (!parsedKeyword && Array.isArray(clusters) && clusters.length > 0 && clusters[0].keyword) parsedKeyword = clusters[0].keyword;\n\n// fallback: try to pull from the inbound item (if upstream merging preserved them)\nlet inboundKeyword = itemJson.query || itemJson.q || (itemJson.search_parameters && itemJson.search_parameters.q) || itemJson.query_displayed || itemJson.keyword || null;\n\n// final resolvedKeyword function\nfunction resolveKeyword(c) {\n // cluster-level keyword > parsedKeyword > inboundKeyword > null\n return (c && c.keyword) || parsedKeyword || inboundKeyword || null;\n}\n\n// build output for Notion\nconst out = [];\nfor (const c of clusters) {\n out.push({\n json: {\n keyword: resolveKeyword(c),\n topic_title: c.topic_title || c.topic || null,\n suggested_article_title: c.suggested_article_title || c.title || null,\n short_description: c.short_description || c.summary || null,\n intent: c.intent || 'informational',\n priority: Number.isFinite(Number(c.priority)) ? Number(c.priority) : 3,\n questions: Array.isArray(c.questions) ? c.questions.join('\\n') : String(c.questions || '')\n }\n });\n}\n\nreturn out;\n"
},
"name": "formatForNotion",
"type": "n8n-nodes-base.function",
"typeVersion": 1,
"position": [
1904,
-112
],
"id": "13c286bd-30a9-4ac1-8c52-abdc1e3ee53f"
},
{
"parameters": {},
"name": "NoOp End",
"type": "n8n-nodes-base.noOp",
"typeVersion": 1,
"position": [
2352,
-112
],
"id": "2654231a-cc0f-4ef7-b877-f2c7608b0668"
},
{
"parameters": {
"operation": "getPageInsights",
"siteUrl": "{{ $env.GSC_SITE_URL }}",
"dateRangeMode": "last3mo",
"rowLimit": 50,
"dimensions": [
"query"
],
"filters": {
"filter": [
{
"dimension": "page",
"expression": "{{ $env.GSC_PAGE_PATH }}"
}
]
}
},
"type": "n8n-nodes-google-search-console.googleSearchConsole",
"typeVersion": 1,
"position": [
432,
-112
],
"id": "14ce48f7-0a85-4a20-9264-fdc9c1fb229a",
"name": "Query search analytics",
"credentials": {
"googleSearchConsoleOAuth2Api": {
"id": "{{ $env.GSC_CREDENTIAL_ID }}",
"name": "Google Search Console account"
}
}
},
{
"parameters": {
"modelId": {
"__rl": true,
"value": "gpt-4",
"mode": "list",
"cachedResultName": "GPT-4"
},
"responses": {
"values": [
{
"content": "=Input: {\n \"keyword\": \"{{ $json.search_parameters.q }}\",\n \"questions\": {{ JSON.stringify($json.questions || []) }},\n \"metrics\": {\n \"clicks\": {{ $json.clicks || 0 }},\n \"impressions\": {{ $json.impressions || 0 }},\n \"ctr\": {{ $json.ctr || 0 }},\n \"position\": {{ $json.position === null ? \"null\" : $json.position }}\n }\n}\n\nTask: Return strictly valid JSON — a root-level ARRAY named \"clusters\" (i.e. return only: [ {...}, {...} ]). For every cluster object, INCLUDE the same keyword from the Input exactly as provided, using the key \"keyword\". Each cluster object must have:\n\n- \"keyword\": string (must match Input.keyword)\n- \"topic_title\": string\n- \"questions\": [string,...]\n- \"suggested_article_title\": string\n- \"short_description\": string (20–40 words)\n- \"intent\": one of [\"informational\",\"transactional\",\"navigational\"]\n- \"priority\": integer 1-5\n\nConstraints:\n1. Return **only** the JSON array — no prose, no markdown.\n2. The model must echo Input.keyword into every cluster as the \"keyword\" field.\n3. Keep output compact and valid JSON.\n\nExample cluster element:\n{\n \"keyword\":\"how to make sourdough\",\n \"topic_title\":\"Starter & feeding\",\n \"questions\":[\"What is sourdough starter?\",\"How often to feed starter?\"],\n \"suggested_article_title\":\"Sourdough Starter: Create & Maintain a Healthy Starter\",\n \"short_description\":\"A concise how-to guide to start and maintain a healthy sourdough starter for reliable loaves.\",\n \"intent\":\"informational\",\n \"priority\":4\n}\n"
},
{
"role": "system",
"content": "You are an assistant that clusters FAQ-style questions into coherent topic groups and returns valid JSON only. Do not emit any explanation, prose, or extra characters — only the JSON array requested. Be concise, factual, and consistent.\n"
}
]
},
"builtInTools": {},
"options": {
"temperature": 0
}
},
"type": "@n8n/n8n-nodes-langchain.openAi",
"typeVersion": 2,
"position": [
1328,
-176
],
"id": "ed351e67-e7b0-4ebe-a533-fc614ee11337",
"name": "Message a model",
"alwaysOutputData": false,
"credentials": {
"openAiApi": {
"id": "{{ $env.OPENAI_CREDENTIAL_ID }}",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"resource": "databasePage",
"databaseId": {
"__rl": true,
"value": "{{ $env.NOTION_DATABASE_ID }}",
"mode": "list",
"cachedResultName": "New Topics",
"cachedResultUrl": "https://www.notion.so/NOTION_DATABASE_ID"
},
"title": "=New Topics",
"propertiesUi": {
"propertyValues": [
{
"key": "Keyword|rich_text",
"textContent": "={{ $json.keyword }}"
},
{
"key": "Name|title",
"title": "={{ $json.topic_title }}"
},
{
"key": "Description|rich_text",
"textContent": "={{ $json.short_description }}"
},
{
"key": "Intent|rich_text",
"textContent": "={{ $json.intent }}"
},
{
"key": "Priority|rich_text",
"textContent": "={{ $json.priority.toString() }}"
},
{
"key": "Questions|rich_text",
"textContent": "={{ $json.questions }}"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.notion",
"typeVersion": 2.2,
"position": [
2128,
-112
],
"id": "10d06184-c37b-4aba-a0b4-fdfcab071c89",
"name": "Create a database page",
"credentials": {
"notionApi": {
"id": "{{ $env.NOTION_CREDENTIAL_ID }}",
"name": "Notion account"
}
}
},
{
"parameters": {
"q": "={{ $json.query }}",
"locationSettings": {
"gl": "bg"
},
"languageSettings": {
"hl": "bg"
},
"searchOptions": {},
"timeFilters": {},
"pagination": {
"num": "10"
},
"advancedOptions": {},
"requestOptions": {}
},
"type": "@searchapi/n8n-nodes-searchapi.searchApi",
"typeVersion": 1,
"position": [
880,
-112
],
"id": "fbf6db7d-d924-4812-8a7e-d504450f3d87",
"name": "Search google",
"credentials": {
"searchApi": {
"id": "{{ $env.SEARCHAPI_CREDENTIAL_ID }}",
"name": "SearchApi account"
}
}
},
{
"parameters": {},
"type": "n8n-nodes-base.merge",
"typeVersion": 3.2,
"position": [
1680,
-112
],
"id": "d30f9e5e-a649-4f43-85b5-b05113c72f20",
"name": "Merge"
}
],
"pinData": {},
"connections": {
"Schedule Trigger": {
"main": [
[
{
"node": "Query search analytics",
"type": "main",
"index": 0
}
]
]
},
"normalizeSearchConsole": {
"main": [
[
{
"node": "Search google",
"type": "main",
"index": 0
}
]
]
},
"formatForNotion": {
"main": [
[
{
"node": "Create a database page",
"type": "main",
"index": 0
}
]
]
},
"Query search analytics": {
"main": [
[
{
"node": "normalizeSearchConsole",
"type": "main",
"index": 0
}
]
]
},
"extractPAA": {
"main": [
[
{
"node": "Message a model",
"type": "main",
"index": 0
}
]
]
},
"Message a model": {
"main": [
[
{
"node": "Merge",
"type": "main",
"index": 0
}
]
]
},
"Create a database page": {
"main": [
[
{
"node": "NoOp End",
"type": "main",
"index": 0
}
]
]
},
"Search google": {
"main": [
[
{
"node": "extractPAA",
"type": "main",
"index": 0
},
{
"node": "Merge",
"type": "main",
"index": 1
}
]
]
},
"Merge": {
"main": [
[
{
"node": "formatForNotion",
"type": "main",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "1.0.0-sanitized",
"meta": {
"templateCredsSetupCompleted": false
},
"tags": [
{
"id": "seo",
"name": "SEO"
},
{
"id": "content-research",
"name": "Content Research"
},
{
"id": "ai-analysis",
"name": "AI Analysis"
}
]
}

Your content research is about to get a lot faster. Let's go! 🚀
