
How to Build an AI Writing Assistant Using a Chain of Workers

Combine research, summarization, and generation workers in a pipeline to produce high-quality written content from a single topic prompt.

Seek API Team

Large language models are great at generating text. They’re less great at generating accurate, current, well-researched text — because they don’t know what happened last week, and they hallucinate with confident fluency.

The pattern that fixes this: chain a retrieval step before the generation step. Pull real data, feed it to the model as context, generate grounded output.

This is exactly what a worker pipeline enables — without you managing distributed services.

The architecture

Topic prompt
    ↓
[Search Worker] → finds relevant articles, sources, data
    ↓
[Content Extractor Worker] → scrapes full text from those URLs
    ↓
[Summarization Worker] → condenses each source into key points
    ↓
[LLM Worker] → generates the article with source context injected
    ↓
Final written output

Four workers chained sequentially. Each receives the output of the previous step as its input.
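
In code, "chained sequentially" just means each async step awaits the previous one's output. A minimal sketch with stand-in step functions — `runPipeline` is our own illustration, not part of any API:

```javascript
// Generic sequential chain: each async step receives the previous
// step's return value. The steps here are placeholders, not real workers.
const runPipeline = (steps) => async (input) => {
  let value = input;
  for (const step of steps) {
    value = await step(value); // one worker's output feeds the next
  }
  return value;
};

// Stand-in usage mirroring the four-stage shape:
runPipeline([
  async (topic) => `sources for ${topic}`,
  async (sources) => `summaries of ${sources}`,
])('creator economy').then(console.log);
```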

Step 1: Research

Start with a Google search worker to find the top sources for your topic:

POST /v1/workers/google-search/jobs
{
  "query": "latest trends in creator economy 2026",
  "limit": 8,
  "filter": "news"
}
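
The snippets in the next step read `results[i].url`, `results[i].title`, and so on off this response. An illustrative sketch of that assumed shape — field names are inferred from the snippets, not from documented output, and the values are made up:

```javascript
// Illustrative search-worker result, inferred from how later
// snippets consume it. Values and field names are guesses.
const searchResult = {
  status: 'completed',
  results: [
    {
      url: 'https://example.com/creator-economy-2026',
      title: 'Creator economy trends to watch',
      snippet: 'A look at where creator-led growth is heading…',
      publishedAt: '2026-01-15',
    },
    // …one entry per result, up to the requested limit
  ],
};
```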

Returns up to 8 URLs with titles, snippets, and publish dates. These become the sources for your article.
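
Every step from here on calls two small helpers, submitJob and waitForJob. A minimal sketch of what they might look like, assuming a submit-then-poll job API — the endpoint paths, auth header, and status values are assumptions, not the documented surface:

```javascript
// Helpers assumed by the snippets in this article. Endpoint paths,
// auth header, and job statuses are illustrative guesses at a
// submit-then-poll job API, not the documented surface.
const BASE_URL = 'https://api.example.com/v1'; // placeholder host

async function submitJob(worker, payload, { fetchImpl = fetch } = {}) {
  const res = await fetchImpl(`${BASE_URL}/workers/${worker}/jobs`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.SEEKAPI_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`submit failed: ${res.status}`);
  return res.json(); // assumed to include { job_uuid }
}

async function waitForJob(jobUuid, { fetchImpl = fetch, intervalMs = 1000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetchImpl(`${BASE_URL}/jobs/${jobUuid}`, {
      headers: { Authorization: `Bearer ${process.env.SEEKAPI_KEY}` },
    });
    const job = await res.json();
    if (job.status === 'completed') return job.result; // worker output
    if (job.status === 'failed') throw new Error(`job ${jobUuid} failed`);
    await new Promise((r) => setTimeout(r, intervalMs)); // poll again
  }
  throw new Error(`job ${jobUuid} timed out`);
}
```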

Step 2: Extract full content

Feed those URLs to a content extractor to get the full article text:

const sources = searchResult.results; // [{url, title, snippet}]

// Submit extraction jobs for all 8 URLs in parallel
const extractionJobs = await Promise.all(
  sources.map(source =>
    submitJob('webpage-extractor', { url: source.url, extractMainContent: true })
  )
);

const extractedTexts = await Promise.all(
  extractionJobs.map(j => waitForJob(j.job_uuid))
);
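
One practical note: Promise.all rejects the whole batch if a single URL fails to extract (paywalls, timeouts, dead links). Promise.allSettled keeps whichever extractions succeeded — a sketch, with extractPage standing in for the submit-and-wait pair above:

```javascript
// Keep successful extractions instead of failing the whole batch.
// extractPage is a stand-in for submitting an extraction job and
// waiting for its result.
async function extractAll(urls, extractPage) {
  const settled = await Promise.allSettled(urls.map(extractPage));
  return settled
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);
}
```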

Step 3: Summarize each source

Full article text is often too long to fit into an LLM’s context window. Summarize each one to ~200 words:

const summaryJobs = await Promise.all(
  extractedTexts.map(text =>
    submitJob('summarizer', {
      text: text.content,
      targetWords: 200,
      style: 'bullet_points'
    })
  )
);

const summaries = await Promise.all(summaryJobs.map(j => waitForJob(j.job_uuid)));
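
The ~200-word target is a context-budget decision. Rough arithmetic, using the common ~0.75 words-per-token rule of thumb (actual tokenization varies by model):

```javascript
// Back-of-envelope context budget: 8 sources × ~200-word summaries.
// 1 token ≈ 0.75 English words is a rule of thumb, not exact.
const sourceCount = 8;
const wordsPerSummary = 200;
const totalWords = sourceCount * wordsPerSummary; // 1,600 words
const approxTokens = Math.ceil(totalWords / 0.75); // ≈ 2,134 tokens of research context
```

That leaves plenty of room in the prompt for instructions and in the output window for the article itself.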

Step 4: Generate the article

Compose the prompt and run the LLM worker:

const context = summaries.map((s, i) => (
  `Source ${i + 1}: ${sources[i].title}\n${s.summary}`
)).join('\n\n');

const articleJob = await submitJob('llm-generate', {
  model: 'gpt-4o',
  prompt: `You are a tech writer for a developer-focused publication.
Write a 1,200-word article titled: "${TOPIC}"

Use the following research sources to ground your writing. Include specific
facts, numbers, and examples from the sources. Do not hallucinate statistics.

--- RESEARCH ---
${context}
--- END RESEARCH ---

Article:`,
  maxTokens: 1800,
  temperature: 0.7
});

const article = await waitForJob(articleJob.job_uuid);
console.log(article.text);

Full pipeline implementation

const SEEKAPI_KEY = process.env.SEEKAPI_KEY;
const TOPIC = "How the creator economy is changing SaaS marketing in 2026";

async function generateResearchedArticle(topic) {
  console.log(`Researching: "${topic}"...`);
  
  // Step 1: Search
  const searchJob = await submitJob('google-search', {
    query: topic, limit: 6, filter: 'news'
  });
  const search = await waitForJob(searchJob.job_uuid);
  
  // Step 2: Extract in parallel
  const extractJobs = await Promise.all(
    search.results.map(r => submitJob('webpage-extractor', { url: r.url }))
  );
  const extracted = await Promise.all(extractJobs.map(j => waitForJob(j.job_uuid)));
  
  // Step 3: Summarize in parallel
  const summaryJobs = await Promise.all(
    extracted.map(e => submitJob('summarizer', { text: e.content, targetWords: 150 }))
  );
  const summaries = await Promise.all(summaryJobs.map(j => waitForJob(j.job_uuid)));

  // Step 4: Generate
  const context = summaries.map((s, i) => `[${search.results[i].title}]\n${s.summary}`).join('\n\n');
  const genJob = await submitJob('llm-generate', {
    model: 'gpt-4o',
    prompt: buildPrompt(topic, context), // composes the Step 4 prompt: role, title, research context
    maxTokens: 2000
  });
  const generated = await waitForJob(genJob.job_uuid);

  return {
    topic,
    sources: search.results.map(r => r.url),
    article: generated.text
  };
}

const result = await generateResearchedArticle(TOPIC);

Typical runtime: ~25–40 seconds per article. Typical cost: ~$0.05–0.08.

Extending the pipeline

Add a fact-check step: After generation, run a worker that cross-references each statistic against the source documents and flags suspicious claims.

Add SEO optimization: Pass the generated article to an SEO worker that adds keyword density suggestions, recommends headings, and proposes internal link opportunities.

Add multi-language output: Submit the article to a translation worker for 3–5 additional language versions.

Add a publish step: Use a CMS API worker to publish directly to WordPress, Ghost, or Webflow without leaving the pipeline.
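
Of these, the fact-check step is the easiest to prototype: pull out the sentences that make numeric claims, then hand each claim plus the research context to a verification worker. A sketch of the extraction half — the sentence-splitting heuristic is ours, and the downstream worker call would be your own:

```javascript
// Naive numeric-claim extraction: split into sentences, keep those
// containing digits. A real fact-checker would then send each claim
// plus the source summaries to a verification worker.
function extractNumericClaims(article) {
  return article
    .split(/(?<=[.!?])\s+/) // rough sentence split on terminal punctuation
    .filter((sentence) => /\d/.test(sentence));
}
```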

When this is better than ChatGPT directly

                   Direct LLM             Worker Pipeline
Knowledge cutoff   Static training data   Live research
Sources            Hallucinated           Verified URLs
Control            Limited                Full (each step inspectable)
Automatable        Requires prompt UI     Fully API-driven
Scalable           Rate limited           Parallel

If you’re generating one article per week manually, ChatGPT is fine. If you’re generating 100 articles per month programmatically, worker pipelines are the better choice.