Most developers have a graveyard of useful scripts. A scraper that worked well for a specific project. A data transformation utility. A PDF parser. A web tech detector. Things that took hours to build, were immediately useful, and then just sat in a private repo doing nothing.
Those scripts are money left on the table.
The idea: workers as products
A worker is a function with a defined input schema, a deterministic output, and a price per run. When you publish a worker on Seek API, it becomes:
- Accessible via a clean REST API
- Callable by anyone with an API key
- Automatically billed per execution
- Revenue-generating without any action from you
You write the code once. Every time someone calls it, you earn 70% of the run fee you set. Seek API handles the infrastructure — execution, scaling, authentication, billing — and takes 30%.
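From the caller's side, running a worker is a single authenticated HTTP POST. A minimal sketch of what that request looks like — the base URL, auth header, and body shape here are assumptions for illustration, not the documented Seek API contract:

```python
import json

# Hypothetical endpoint shape -- the real URL and header names come from
# the Seek API docs for your account, not from this sketch.
BASE_URL = "https://api.seekapi.example/v1/workers"

def build_run_request(worker: str, inputs: dict, api_key: str) -> dict:
    """Assemble the HTTP request a caller would send to run a worker."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/{worker}/run",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"inputs": inputs}),
    }

req = build_run_request(
    "website-tech-detector", {"url": "https://example.com"}, "sk_test_123"
)
print(req["url"])  # https://api.seekapi.example/v1/workers/website-tech-detector/run
```

Any HTTP client can then send this request; there is nothing worker-specific for the caller to install.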
What makes a good worker?
The best workers on the marketplace share common traits:
They solve a specific, recurring problem. “Validate this email” or “extract this LinkedIn profile” — not “do something with this data.”
They have fast, predictable run times. Under 5 seconds is ideal. Users want results, not uncertainty.
They return clean, typed JSON. No markdown. No prose. Structured data that goes directly into a pipeline.
They don’t require setup. The magic of a marketplace worker is zero setup for the caller. No API key rotation, no service credentials, no config. Just one HTTP call.
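"Clean, typed JSON" is easy to check mechanically. A small sketch of the kind of shape check you might run on your own worker's output before publishing — the expected keys and types here are just an example, matching a tech-detector-style result:

```python
# Sketch: verify a worker's result is flat, typed JSON a caller can pipe
# straight into a pipeline. The expected shape below is an example only.
EXPECTED_TYPES = {"cms": str, "analytics": list, "cdn": str, "frameworks": list}

def check_output_shape(result: dict, expected: dict) -> list[str]:
    """Return a list of problems; an empty list means the output is clean."""
    problems = []
    for key, typ in expected.items():
        if key not in result:
            problems.append(f"missing key: {key}")
        elif not isinstance(result[key], typ):
            problems.append(f"{key} should be {typ.__name__}")
    return problems

sample = {"cms": "WordPress", "analytics": ["GA4"], "cdn": "Cloudflare", "frameworks": []}
print(check_output_shape(sample, EXPECTED_TYPES))  # []
```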
Packaging your script as a worker
Seek API workers can be written in Node.js or Python. You need three files:
worker.yaml — the manifest:
```yaml
name: "website-tech-detector"
description: "Detect the tech stack of any website"
runtime: python
inputs:
  url:
    type: string
    required: true
    description: "The website URL to analyze"
outputs:
  cms: string
  analytics: array
  cdn: string
  frameworks: array
pricePerRun: 0.003
```
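It's cheap to sanity-check the manifest fields before deploying. A sketch in plain Python — the required keys and accepted runtime values (`python`, `node`) are inferred from the example manifest and the runtimes mentioned in this post, not from official docs:

```python
# The manifest above, as a Python dict, plus a quick pre-deploy sanity check.
# Required keys and runtime values are assumptions based on the example.
manifest = {
    "name": "website-tech-detector",
    "description": "Detect the tech stack of any website",
    "runtime": "python",
    "inputs": {
        "url": {"type": "string", "required": True,
                "description": "The website URL to analyze"},
    },
    "outputs": {"cms": "string", "analytics": "array",
                "cdn": "string", "frameworks": "array"},
    "pricePerRun": 0.003,
}

def manifest_problems(m: dict) -> list[str]:
    """Return a list of obvious manifest mistakes; empty means plausible."""
    problems = []
    for key in ("name", "runtime", "inputs", "outputs", "pricePerRun"):
        if key not in m:
            problems.append(f"missing: {key}")
    if m.get("runtime") not in ("python", "node"):
        problems.append("runtime must be python or node")
    if not isinstance(m.get("pricePerRun"), (int, float)) or m.get("pricePerRun", 0) <= 0:
        problems.append("pricePerRun must be a positive number")
    return problems

print(manifest_problems(manifest))  # []
```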
main.py — your function:
```python
import httpx
from bs4 import BeautifulSoup

def run(inputs):
    url = inputs["url"]
    response = httpx.get(url, follow_redirects=True, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    # detection logic here
    return {
        "cms": detect_cms(soup),
        "analytics": detect_analytics(soup),
        "cdn": detect_cdn(response.headers),
        "frameworks": detect_frameworks(soup),
    }
```
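The `detect_*` helpers are left as stubs. As one hedged illustration of what `detect_cms` might do, here is a sketch that reads the `<meta name="generator">` tag — it takes raw HTML rather than a soup object so it stands alone, and real detection would combine many more signals:

```python
import re

# Sketch of one detect_* helper: infer the CMS from the generator meta tag.
# A production detector would also check paths, cookies, and headers.
def detect_cms(html: str) -> str:
    match = re.search(
        r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    if not match:
        return "unknown"
    generator = match.group(1).lower()
    for cms in ("wordpress", "drupal", "joomla", "shopify"):
        if cms in generator:
            return cms
    return generator

html = '<head><meta name="generator" content="WordPress 6.4"></head>'
print(detect_cms(html))  # wordpress
```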
requirements.txt — your dependencies.
Then deploy:
```bash
seek deploy
```
Your worker is live in 60–90 seconds.
Setting the right price
Pricing a worker involves four factors:
- Run cost — how much compute does it use? A fast SQL query costs less than a headless browser session.
- Uniqueness — if your worker does something nobody else does, you can charge a premium.
- Target audience — individual developers have different price sensitivity than agencies and companies.
- Comparison — what would it cost to build this in-house?
A useful framework: price so that a month of calls costs the buyer a small fraction of maintaining the equivalent solution in-house. A scraper that takes 10 hours/month to maintain at $100/hour represents $1,000/month of in-house cost. At 50,000 runs/month, $0.006/run comes to $300/month for callers — well under a third of their in-house cost — while earning you $210/month after the 70% revenue share.
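The same back-of-envelope, made explicit in code (the figures are the ones from the scraper example, and the 70% share is the platform split stated earlier):

```python
# Worked pricing example: in-house cost vs. what callers pay vs. your cut.
hours_per_month = 10
hourly_rate = 100                                    # $/hour
in_house_cost = hours_per_month * hourly_rate        # $1,000/month

runs_per_month = 50_000
price_per_run = 0.006

caller_monthly_cost = runs_per_month * price_per_run  # what callers pay in total
your_cut = caller_monthly_cost * 0.70                 # 70% revenue share

print(in_house_cost, round(caller_monthly_cost, 2), round(your_cut, 2))
# 1000 300.0 210.0
```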
Revenue expectations
Real data from the marketplace:
| Worker type | Avg price/run | Typical monthly runs | Monthly earnings (70%) |
|---|---|---|---|
| Email validator | $0.001 | 250,000 | $175 |
| LinkedIn enricher | $0.005 | 25,000 | $87.50 |
| SEO auditor | $0.012 | 8,000 | $67.20 |
| AI article generator | $0.018 | 15,000 | $189 |
| Data extractor | $0.004 | 60,000 | $168 |
These are median estimates from the platform. Top workers in high-demand categories consistently reach $500–$2,000/month with essentially no work beyond the initial deployment.
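The earnings column follows directly from price × runs × the 70% share. A one-liner makes the arithmetic easy to reuse for your own pricing scenarios:

```python
def monthly_earnings(price_per_run: float, runs: int, share: float = 0.70) -> float:
    """Creator's monthly take: run fee x volume x revenue share."""
    return round(price_per_run * runs * share, 2)

# Rows from the table above
print(monthly_earnings(0.001, 250_000))  # 175.0
print(monthly_earnings(0.005, 25_000))   # 87.5
print(monthly_earnings(0.012, 8_000))    # 67.2
```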
The flywheel
Publishing a good worker creates a compound effect:
- Early users leave positive ratings
- Ratings improve search visibility in the marketplace
- More visibility → more users → more revenue
- Revenue justifies maintaining and improving the worker
- Improvements → more ratings → repeat
A well-built worker in a popular category can reach momentum with as few as 200 monthly runs. From there, growth is largely organic.
Your first worker in a weekend
You don’t need to build something novel. Look at what you’ve already built:
- Any script you run regularly on real data is a candidate
- Any scraper you built for a specific use case
- Any transformation or validation function you’ve written more than once
- Any API wrapper you maintain for your own projects
Package it. Deploy it. Set a price. See if it earns.
The worst case: nobody runs it, you spent a Saturday, you learned how workers are deployed. The best case: it runs 100,000 times next year while you’re doing other things.