A working engineer's guide to plugging Coronium 4G/5G mobile proxies into Apify Actors. We cover the 2026 pay-per-event migration, marketplace economics, Python and JavaScript SDK configuration, LlamaIndex and LangChain integrations, and how to ship a Coronium-powered Actor on the Apify Store before the rental model is retired on October 1, 2026.
Apify is the largest marketplace of pre-built web scrapers on the internet plus the cloud infrastructure that runs them. A scraper on Apify is called an Actor: a containerized program with a declared input schema, output dataset, and a billing plan that the Apify platform meters automatically. Users either consume Actors that other developers publish on the Store, or they build and deploy their own Actors and optionally monetize them.
Apify operates on two sides simultaneously. Developers build Actors and monetize them on a marketplace that now contains several thousand public scrapers. End users, from solo operators to Fortune 500 data teams, run those Actors on demand without maintaining any infrastructure. Both sides share the same cloud runtime: Kubernetes nodes in multiple regions, a request queue service, key-value stores, dataset storage, and Apify's own residential and datacenter proxy pools.
For users:
- Run pre-built Actors for Instagram, TikTok, LinkedIn, Amazon, Google Maps, and thousands more
- No infrastructure, no captcha solving, no proxy management required
- Pay only for what you run: per-event, per-result, or per-usage
- Schedule runs, trigger via webhook, export to S3, BigQuery, Zapier, Make

For developers:
- Ship an Actor in Python or JavaScript and publish it to the Store
- Earn 80% of monthly rental fees or per-event charges
- Creator Plan: $500/month of platform usage at $1/month for 6 months
- Built-in dataset, request queue, key-value store, proxy, and logging
Apify spent most of 2024 and 2025 running the largest pricing model migration in the company's history. Rental pricing (a flat monthly fee for access to an Actor) was the original model but suffered from a classic subscription problem: users paid even when they did not run the scraper, and developers had no way to differentiate casual lookups from heavy crawls. Pay-per-event (PPE) replaces both problems with discrete billable events.
The developer declares named events in `.actor/actor.json`, then calls `Actor.charge(event_name, count)` inside the Actor code whenever one happens. Typical events: `page-opened`, `dataset-item-stored`, `api-call`, `captcha-solved`, `image-downloaded`.
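To make the accounting concrete, here is a toy, platform-free sketch of what PPE metering amounts to. The event names and prices mirror the actor.json example later in this guide; the `ChargeLedger` class is purely illustrative, a stand-in for what `Actor.charge` and the platform do for you.

```python
# Prices mirror the actorChargeEvents declared in the actor.json example.
EVENT_PRICES_USD = {
    "page-opened": 0.0005,
    "dataset-item-stored": 0.002,
}


class ChargeLedger:
    """Toy stand-in for Actor.charge(): accumulate counts, price at the end."""

    def __init__(self):
        self.counts = {}

    def charge(self, event_name: str, count: int = 1):
        # The platform rejects events not declared in actor.json; so do we.
        if event_name not in EVENT_PRICES_USD:
            raise ValueError(f"undeclared event: {event_name}")
        self.counts[event_name] = self.counts.get(event_name, 0) + count

    def total_usd(self) -> float:
        return sum(EVENT_PRICES_USD[e] * n for e, n in self.counts.items())
```

Ten page loads plus four stored items would bill 10 × $0.0005 + 4 × $0.002 = $0.013, which is exactly the granularity that flat rental pricing could never express.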
Pay-per-result is simpler: charge a fixed amount per item pushed to the default dataset. Users love the predictability; developers find it harder to price properly when results have radically different cost profiles.
Pay-per-usage is the raw platform model: users pay for the compute units, bandwidth, and storage the Actor consumes. The developer earns nothing extra. Best for internal Actors or open-source community scrapers.
Apify published migration data showing that developers who switched from pay-per-result to pay-per-event saw average revenue per Actor increase because PPE lets them price each expensive event (page loads, captcha solves, external API calls) separately instead of burying them all in one result price. As of early 2026, more than 2,000 Actors on the Store have migrated to PPE voluntarily before the rental deadline forces the rest.
An Apify Actor is a Docker container plus four metadata files that turn raw code into a monetizable product. Understanding the file layout is the single biggest lever for building a scraper that is easy to ship, easy to configure, and easy to price.
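The layout used throughout this guide looks like the sketch below (the `.actor/` paths match the CLI steps later in this piece; `src/main.py` assumes the Python template):

```text
coronium-powered-scraper/
├── .actor/
│   ├── actor.json          # metadata, run options, PPE pricing
│   └── input_schema.json   # typed input form rendered in the Console
├── src/
│   └── main.py             # Actor entry point
├── Dockerfile              # built in the cloud by `apify push`
└── README.md               # becomes the Store listing page
```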
| Actor Category | Typical Price | Pricing Model | Why That Price |
|---|---|---|---|
| Generic HTML scraper | $1.00 / 1k results | Per-result | Low compute, datacenter proxy |
| Google Maps scraper | $4.00 / 1k places | Per-result | Residential proxy required |
| LinkedIn profile scraper | $8.00 / 1k profiles | Per-event | Mobile proxy + anti-bot |
| TikTok video scraper | $6.00 / 1k videos | Per-event | 4G proxy ideal for mobile-first API |
| Instagram hashtag scraper | $5.00 / 1k posts | Per-event | Mobile proxy mandatory |
| Amazon product scraper | $3.00 / 1k items | Per-result | Residential proxy sufficient |
| SERP scraper | $2.50 / 1k SERPs | Per-event | Heavy captcha load |
| Twitter / X scraper | $10.00 / 1k tweets | Per-event | API rate limits + mobile proxy |
Apify's Creator Plan grants $500 of platform usage every month for the first six months at a total cost of $1 per month. That covers compute, bandwidth, storage, and Apify Proxy. The catch: you must publish at least one public Actor within those six months, and it must survive a short review.
Apify ships its own proxy product (Apify Proxy) with datacenter, residential, and a limited mobile pool. For most generic scraping jobs Apify Proxy is the path of least resistance. For mobile-first targets, aggressive anti-bot stacks, and anything where CGNAT trust matters, bringing your own Coronium endpoint gives measurably better success rates.
Zero configuration: set `useApifyProxy` in the Actor's proxy input and Apify routes all traffic through its managed pool.

Pros:
- Datacenter pool free on paid plans
- Residential pool at $8/GB
- Built-in session rotation and sticky IPs

Cons:
- Mobile pool is small and per-country selection is limited
- No dedicated-IP option for long sticky sessions
Plug a single Coronium endpoint into ProxyConfiguration and scale to the Actor's concurrency limit without touching Apify's proxy pricing.
- Real 4G/5G carrier IPs with full CGNAT trust
- Dedicated IP per device (no shared pool)
- On-demand IP rotation via HTTP endpoint
- Flat monthly pricing, no bandwidth overages
- Country-specific endpoints (US, UK, Europe, more)
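Because each Coronium port maps to a dedicated modem rather than a shared pool, the rotation strategy is yours to choose. A minimal sketch of the two common patterns — round-robin (what a proxy-URL list gives you in `ProxyConfiguration`) and deterministic sticky sessions — using hypothetical endpoints:

```python
from itertools import cycle
from zlib import crc32

# Hypothetical dedicated Coronium ports, one per modem
PROXY_URLS = [
    "http://user:pass@us.coronium.io:30000",
    "http://user:pass@us.coronium.io:30001",
    "http://user:pass@us.coronium.io:30002",
]

_round_robin = cycle(PROXY_URLS)


def next_proxy() -> str:
    """Round-robin: spread requests evenly across all modems."""
    return next(_round_robin)


def sticky_proxy(session_id: str) -> str:
    """Sticky: the same session id always maps to the same modem,
    so the target site sees one stable mobile IP per session."""
    return PROXY_URLS[crc32(session_id.encode()) % len(PROXY_URLS)]
```

Round-robin maximizes IP diversity per crawl; sticky sessions matter when a target binds cookies or login state to the client IP.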
| Target | Recommended Proxy | Why |
|---|---|---|
| Instagram, TikTok, Facebook mobile | Coronium 4G | Mobile-first endpoints aggressively block datacenter |
| LinkedIn, Indeed, Glassdoor | Coronium 4G | Strict fingerprinting + AS-number trust scoring |
| Google Search / SERP | Coronium 4G or Apify residential | Heavy captcha load; mobile usually wins |
| Amazon, eBay, Walmart | Apify residential | Bot defenses tolerate quality residential IPs |
| Generic news, blogs, docs | Apify datacenter | Cheapest option that still works |
| Regional price monitoring | Coronium per-country | Deterministic geo-IP, no VPN fingerprint |
| API endpoints with mTLS | Coronium dedicated IP | Whitelisted IP required |
Every Actor declares its proxy requirements in actor.json. For BYOP, you either hardcode a Coronium endpoint (useful when you ship a Coronium-powered Actor) or you expose a proxy field in input_schema.json so the end user supplies their own. Here is the canonical shape of both.
`.actor/actor.json`:

```json
{
  "actorSpecification": 1,
  "name": "coronium-powered-scraper",
  "version": "0.1",
  "buildTag": "latest",
  "title": "Coronium Mobile Proxy Scraper",
  "description": "Scrapes target URLs through Coronium 4G mobile IPs.",
  "dockerfile": "./Dockerfile",
  "input": "./input_schema.json",
  "storages": {
    "dataset": {
      "actorSpecification": 1,
      "views": {
        "default": {
          "title": "Results",
          "transformation": { "fields": ["url", "title", "scrapedAt"] }
        }
      }
    }
  },
  "minMemoryMbytes": 512,
  "maxMemoryMbytes": 4096,
  "defaultRunOptions": {
    "build": "latest",
    "timeoutSecs": 3600,
    "memoryMbytes": 2048
  },
  "meta": { "templateId": "python-start" },
  "pricingInfos": [
    {
      "pricingModel": "PAY_PER_EVENT",
      "pricingPerEvent": {
        "actorChargeEvents": {
          "page-opened": {
            "eventTitle": "Page opened",
            "eventDescription": "Charged for each page the Actor visits.",
            "eventPriceUsd": 0.0005
          },
          "dataset-item-stored": {
            "eventTitle": "Result stored",
            "eventDescription": "Charged for each item pushed to the dataset.",
            "eventPriceUsd": 0.002
          }
        }
      }
    }
  ]
}
```

`.actor/input_schema.json`:

```json
{
  "title": "Coronium Scraper Input",
  "type": "object",
  "schemaVersion": 1,
  "properties": {
    "startUrls": {
      "title": "Start URLs",
      "type": "array",
      "editor": "requestListSources",
      "description": "URLs the Actor will visit first."
    },
    "proxyConfiguration": {
      "title": "Proxy",
      "type": "object",
      "editor": "proxy",
      "description": "Choose Apify Proxy or supply Coronium URLs below.",
      "prefill": { "useApifyProxy": false }
    },
    "coroniumProxyUrls": {
      "title": "Coronium proxy URLs (BYOP)",
      "type": "array",
      "editor": "stringList",
      "description": "One or more Coronium endpoints, e.g. http://user:pass@us.coronium.io:30000",
      "default": []
    },
    "coroniumRotateUrl": {
      "title": "Coronium rotate endpoint",
      "type": "string",
      "editor": "textfield",
      "description": "Optional HTTP rotation URL provided in your Coronium dashboard.",
      "isSecret": true
    }
  },
  "required": ["startUrls"]
}
```

Always mark proxy credentials as `isSecret: true`. Apify encrypts secret input fields at rest and redacts them from run logs. Users can paste their Coronium username and password without worrying that another Actor developer sees them in a shared workspace.
The Apify SDK for Python ships a ProxyConfiguration helper that accepts a list of proxy URLs and hands out the next one each time you call new_url(). Coronium endpoints fit the interface natively. The snippet below is a working Actor that pulls Coronium URLs from input, rotates for each request, and emits PPE charges.
```python
# main.py - Apify Python SDK with Coronium BYOP
from datetime import datetime, timezone

import httpx
from apify import Actor, ProxyConfiguration
from selectolax.parser import HTMLParser


async def main() -> None:
    async with Actor:
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get("startUrls", [])
        coronium_urls = actor_input.get("coroniumProxyUrls", [])
        rotate_url = actor_input.get("coroniumRotateUrl")

        # Build a ProxyConfiguration from Coronium endpoints,
        # falling back to Apify's residential pool
        if coronium_urls:
            proxy_config = ProxyConfiguration(proxy_urls=coronium_urls)
        else:
            proxy_config = await Actor.create_proxy_configuration(
                groups=["RESIDENTIAL"],
                country_code="US",
            )

        for request in start_urls:
            url = request["url"] if isinstance(request, dict) else request
            proxy_url = await proxy_config.new_url()
            Actor.log.info(f"Fetching {url} via {proxy_url}")

            # httpx binds the proxy at client construction (httpx >= 0.26),
            # not per request, so open a short-lived client per rotation.
            async with httpx.AsyncClient(timeout=30, proxy=proxy_url) as client:
                response = await client.get(
                    url,
                    headers={"User-Agent": "Mozilla/5.0 Apify/Coronium"},
                )

            # Charge one page-opened event
            await Actor.charge("page-opened")

            if response.status_code == 200:
                tree = HTMLParser(response.text)
                title_node = tree.css_first("title")
                await Actor.push_data({
                    "url": url,
                    "title": title_node.text() if title_node else None,
                    "status": response.status_code,
                    "scrapedAt": datetime.now(timezone.utc).isoformat(),
                })
                await Actor.charge("dataset-item-stored")

            # Optional: rotate the Coronium IP before the next request
            if rotate_url:
                try:
                    async with httpx.AsyncClient(timeout=10) as rotator:
                        await rotator.get(rotate_url)
                except Exception as e:
                    Actor.log.warning(f"Rotation failed: {e}")


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```

Beyond what the snippet shows, the SDK and platform also cover:
- Round-robin URL selection with session stickiness
- Automatic retries on proxy errors (502, 504, timeouts)
- Secret redaction in logs
- Charge deduplication inside a single request
- Graceful shutdown on platform SIGTERM
For larger projects swap httpx for Crawlee-Python:
```python
from apify import Actor, ProxyConfiguration
from crawlee.playwright_crawler import PlaywrightCrawler

proxy_config = ProxyConfiguration(proxy_urls=coronium_urls)
crawler = PlaywrightCrawler(proxy_configuration=proxy_config)
```
The Apify SDK for Python requires Python 3.9 or newer. As of SDK 2.x (current in 2026) async is mandatory; the old synchronous API is deprecated. Use the official base image apify/actor-python:3.12 to avoid cold-start surprises.
The JavaScript SDK mirrors the Python API almost field-for-field. Where it wins is the Crawlee ecosystem: PlaywrightCrawler, PuppeteerCrawler, CheerioCrawler, and JSDOMCrawler all accept a ProxyConfiguration instance directly.
```javascript
// main.js - Apify JS SDK + Crawlee + Coronium BYOP
import { Actor } from 'apify';
import { PlaywrightCrawler, ProxyConfiguration } from 'crawlee';

await Actor.init();

const input = await Actor.getInput() ?? {};
const {
    startUrls = [],
    coroniumProxyUrls = [],
    coroniumRotateUrl,
    maxConcurrency = 5,
} = input;

// Build the proxy configuration from Coronium endpoints
const proxyConfiguration = coroniumProxyUrls.length > 0
    ? new ProxyConfiguration({ proxyUrls: coroniumProxyUrls })
    : await Actor.createProxyConfiguration({
        groups: ['RESIDENTIAL'],
        countryCode: 'US',
    });

const crawler = new PlaywrightCrawler({
    proxyConfiguration,
    maxConcurrency,
    launchContext: {
        launchOptions: {
            headless: true,
            args: ['--disable-blink-features=AutomationControlled'],
        },
    },
    async requestHandler({ page, request, log }) {
        log.info(`Scraping ${request.url}`);
        await page.waitForLoadState('domcontentloaded');
        const title = await page.title();
        const html = await page.content();

        await Actor.charge({ eventName: 'page-opened' });
        await Actor.pushData({
            url: request.url,
            title,
            htmlLength: html.length,
            scrapedAt: new Date().toISOString(),
        });
        await Actor.charge({ eventName: 'dataset-item-stored' });

        // Rotate the Coronium IP between requests when an endpoint is supplied
        if (coroniumRotateUrl) {
            try {
                await fetch(coroniumRotateUrl, {
                    method: 'GET',
                    signal: AbortSignal.timeout(10_000),
                });
            } catch (err) {
                log.warning(`Coronium rotate failed: ${err.message}`);
            }
        }
    },
    failedRequestHandler({ request, log }, error) {
        log.error(`${request.url} failed: ${error.message}`);
    },
});

await crawler.run(startUrls);
await Actor.exit();
```

For static HTML targets, the lighter CheerioCrawler takes the same ProxyConfiguration:

```javascript
import { Actor } from 'apify';
import { CheerioCrawler, ProxyConfiguration } from 'crawlee';

await Actor.init();

const crawler = new CheerioCrawler({
    proxyConfiguration: new ProxyConfiguration({
        proxyUrls: [
            'http://user:pass@us.coronium.io:30000',
            'http://user:pass@us.coronium.io:30001',
            'http://user:pass@us.coronium.io:30002',
        ],
    }),
    maxRequestsPerMinute: 180, // stay polite
    requestHandlerTimeoutSecs: 60,
    async requestHandler({ $, request }) {
        const title = $('title').text();
        const prices = $('.price').map((_, el) => $(el).text()).get();
        await Actor.pushData({ url: request.url, title, prices });
        await Actor.charge({ eventName: 'page-opened' });
    },
});

await crawler.run(startUrls);
await Actor.exit();
```

Use the `apify/actor-node-playwright-chrome:20` base image, and test locally with `apify run -p` (the `-p` flag purges local storage between runs).

LlamaIndex ships an official ApifyActor loader that runs an Apify Actor on demand and feeds its dataset into a VectorStoreIndex. Combined with a Coronium-powered Actor you get a fully auditable RAG pipeline: the retrieval layer sees real mobile IPs, and LlamaIndex handles chunking, embedding, and retrieval.
```python
# pip install llama-index llama-index-readers-apify apify-client
import os

from llama_index.core import Document, VectorStoreIndex
from llama_index.readers.apify import ApifyActor

# The reader is constructed with your Apify API token;
# the Actor to run is passed to load_data().
reader = ApifyActor(os.environ["APIFY_API_TOKEN"])

documents = reader.load_data(
    actor_id="your-username/coronium-powered-scraper",
    run_input={
        "startUrls": [
            {"url": "https://example.com/docs"},
            {"url": "https://example.com/blog"},
        ],
        "coroniumProxyUrls": [
            "http://user:pass@us.coronium.io:30000",
        ],
    },
    dataset_mapping_function=lambda item: Document(
        text=item.get("htmlContent") or item.get("title", ""),
        metadata={
            "url": item.get("url"),
            "scraped_at": item.get("scrapedAt"),
            "source": "coronium-apify",
        },
    ),
)

# Build a vector index over the freshly scraped content
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=5)
answer = query_engine.query("What changed in the pricing section last week?")
print(answer)
```

LangChain exposes two helpers for Apify: `ApifyWrapper`, which triggers an Actor run, and `ApifyDatasetLoader`, which pulls results from an existing dataset ID. Both produce LangChain `Document` objects suitable for chains, agents, and retrievers.
```python
# pip install langchain langchain-apify langchain-openai apify-client
from langchain.indexes import VectorstoreIndexCreator
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

loader = apify.call_actor(
    actor_id="your-username/coronium-powered-scraper",
    run_input={
        "startUrls": [{"url": "https://news.example.com/latest"}],
        "coroniumProxyUrls": [
            "http://user:pass@us.coronium.io:30000",
            "http://user:pass@us.coronium.io:30001",
        ],
    },
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("htmlContent") or item.get("title", ""),
        metadata={
            "url": item.get("url"),
            "source": "coronium-apify",
            "scraped_at": item.get("scrapedAt"),
        },
    ),
)

# Vector index
index = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(model="text-embedding-3-large"),
).from_loaders([loader])
result = index.query("Summarize today's top stories.", llm=ChatOpenAI())
print(result)
```

Already ran the Actor? Load the dataset directly:

```python
from langchain_apify import ApifyDatasetLoader
from langchain_core.documents import Document

loader = ApifyDatasetLoader(
    dataset_id="AbCdEfGhIj1234567",
    dataset_mapping_function=lambda x: Document(
        page_content=x.get("html") or "",
        metadata={"url": x.get("url")},
    ),
)
docs = loader.load()
print(f"Loaded {len(docs)} documents from Coronium-scraped dataset")
```

Two production notes:
- Set `memory_mbytes=4096` on long-running Actor calls to avoid OOM during large retrievals
- Use `build="latest"` only in dev; pin a specific build tag in production for reproducibility

The fastest path to shipping a Coronium-powered Actor on the Apify Store: use the official template, wire in the Coronium endpoint as an input, declare PPE events in actor.json, and push. Apify's build system compiles the Dockerfile in the cloud, so you do not need a local Docker daemon at all.
```bash
# 1. Install the Apify CLI
npm install -g apify-cli

# 2. Log in (opens browser to fetch token)
apify login

# 3. Create a new Actor from a template
apify create coronium-powered-scraper --template=python-crawlee-playwright

# 4. Move into the folder
cd coronium-powered-scraper

# 5. Edit .actor/actor.json to add PPE events (see earlier snippet)
# 6. Edit .actor/input_schema.json to add coroniumProxyUrls
# 7. Edit src/main.py to use ProxyConfiguration (see Python section)

# 8. Test locally with a real Coronium endpoint
apify run --purge --input='{
  "startUrls": [{"url": "https://example.com"}],
  "coroniumProxyUrls": ["http://user:pass@us.coronium.io:30000"]
}'

# 9. Push to Apify Cloud
apify push

# 10. (Optional) Publish to the Store
#     - Add screenshots, icon, category
#     - Submit for review
#     - Typical approval: 1-3 business days
```

Store listing checklist:
- PPE, pay-per-result, or pay-per-usage pricing (no rental)
- Clear README with at least one working example
- Icon (512x512 PNG) and 3+ screenshots
- Input schema with descriptions on every field
- Sample dataset output (at least 5 rows)
- Stripe Connect or Wise payout account

Nice to have:
- Video demo embedded in README
- Free-tier quota (e.g., 100 results free per user)
- Integration examples with Zapier, Make, Sheets
- Separate PPE events for expensive operations
- Performance badge ("Scraped 10M+ pages in 2025")
- Proxy comparison table in README
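Once published, users can run the Actor without any SDK at all via Apify's HTTP API. A stdlib-only sketch using the real `run-sync-get-dataset-items` endpoint, which starts a run, waits for it, and returns dataset items in one call (the Actor name is this guide's example):

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

API_BASE = "https://api.apify.com/v2"


def run_sync_dataset_url(actor_id: str, token: str) -> str:
    """Build the run-sync-get-dataset-items URL for an Actor.
    Apify uses '~' instead of '/' in Actor IDs embedded in URLs."""
    return (f"{API_BASE}/acts/{quote(actor_id.replace('/', '~'))}"
            f"/run-sync-get-dataset-items?token={token}")


def run_actor(actor_id: str, token: str, run_input: dict) -> bytes:
    """POST the Actor input and return the raw dataset JSON (network call)."""
    req = Request(
        run_sync_dataset_url(actor_id, token),
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req, timeout=600) as resp:
        return resp.read()
```

For long crawls, prefer the asynchronous run endpoints plus a webhook; `run-sync` variants hold the HTTP connection open for the whole run.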
When you bake a Coronium subscription into a Store Actor, the Actor's price must cover both Apify platform fees and your Coronium cost. A simple model:
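One way to sketch that model in plain Python: the 80% developer share matches the revenue split stated earlier in this piece, the Coronium cost matches the $129/month plan below, and the per-event platform cost is an assumption you should replace with measurements from your own runs.

```python
def monthly_margin(events_per_month: int, price_per_event_usd: float,
                   apify_fee_rate: float = 0.20,
                   coronium_cost_usd: float = 129.0,
                   platform_cost_per_event_usd: float = 0.0002) -> float:
    """Rough net monthly margin for a PPE Actor that bundles one Coronium proxy.

    apify_fee_rate: Apify keeps ~20% of PPE revenue (developer earns 80%).
    platform_cost_per_event_usd: assumed average compute cost per event.
    """
    gross = events_per_month * price_per_event_usd
    developer_revenue = gross * (1 - apify_fee_rate)
    costs = coronium_cost_usd + events_per_month * platform_cost_per_event_usd
    return developer_revenue - costs


# Example: 500k dataset-item-stored events at $0.002 each
# nets roughly $571/month under these assumptions.
print(monthly_margin(500_000, 0.002))
```

The useful output is the break-even point: with zero runs you still eat the full proxy subscription, so price your free tier and minimum event prices with that floor in mind.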
Flat-rate 4G and 5G endpoints, dedicated IPs, on-demand rotation, and per-country targeting. Drop them into any Actor's ProxyConfiguration in one line.
Select from 10+ countries with real mobile carrier IPs and flexible billing options
Example plan: AT&T carrier, Florida, $129/month with unlimited bandwidth. No commitment, cancel anytime, with discounts when you order 5+ proxy ports. Secure payment methods accepted: credit card, PayPal, Bitcoin, and more; 2 free modem replacements per 24h.
Whether you are migrating a rental Actor before October 2026 or building your first pay-per-event scraper, Coronium mobile proxies plug into the Apify SDK in one line of ProxyConfiguration. Get an endpoint, copy the snippets above, and ship before the migration deadline lands.