Connect ScraperCity to your CRM, cold email tool, AI agent, or automation platform. Pick a guide below.

Scrape leads, find emails, and validate contacts from inside Cursor. Add ScraperCity as an MCP server and use Composer to pull B2B data with natural language.

Scrape leads, validate emails, and export CSVs from your terminal with natural language. Claude Code calls the ScraperCity API and handles everything automatically.

Create HubSpot contacts from scraped Apollo or Google Maps leads. Map emails, phones, titles, and company data to HubSpot properties automatically.

Build a Custom GPT that searches the Lead Database by title, industry, and location using natural language inside ChatGPT.

Add an HTTP API enrichment column to enrich your Clay tables with B2B contact data. Find emails, validate contacts, and pull leads for every row.

Trigger a Webhooks by Zapier action to query leads and route them to Google Sheets, HubSpot, Salesforce, or 6,000+ apps.

Use the HTTP Request node with built-in pagination to pull leads into any n8n workflow and push them to your CRM or outreach tools.

Pull scraped leads and push them directly into Instantly campaigns via the V2 API. Leads start receiving sequences within minutes.

Push scraped leads into GoHighLevel as contacts with custom fields. Trigger GHL workflows for nurture sequences, pipeline stages, and team notifications.

Build a searchable lead database inside Airtable. Push scraped contacts as rows with name, email, phone, title, company, and LinkedIn URL.

Add ScraperCity as an MCP server in Copilot CLI or VS Code. Scrape leads, find emails, and commit results in one git-native workflow.

Connect ScraperCity as a local MCP connector in Perplexity. Pull live B2B data into your research sessions on Mac.

Fill your Pipedrive sales pipeline with scraped leads. Create persons and deals from Apollo contacts or Google Maps businesses.

Connect ScraperCity to Google Gemini CLI as an MCP server. Also works with Gemini Code Assist in VS Code and Firebase Studio.

Add ScraperCity as an MCP server in Windsurf. Use Cascade to scrape leads, find emails, and validate contacts from inside the editor.

Push scraped leads into Smartlead campaigns via the API. Send up to 100 leads per request with custom fields for merge tags.

Scrape leads and push them into Lemlist multichannel campaigns. Email, LinkedIn, and phone steps all personalized with scraped contact data.

Build lead scraping automations with full code access. Paginate through ScraperCity results and route leads to any of 2,000+ apps.

Export scraped leads to Google Sheets automatically. Build a lead spreadsheet that fills itself on a schedule via Zapier or n8n.

Add ScraperCity as an MCP server in the Cline VS Code extension. Scrape leads, find emails, and validate contacts from inside your editor.

Import ScraperCity leads into EmailBison campaigns through their REST API. Attach leads to sequences running on dedicated IP pools.

The full guide to connecting ScraperCity to any AI agent via MCP server, CLI, or skill file. Covers every compatible tool.

Scrape leads from Apollo or Google Maps, validate emails, and push contacts into Instantly, Smartlead, or EmailBison campaigns automatically.

Feed a list of target company domains into Clay or n8n and get back decision-maker contacts with verified emails for every account.

Give Claude Code, Gemini CLI, or Copilot access to every ScraperCity scraper via MCP. The agent chains scrapes, validation, and exports in one prompt.

Set up n8n or Zapier workflows that pull fresh leads daily and route them to Google Sheets, HubSpot, Salesforce, or Slack.
Every integration on this page is powered by the same REST API. Base URL: https://app.scrapercity.com/api/v1. All requests use Bearer token authentication. Get your API key at app.scrapercity.com/dashboard/api-docs.
Every request requires your API key as a Bearer token. Keep it in an environment variable - never hard-code it in client-side code.
curl -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
  https://app.scrapercity.com/api/v1/wallet

Every scrape is async. POST to the relevant endpoint, capture the runId from the response, then poll or use a webhook to know when results are ready.
# Start a Google Maps scrape
curl -X POST https://app.scrapercity.com/api/v1/scrape/maps \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "searchStringsArray": ["plumbers"],
    "locationQuery": "Denver, CO",
    "maxCrawledPlacesPerSearch": 200
  }'
# Response: { "runId": "abc123", "message": "Maps scrape started" }Check the status endpoint until status returns SUCCEEDED. Most scrapers complete in 1-30 minutes. Apollo takes 11-48+ hours - use webhooks for those runs.
curl https://app.scrapercity.com/api/v1/scrape/status/abc123 \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY"
# Response: { "status": "SUCCEEDED", "count": 200 }Once status is SUCCEEDED, download the CSV. The download endpoint streams the file directly - pipe it to a file or pass the URL to your automation tool.
Once status is SUCCEEDED, download the CSV. The download endpoint streams the file directly - pipe it to a file or pass the URL to your automation tool.

curl -o leads.csv https://app.scrapercity.com/api/downloads/abc123 \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY"
# Saves leads.csv with name, email, phone, title, company, LinkedIn URL

The ScraperCity Email Validator is a purpose-built email validation API for B2B cold email workflows. Submit a batch of email addresses and receive a deliverability verdict, catch-all flag, MX record check, and quality rating (high / medium / low) for each one. Validation costs $0.0036 per email and completes in 1-10 minutes.
Unlike standalone email verification tools, the Email Validator lives inside the same API you use to scrape leads - so you can chain a scrape and a validation pass in one n8n workflow or Clay table without switching providers or managing separate API keys.
curl -X POST https://app.scrapercity.com/api/v1/scrape/email-validator \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "emails": [
      "jane@acme.com",
      "bob@widgetco.io",
      "support@example.com"
    ]
  }'
# Response: { "runId": "val_xyz", "message": "Validation started for 3 emails" }After polling to SUCCEEDED, the downloaded CSV includes one row per email with email, status, quality, is_catch_all, and mx_found columns. Filter to quality = high before loading into any cold email platform.
Lead enrichment takes a partial contact record and appends the missing fields. ScraperCity covers this with four complementary endpoints that work well in sequence inside any automation tool.
| Endpoint | Input | Output | Cost | Speed |
|---|---|---|---|---|
| Email Finder | First name, last name, company domain | Verified business email | $0.05/contact | 1-10 min |
| Email Validator | Email address | Deliverability, quality, catch-all, MX | $0.0036/email | 1-10 min |
| Mobile Finder | LinkedIn URL or email address | Mobile phone number | $0.25/input | 1-5 min |
| People Finder | Name, email, phone, or address | Full contact profile, relatives, addresses | $0.02/result | 2-10 min |
curl -X POST https://app.scrapercity.com/api/v1/scrape/email-finder \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contacts": [
      { "firstName": "Jane", "lastName": "Doe", "domain": "acme.com" },
      { "firstName": "Bob", "lastName": "Smith", "domain": "widgetco.io" }
    ]
  }'
# Response: { "runId": "ef_abc", "message": "Email finder started for 2 contacts" }firstName, lastName, and domain.ScraperCity is a B2B data API with 25 endpoints under one key and one subscription. Every plan from $49/mo includes API access to all scrapers. Below is a summary of the full catalog organized by use case.
All endpoints share the same async pattern: POST to start, GET status to poll, GET download to retrieve results. The Lead Database endpoint is the only synchronous endpoint - it returns paginated results immediately (100 leads/page, 100,000 leads/day limit) and requires the $649/mo plan.
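A sketch of the synchronous call - the title, location, and page query parameters here are assumptions; check the API docs for the exact parameter names:

# Hypothetical query parameters shown for illustration
curl -s "https://app.scrapercity.com/api/v1/database/leads?title=CTO&location=Denver&page=1" \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY"
# Returns up to 100 leads as JSON immediately - no runId, no polling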
ScraperCity ships an MCP (Model Context Protocol) server that exposes every scraper as a callable tool inside any compatible AI agent. Add the config block below to your MCP client and your agent can scrape leads, find emails, validate contacts, and export CSVs using natural language - no manual API calls required.
Compatible clients: Claude Code, Cursor, Windsurf, GitHub Copilot (VS Code), Cline, Gemini CLI, Perplexity (Mac), and any MCP-compatible agent.
{
  "mcpServers": {
    "scrapercity": {
      "command": "npx",
      "args": ["-y", "--package", "scrapercity", "scrapercity-mcp"],
      "env": { "SCRAPERCITY_API_KEY": "your_api_key_here" }
    }
  }
}

Add this block to your agent's config file (e.g. ~/.claude/claude_desktop_config.json for Claude Code, or .cursor/mcp.json for Cursor). Then prompt your agent to use ScraperCity directly.
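For example, once the server is configured you might prompt: "Scrape 200 plumbers in Denver from Google Maps, validate their emails, and save the high-quality contacts to leads.csv." The agent picks the right tools and chains the calls itself.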
The same npx package also works as a standalone CLI:

npx scrapercity login # authenticate with your API key
npx scrapercity wallet # check credit balance
npx scrapercity maps -q "plumbers" -l "Denver, CO" --limit 200
npx scrapercity poll <runId> # wait for results
npx scrapercity download <runId> -o leads.csv

If requests are rejected with an authentication error, your API key is missing, expired, or malformed. Confirm the header is exactly Authorization: Bearer YOUR_API_KEY with no extra whitespace. Regenerate your key at app.scrapercity.com/dashboard/api-docs if needed.
ScraperCity blocks identical requests within 30 seconds to prevent duplicate charges. If you are retrying after a timeout, wait at least 30 seconds before resubmitting. In n8n or Zapier, add a 35-second delay node between a failed attempt and a retry.
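In a shell script, the same idea looks like this (a sketch; the payload is illustrative):

PAYLOAD='{ "searchStringsArray": ["plumbers"], "locationQuery": "Denver, CO" }'
start_scrape() {
  curl -sf -X POST https://app.scrapercity.com/api/v1/scrape/maps \
    -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
}
# On a failed attempt, wait out the 30-second duplicate window, then retry once
start_scrape || { sleep 35; start_scrape; }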
Apollo scrapes take 11-48+ hours; all other scrapers complete in 1-30 minutes. If a non-Apollo run stays in PENDING beyond 45 minutes, check your credit balance with GET /wallet or npx scrapercity wallet - runs will not start if your wallet balance is insufficient.
The Lead Database (GET /database/leads) requires the $649/mo plan specifically. It is not available on the $49/mo or $149/mo plans. Upgrade your plan at app.scrapercity.com if you need instant access to the 4.6M+ contact database.
Set the response format to "JSON" in the n8n HTTP Request node options. If you are downloading a CSV, switch the response format to "File" and pass the file binary to a Write Binary File node or Google Sheets import node. Confirm your Content-Type: application/json header is set on POST requests.
If the MCP server fails to start, make sure Node.js 18+ is installed and npx is on your PATH. Confirm the config file is valid JSON and the SCRAPERCITY_API_KEY value is your actual key, not the placeholder. Restart the editor after saving the config.
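Two quick checks from a terminal (assuming jq is installed):

node -v                                       # should print v18.x or higher
jq . ~/.claude/claude_desktop_config.json     # errors if the JSON is invalid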
Clay's HTTP API enrichment column runs synchronously but ScraperCity's scrapers are async. Use the Email Validator and Email Finder endpoints - these are the fastest (1-10 min) and return results that Clay can poll for. Set your Clay request timeout to at least 15 minutes and map the runId from the initial response to a second column that polls the status endpoint.
Apollo scrapes take 11-48+ hours. Polling the status endpoint every 5 minutes wastes requests and can trip duplicate-request protection. Configure a webhook at app.scrapercity.com/dashboard/webhooks and let ScraperCity POST to your n8n or Zapier webhook trigger when the run succeeds.
Mobile lookups cost $0.25/input. Run the Email Validator first and filter to high-quality addresses before passing emails to the Mobile Finder. This avoids spending $0.25 on a contact whose email is already invalid.
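A sketch of that chain, picking up from the high_quality.csv filter shown earlier. The /scrape/mobile-finder path and the inputs field are hypothetical - confirm the real endpoint name and payload shape in the API docs:

# Join the surviving email addresses (column 1) into a JSON array
EMAILS=$(awk -F',' 'NR>1 {print "\""$1"\""}' high_quality.csv | paste -sd, -)
# Hypothetical endpoint and field name - check the API docs
curl -X POST https://app.scrapercity.com/api/v1/scrape/mobile-finder \
  -H "Authorization: Bearer $SCRAPERCITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"inputs\": [$EMAILS]}"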
The Lead Database returns 100 leads per page with a 100,000 lead/day limit. In n8n, use a loop node that increments a page parameter and stops when the returned array length is less than 100. In Pipedream, use a standard while loop, awaiting each paginated GET.
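The same loop as a shell sketch - the page parameter and the leads array name are assumptions:

PAGE=1
while :; do
  curl -s "https://app.scrapercity.com/api/v1/database/leads?title=CTO&page=$PAGE" \
    -H "Authorization: Bearer $SCRAPERCITY_API_KEY" -o "page_$PAGE.json"
  COUNT=$(jq '.leads | length' "page_$PAGE.json")   # assumed response field
  [ "$COUNT" -lt 100 ] && break                     # short page = last page
  PAGE=$((PAGE + 1))
done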
ScraperCity blocks duplicate scrape requests within 30 seconds but does not deduplicate contacts across different runs. Before pushing to HubSpot, Pipedrive, or GoHighLevel, check for existing contacts by email using the CRM's search API to avoid creating duplicate records.
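For HubSpot, for example, a pre-flight check against the CRM v3 search endpoint looks like this (HUBSPOT_TOKEN is a private app token; a sketch, not a full upsert):

curl -s -X POST https://api.hubapi.com/crm/v3/objects/contacts/search \
  -H "Authorization: Bearer $HUBSPOT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "filterGroups": [{
      "filters": [{ "propertyName": "email", "operator": "EQ", "value": "jane@acme.com" }]
    }]
  }' | jq '.total'   # 0 means no existing contact - safe to create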
For large Google Maps or Apollo jobs, trigger scrapes overnight with a scheduled n8n or Zapier workflow. Results will be ready by morning and you avoid any perceived slowdowns during peak hours on external data sources.
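If you would rather schedule from a server than from n8n or Zapier, a cron entry does the same job (start-scrape.sh is a hypothetical wrapper around the POST call shown earlier):

# Kick off the scrape at 2:00 AM every day
0 2 * * * /usr/local/bin/start-scrape.sh >> /var/log/scrapercity.log 2>&1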
Completed run CSVs are available for redownload using the same runId. Log runIds to a Google Sheet or Airtable table so you can redownload any run without re-paying scraping credits.
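The no-dependency version of that habit is a one-line append after every POST:

# Log timestamp, runId, and scraper so the CSV can be re-pulled later
echo "$(date -u +%FT%TZ),$RUN_ID,maps,denver-plumbers" >> runs.csv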
ScraperCity is a B2B data platform with a full REST API covering 25 tools under a single authentication key. Scrape Apollo lead lists ($0.0039/lead, 11-48+ hours), extract Google Maps businesses ($0.01/place, 5-30 min), find business emails ($0.05/contact, 1-10 min), validate email deliverability ($0.0036/email, 1-10 min), look up mobile phone numbers ($0.25/input, 1-5 min), run skip traces ($0.02/result, 2-10 min), and pull ecommerce store data instantly. Every tool works on every plan starting at $49/mo.
For AI agents, ScraperCity ships an MCP server compatible with Claude Code, Cursor, Windsurf, Gemini CLI, GitHub Copilot, Cline, Perplexity, and any MCP client. For automation, connect via Clay, n8n, Zapier, or Pipedream. For CRMs, push leads directly to HubSpot, GoHighLevel, Pipedrive, or Airtable. For cold email, load contacts into Instantly, Smartlead, Lemlist, or EmailBison.
The async pattern is consistent across all 25 endpoints: POST to start a run and receive a runId, GET the status endpoint to poll for completion, then GET the download endpoint to retrieve a CSV. The Lead Database is the only synchronous endpoint - it returns paginated JSON immediately on every GET. This consistent design means any integration you build for one scraper works for all of them.