
Integration Guide

ScraperCity + n8n

Call any ScraperCity API endpoint from n8n using the HTTP Request node. Pull leads from the Lead Database, run Apollo scrapes, validate emails, or find contacts - then route results anywhere in a single workflow.

Why Connect ScraperCity to n8n

n8n is a workflow automation tool that gives technical teams the flexibility of code with the speed of no-code. Its HTTP Request node can call any REST API - including every ScraperCity endpoint - without a dedicated integration. That means you get access to 25 B2B data tools inside your existing automation stack from day one.

The combination is particularly effective for lead generation teams. ScraperCity handles the data acquisition - pulling B2B contacts, local business listings, enriched company data, or validated email lists. n8n handles the orchestration: scheduling the pull, paginating through large result sets, transforming fields, deduplicating records, and routing each lead to the right downstream tool. The whole pipeline runs unattended on whatever schedule you set.

Common destinations after the ScraperCity pull include Google Sheets (for review and export), HubSpot or Salesforce (for CRM entry), Instantly or Smartlead (for cold email sequences), Slack (for real-time team notifications), and EmailBison or similar tools for outreach automation. Any app with an n8n node or a REST API can receive the results.

Workflow Overview

Schedule Trigger → HTTP Request → Split Out Items → Google Sheets / CRM / Email Tool

The example below uses the Lead Database endpoint. The same HTTP Request approach works for any ScraperCity endpoint. Split Out Items breaks the response array into individual records, then you route each one to your destination.

Daily Lead Pipeline

Schedule Trigger fires each morning. HTTP Request pulls fresh CTOs or VPs matching your ICP. Split Out breaks the response apart, and each lead is pushed to your CRM or cold email tool automatically.

Email Validation Loop

Pull a list from the Lead Database, split records, then call the Email Validator on each address. Route only deliverable contacts to your outreach sequence.

Local Business Prospecting

Trigger a Google Maps scrape by city and category. Split results, filter by review count or rating with an If node, and post matches to a Slack channel or Google Sheet.

Setup

1. Get your API key

Log in to your ScraperCity account and go to app.scrapercity.com/dashboard/api-docs. Copy your API key from the top of that page. Keep it safe - you will add it to n8n as a credential in the next step.

All ScraperCity plans ($49, $149, and $649/mo) include API access. The Lead Database endpoint used in this guide requires the $649/mo plan. Every other endpoint works on all plans.

2. Create a credential for your API key

In n8n, go to Credentials and create a new one:

Credential Type: Header Auth
Name: Authorization
Value: Bearer YOUR_API_KEY

Using Header Auth instead of the predefined Bearer type ensures the authorization header is included on every paginated request automatically. Replace YOUR_API_KEY with the key you copied in Step 1.

3. Add an HTTP Request node

Method: GET
URL: https://app.scrapercity.com/api/v1/database/leads
Authentication: Generic Credential Type > Header Auth

Select the credential you created in Step 2 from the dropdown. The Authorization header is injected on every request including paginated follow-up calls.
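
For reference, this is the equivalent raw request the node sends. A minimal TypeScript sketch, assuming only the endpoint and Bearer header described above (SCRAPERCITY_API_KEY is a hypothetical environment variable name):

// Minimal sketch of the request the HTTP Request node makes.
// SCRAPERCITY_API_KEY is a hypothetical environment variable name.
const apiKey = process.env.SCRAPERCITY_API_KEY;

const res = await fetch("https://app.scrapercity.com/api/v1/database/leads", {
  headers: { Authorization: `Bearer ${apiKey}` },
});
const body = await res.json();
console.log(body);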

4. Add query parameters

Under “Send Query Parameters,” add your search filters:

title = CTO
country = United States
hasEmail = true
limit = 100
page = 1

You can use any combination: title, seniority, department, industry, country, state, city, companyName, companyDomain, companySize, minEmployees, maxEmployees, hasEmail, hasPhone. The Lead Database contains 4.6M+ B2B contacts and returns up to 100 records per page, with a limit of 100,000 leads per day across your account.
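
If you want to see what the node sends on the wire, the filters above serialize into a standard query string. A sketch using only the parameters listed in this step:

// The same filters as URL query parameters.
const params = new URLSearchParams({
  title: "CTO",
  country: "United States",
  hasEmail: "true",
  limit: "100",
  page: "1",
});
const url = `https://app.scrapercity.com/api/v1/database/leads?${params}`;
// => ...?title=CTO&country=United+States&hasEmail=true&limit=100&page=1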

5. Enable pagination

Click “Add Option” and select “Pagination.” Configure it:

Pagination Mode: Update a Parameter in Each Request
Type: Query
Name: page
Value: {{ $pageCount + 1 }}

Set the stop condition:

Max Pages: {{ $response.body.pagination.totalPages }}

$pageCount starts at 0. Adding 1 means the first request sends page=1, the second sends page=2, and so on until totalPages is reached. n8n stops automatically - no manual loop or If node required.
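
Under the hood this is an ordinary fetch-increment-repeat loop. A sketch of what n8n does for you, assuming the { data, pagination: { totalPages } } response shape used in this guide:

// Fetch every page until pagination.totalPages is reached.
async function fetchAllLeads(apiKey: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let page = 1;
  let totalPages = 1;
  do {
    const res = await fetch(
      `https://app.scrapercity.com/api/v1/database/leads?limit=100&page=${page}`,
      { headers: { Authorization: `Bearer ${apiKey}` } },
    );
    const body = await res.json();
    all.push(...body.data);                  // accumulate this page's leads
    totalPages = body.pagination.totalPages; // stop condition from the API
    page += 1;
  } while (page <= totalPages);
  return all;
}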

6. Split the response array

After the HTTP Request node, add a “Split Out” node:

Field To Split Out: data

Each lead becomes a separate item. Downstream nodes receive fields like email, first_name, title, company_name, linkedin_url. You can set “Include Other Fields” to “All Other Fields” if you need pagination metadata alongside each record.
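
Conceptually, Split Out is just a map over the array. A sketch with invented values:

// One input item containing an array becomes one output item per element.
const response = { data: [{ email: "a@example.com" }, { email: "b@example.com" }] };
const items = response.data.map((lead) => ({ json: lead })); // n8n's item shape
// items.length === 2; downstream nodes reference fields as {{ $json.email }}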

7. Route leads to your destination

Connect the Split Out node to wherever you want the leads. Common destinations:

  • Google Sheets - append each lead as a new row for manual review or export
  • HubSpot / Salesforce - create or update contacts directly in your CRM
  • Instantly / Smartlead / EmailBison - add to a cold email campaign sequence
  • Slack - post a summary message when high-value leads are found
  • Airtable / Notion - build a research database with enriched records
  • Webhook - forward to any external system or internal API

8. Switch to a Schedule Trigger

Replace the Manual Trigger with a Schedule Trigger node to run the workflow automatically:

Trigger Interval: Days
Days Between Triggers: 1
Trigger at Hour: 8am

You can also use a custom Cron expression for more precise timing. After configuring the Schedule Trigger, publish (activate) the workflow - schedules only take effect once the workflow is active. If the trigger fires at the wrong time, check the timezone setting in your workflow settings or instance configuration.
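
For example, a standard five-field Cron expression that fires at 8:00 AM on weekdays only (assuming your instance timezone is set correctly):

0 8 * * 1-5   # minute hour day-of-month month day-of-week (Mon-Fri)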

Available Fields per Lead

After the Split Out node, reference fields with expressions like {{ $json.email }}. All fields below are available on Lead Database results:

first_name, last_name, full_name, email, personal_email, mobile_number, title, headline, seniority, department, company_name, company_domain, company_industry, company_size, company_linkedin, linkedin_url, city, state, country
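
A hypothetical record, to make the shape concrete (field names from the list above; all values invented):

{
  "first_name": "Jane",
  "last_name": "Doe",
  "full_name": "Jane Doe",
  "email": "jane.doe@example.com",
  "title": "CTO",
  "seniority": "c_suite",
  "company_name": "Example Inc",
  "company_domain": "example.com",
  "linkedin_url": "https://www.linkedin.com/in/janedoe",
  "city": "Austin",
  "state": "Texas",
  "country": "United States"
}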

Other scrapers return different field sets. Apollo results include profile data, job history, and company enrichment. Google Maps results include business name, address, phone, website, rating, and review count. Email Validator results include deliverable status, catch-all flag, and MX record details.

Advanced: Email Validation Inside n8n

Once you have leads flowing through n8n, you can validate each email address in the same workflow using the ScraperCity Email Validator. This is useful when working with scraped data or older lists where deliverability may vary.

After your Split Out node, add a second HTTP Request node configured as follows:

Method: POST
URL: https://app.scrapercity.com/api/v1/email-validator
Authentication: Generic Credential Type > Header Auth (same credential)

Under Send Body, add the email field as JSON:

{
  "email": "{{ $json.email }}"
}

Email validation runs asynchronously. The response includes a runId. Poll the Status endpoint until the job is complete, then call the Download endpoint to retrieve results. Alternatively, configure your ScraperCity webhook at app.scrapercity.com/dashboard/webhooks to POST results back to an n8n Webhook Trigger node when each validation finishes - this avoids polling entirely.
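
If you do poll, this is the wait-and-retry pattern in sketch form. The status path and response fields below are assumptions for illustration, not the documented API - check the API docs for the exact shapes:

// Poll an assumed status endpoint until the run finishes.
// `/api/v1/status/${runId}` and the `status` field are hypothetical.
async function waitForRun(apiKey: string, runId: string): Promise<unknown> {
  while (true) {
    const res = await fetch(`https://app.scrapercity.com/api/v1/status/${runId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const body = await res.json();
    if (body.status === "complete") return body;
    await new Promise((r) => setTimeout(r, 60_000)); // wait 60s between polls
  }
}

Inside n8n itself, model this with a Wait node looping back through an If node rather than a Code loop; the webhook approach above avoids polling entirely.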

Email validation costs $0.0036 per address and typically completes within 1-10 minutes. Use an If node downstream to filter on deliverable status before passing leads to your outreach tool.

Other ScraperCity Endpoints in n8n

The HTTP Request node + Header Auth credential you set up works for all 25 ScraperCity tools. Here are common use cases beyond the Lead Database:

Apollo Scraper

$0.0039/lead
POST /api/v1/apollo

Pull B2B contacts by job title, industry, and location. Results deliver in 11-48+ hours. Use a Webhook Trigger to receive notification when the scrape is ready, then call the Download endpoint to fetch the CSV.

Google Maps

$0.01/place
POST /api/v1/google-maps

Scrape local business listings with phones, emails, addresses, ratings, and review counts. Results in 5-30 minutes. Good for local lead gen or competitive research workflows.

Email Finder

$0.05/contact
POST /api/v1/email-finder

Find a verified business email from a name and company domain. Use after an Apollo scrape or LinkedIn export to fill gaps in your contact data.

Mobile Finder

$0.25/input
POST /api/v1/mobile-finder

Retrieve mobile phone numbers from a LinkedIn URL or email address. Add after Lead Database pulls to enrich records with direct dial numbers.

Store Leads

$0.0039/lead
POST /api/v1/store-leads

Find Shopify and WooCommerce stores by category, country, or revenue range. Results are instant. Good for e-commerce agency prospecting workflows.

Email Validator

$0.0036/email
POST /api/v1/email-validator

Verify deliverability, check catch-all status, and validate MX records. Run on any email list before pushing to your outreach tool to keep bounce rates low.

All endpoints use the same base URL (https://app.scrapercity.com/api/v1) and the same Bearer token credential. Async scrapers return a runId you can poll with the Status endpoint or receive via webhook when complete.

Troubleshooting Common Issues

401 Unauthorized

Cause: The API key is missing, malformed, or has been rotated.

Fix: Open the Header Auth credential in n8n and confirm the Value field is exactly Bearer YOUR_API_KEY with a space between Bearer and the key. Regenerate your key at app.scrapercity.com/dashboard/api-docs if needed.

403 Forbidden

Cause: Your plan does not include the endpoint you are calling, or the credential is using the wrong header name.

Fix: Remember that the Lead Database endpoint requires the $649/mo plan - confirm your plan includes it. For other endpoints, verify the credential Name field is Authorization (capital A) and the Value starts with Bearer. Check that you selected Generic Credential Type > Header Auth, not a different auth option.

Pagination only returns the first page

Cause: A hardcoded page query parameter in the node's main settings conflicts with the pagination configuration under Options.

Fix: Remove the hardcoded page = 1 entry from your query parameters once pagination is enabled. The pagination config manages the page value automatically using $pageCount + 1. Having both set causes the hardcoded value to override the paginator.

Schedule Trigger not firing at the right time

Cause: The workflow timezone does not match your local timezone.

Fix: On n8n Cloud, set the timezone in the Admin dashboard. On self-hosted instances, set the GENERIC_TIMEZONE environment variable (for example: GENERIC_TIMEZONE=America/New_York). You can also override per-workflow: open the workflow canvas, click the three-dot menu, select Settings, and change the Timezone field.

Schedule Trigger saved but not running

Cause: The workflow is saved but not published (active). Schedule changes only take effect after you activate the workflow.

Fix: Toggle the workflow to Active using the switch in the top-right corner of the canvas. If you edited the trigger interval after activation, deactivate and re-activate the workflow to apply the new schedule.

Split Out node produces no output or wrong structure

Cause: The response array is nested differently than expected, or pagination flattened items into a single object.

Fix: Run the HTTP Request node manually first (without pagination) and inspect the Output panel. Confirm the array is at $json.data. If the API returns the array at a different key, update Field To Split Out to match. Enable "Include Other Fields: All Other Fields" if downstream nodes need pagination metadata.

Duplicate protection error (30-second block)

Cause: You re-ran an identical request within 30 seconds.

Fix: ScraperCity blocks identical API requests submitted within 30 seconds of each other to prevent accidental duplicate charges. Wait 30 seconds before re-running, or change at least one query parameter to make the request unique.

Performance Tips

Use limit=100 for every paginated request

The Lead Database returns up to 100 records per page. Always set limit=100 in your query parameters to minimize the number of API calls and reduce total execution time. Smaller page sizes require more round trips for the same result set.

Filter aggressively before you paginate

Use the available filter parameters - seniority, department, companySize, hasEmail, hasPhone, country - to narrow results before pagination begins. A tighter filter means fewer pages, fewer API calls, and a faster workflow run. You can always broaden filters in a separate workflow variation.

Use webhooks instead of polling for async scrapers

Apollo scrapes deliver in 11-48+ hours. Do not poll the Status endpoint in a tight loop. Instead, use the ScraperCity webhook feature to POST the result back to an n8n Webhook Trigger node when the scrape is complete. This avoids unnecessary API calls and keeps your workflow execution clean.

Use the If node to filter before writing to your CRM

Add an If node after Split Out to check {{ $json.email !== "" }} or {{ $json.hasEmail === true }} before sending records downstream. Writing only qualified leads to your CRM reduces noise, prevents blank-row entries, and keeps your outreach lists clean.

Store your API key in n8n Credentials, not in node parameters

Always use the Header Auth credential rather than pasting your API key directly into a node URL or parameter field. Credentials are encrypted at rest in n8n and are not exposed in execution logs, making them safe for shared instances and team environments.

Test with Manual Trigger before activating the schedule

Build and test your workflow using a Manual Trigger node first. Confirm pagination is working, the Split Out node produces individual records, and your destination node is receiving the correct fields. Then swap to a Schedule Trigger and activate. This avoids hitting your daily lead limit during debugging.
