
Integration Guide

ScraperCity + Google Sheets

Build a lead spreadsheet that fills itself. Scrape contacts from Apollo ($0.0039/lead), Google Maps ($0.01/place), or the Lead Database and push them into Google Sheets as new rows - with name, email, phone, title, company, and LinkedIn URL for every contact.

No-code via Zapier · Full control with n8n · Direct API + Python · Power Automate ready

What You Can Build

Google Sheets is more than a spreadsheet - it is a real-time collaboration layer for your entire sales team. When you pipe ScraperCity data directly into Sheets, you get a living lead database that your reps can open in a browser with no software install, filter with built-in formulas, and share with a link. Here are the most common pipelines teams build with this integration.

Daily cold outreach list

Schedule a Zapier Zap or n8n workflow to run every morning. It queries the ScraperCity Apollo scraper for 100 contacts matching your ICP - title, industry, employee range - and appends them as new rows. Your SDRs start each day with a fresh, filtered list ready to work.

Local business prospecting sheet

Run the Google Maps scraper for a city and keyword combination. Get business name, phone, website, rating, and review count for every matching listing at $0.01/place. Feed it into Sheets for your local sales reps or agency clients.

Lightweight Sheets CRM

Add a Status column (New / Contacted / Replied / Closed) and a Notes column next to your ScraperCity lead data. ScraperCity fills the contact columns automatically; your team updates status and notes manually as they work each lead. No CRM subscription required.

Multi-source enrichment sheet

Start with company domains from one tab. Use the Email Finder scraper ($0.05/contact) to enrich each row with a verified business email, then the Mobile Finder ($0.25/input) for a direct dial. All results land in the same sheet so enrichment is visible alongside the source record.

Competitor customer list

Run the BuiltWith scraper ($4.99/search) to pull every website using a competitor technology. Send those domains into Sheets, then chain Email Finder or Website Finder scrapes to enrich each domain with contact details.

E-commerce store pipeline

The Store Leads scraper finds Shopify and WooCommerce stores at $0.0039/lead. Route results into a dedicated Sheets tab sorted by estimated monthly revenue. Great for agencies pitching e-commerce services.

Three Ways to Connect

Choose the method that fits your technical comfort and how often you need the sheet to update.

1. CSV download (manual)

Run any ScraperCity scrape, download the CSV from your dashboard, and open it in Google Sheets. Simplest approach - no automation needed. Works for one-time list builds.

2. Zapier (no code)

Create a Zap with three steps: a Schedule by Zapier trigger, a Webhooks by Zapier Custom Request action that calls the ScraperCity API, and a Google Sheets Create Spreadsheet Row action. Map email, name, phone, title, and company to sheet columns. Runs on your schedule.

Best for: non-technical users who want a daily lead sheet without writing code.

3. n8n or Pipedream (full control)

Build a workflow that queries ScraperCity with pagination, deduplicates against existing rows, and appends only new leads. n8n and Pipedream both have native Google Sheets nodes.

Best for: teams that want deduplication, conditional logic, or multi-source lead sheets.

How to Automate Google Sheets with Zapier (Step by Step)

This walkthrough uses the Zapier no-code path. You will end up with a Zap that runs on a schedule, calls the ScraperCity API, and writes every returned lead as a new row in your Google Sheet automatically.

  1. Prepare your Google Sheet

    Create a new Google Sheet. In row 1, add column headers that match the ScraperCity fields you want to capture: Name, Email, Phone, Title, Company, Industry, LinkedIn, Source, Date Added.

    Zapier requires column headers in row 1 to map API response fields to the correct sheet columns. Without headers, the Create Row action will not know where to place each value.

  2. Get your ScraperCity API key

    Log in to ScraperCity and navigate to app.scrapercity.com/dashboard/api-docs. Copy your API key. You will paste it into the Zapier Authorization header as a Bearer token in the format Bearer YOUR_API_KEY.

  3. Create a new Zap with a Schedule trigger

    In Zapier, click Create Zap. Choose Schedule by Zapier as the trigger app. Set the interval - daily at 7am is a common choice for a morning lead sheet. This trigger fires the workflow on your chosen cadence.

  4. Add a Webhooks by Zapier action to call ScraperCity

    Add a Webhooks by Zapier action. Choose the Custom Request event. Set Method to POST and URL to:

    https://app.scrapercity.com/api/v1/apollo

    Set Headers to:

    Authorization: Bearer YOUR_API_KEY
    Content-Type: application/json

    Set the Data (JSON body) to your search parameters. Example for VP of Sales contacts in SaaS:

    {
      "person_titles": ["VP of Sales"],
      "organization_industry_tag_ids": ["Computer Software"],
      "per_page": 100
    }

    Apollo scrapes are asynchronous and take 11-48+ hours to complete. For a daily Zap, you can trigger the scrape one day and download results the next, or use n8n with a polling loop for a fully automated pipeline.

  5. Add a Google Sheets - Create Spreadsheet Row action

    Add a Google Sheets action and choose Create Spreadsheet Row. Connect your Google account and select the spreadsheet and worksheet you prepared in step 1. Zapier will load your column headers automatically.

    Map the ScraperCity response fields to your columns:

    Sheet Column    Map from API response
    Name            full_name
    Email           email
    Phone           mobile_number
    Title           title
    Company         company_name
    Industry        company_industry
    LinkedIn        linkedin_url
    Source          "Apollo" (static text)
    Date Added      Use Formatter > Date > Now

  6. Test and activate

    Click Test step in Zapier to verify a row is created in your sheet with the correct data. If the test succeeds, turn the Zap on. From this point your sheet fills automatically on every scheduled run.

Google Sheets Automation Script with n8n (Full Control)

n8n is the best choice when you need pagination, deduplication, or conditional routing. Its native Google Sheets node handles append operations directly - no manual OAuth setup required once you connect your account. Here is the workflow pattern most teams use.

Workflow: Schedule - Scrape - Deduplicate - Append

  1. Cron node - triggers daily at a set time.
  2. HTTP Request node - POST to https://app.scrapercity.com/api/v1/apollo with your Bearer token and search filters. Save the returned runId.
  3. Wait node - pause for 12-13 hours (a buffer over Apollo's 11-hour minimum delivery time). For tighter control, use a polling loop with an IF node checking the Status endpoint.
  4. HTTP Request node - GET https://app.scrapercity.com/api/v1/status/{{runId}} to confirm the scrape is complete.
  5. HTTP Request node - GET the Download endpoint to fetch leads as JSON.
  6. Google Sheets node (Read) - pull existing emails from column B of your sheet into memory.
  7. IF / Filter node - compare each new lead's email against the existing list. Drop duplicates.
  8. Google Sheets node (Append Row) - write only net-new leads to the sheet. Set Operation to Append so rows are added below existing data rather than overwriting it.
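The deduplication in steps 6-7 is the part teams most often get wrong, so here is the same logic as a small, self-contained Python sketch you can reason about before wiring it into an n8n Function or Filter node. The `email` field name matches the Apollo response used throughout this guide; everything else is illustrative.

```python
def filter_new_leads(scraped_leads, existing_emails):
    """Return only leads whose email is not already in the sheet.

    Emails are compared case-insensitively, since sheet data is
    often entered with mixed casing.
    """
    seen = {e.strip().lower() for e in existing_emails if e}
    new_leads = []
    for lead in scraped_leads:
        email = (lead.get("email") or "").strip().lower()
        if email and email not in seen:
            seen.add(email)  # also dedupe within the scraped batch itself
            new_leads.append(lead)
    return new_leads

existing = ["jane@acme.com", "bob@initech.com"]  # column B of the sheet
batch = [
    {"full_name": "Jane Smith", "email": "Jane@Acme.com"},  # already in sheet
    {"full_name": "New Lead", "email": "new@startup.io"},
    {"full_name": "New Lead", "email": "new@startup.io"},   # duplicate in batch
    {"full_name": "No Email", "email": ""},                 # dropped: no email
]
print([l["email"] for l in filter_new_leads(batch, existing)])  # ['new@startup.io']
```

Leads without an email are dropped here; if you want to keep them, route them to a separate review tab instead.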

n8n: Status polling loop pattern

For Apollo scrapes (11-48+ hours), use this loop instead of a fixed Wait node to avoid timeouts:

// n8n Function node - poll until complete
const runId = $node["HTTP Request"].json.runId;
const statusUrl = `https://app.scrapercity.com/api/v1/status/${runId}`;

// Return runId to the next HTTP Request node
return [{ json: { runId, statusUrl } }];

// Connect: HTTP Status node -> IF node
// IF: {{ $json.status === "complete" }}
//   true  -> proceed to the Download step
//   false -> loop back to Wait + HTTP Status

Google Sheets API Tutorial: Python + ScraperCity

If you prefer code, you can connect the ScraperCity API directly to the Google Sheets API using Python. The Google Sheets API is a RESTful interface that lets you read and write spreadsheet data. The spreadsheets.values.append endpoint appends values to the next empty row of an existing table - no row number calculation needed.

Authentication for the Google Sheets API uses OAuth 2.0 with a service account JSON key. For ScraperCity, authentication is a simple Bearer token header. The script below assumes you have already installed google-auth and google-api-python-client.

import requests
import time
from google.oauth2 import service_account
from googleapiclient.discovery import build

# --- Config ---
SCRAPERCITY_API_KEY = "YOUR_SCRAPERCITY_API_KEY"
SPREADSHEET_ID = "YOUR_GOOGLE_SHEET_ID"  # from the sheet URL
SHEET_NAME = "Leads"
SERVICE_ACCOUNT_FILE = "service_account.json"

SCRAPERCITY_BASE = "https://app.scrapercity.com/api/v1"
SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]

# --- Google Sheets client ---
creds = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES
)
sheets = build("sheets", "v4", credentials=creds)

# --- Step 1: Trigger a ScraperCity Apollo scrape ---
headers = {
    "Authorization": f"Bearer {SCRAPERCITY_API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "person_titles": ["VP of Marketing"],
    "organization_industry_tag_ids": ["Computer Software"],
    "per_page": 100,
}
r = requests.post(f"{SCRAPERCITY_BASE}/apollo", json=payload, headers=headers)
run_id = r.json()["runId"]
print(f"Scrape started. runId: {run_id}")

# --- Step 2: Poll until complete (Apollo: 11-48+ hours) ---
while True:
    status_r = requests.get(f"{SCRAPERCITY_BASE}/status/{run_id}", headers=headers)
    status = status_r.json().get("status")
    print(f"Status: {status}")
    if status == "complete":
        break
    time.sleep(1800)  # poll every 30 minutes

# --- Step 3: Download leads ---
dl = requests.get(f"{SCRAPERCITY_BASE}/download/{run_id}", headers=headers)
leads = dl.json().get("data", [])
print(f"Leads returned: {len(leads)}")

# --- Step 4: Append to Google Sheet ---
rows = []
for lead in leads:
    rows.append([
        lead.get("full_name", ""),
        lead.get("email", ""),
        lead.get("mobile_number", ""),
        lead.get("title", ""),
        lead.get("company_name", ""),
        lead.get("company_industry", ""),
        lead.get("linkedin_url", ""),
        "Apollo",
    ])

body = {"values": rows}
sheets.spreadsheets().values().append(
    spreadsheetId=SPREADSHEET_ID,
    range=f"{SHEET_NAME}!A:H",
    valueInputOption="USER_ENTERED",
    insertDataOption="INSERT_ROWS",
    body=body,
).execute()

print(f"Appended {len(rows)} rows to Google Sheets.")

The insertDataOption=INSERT_ROWS parameter tells the Sheets API to insert new rows rather than overwrite existing data. The valueInputOption=USER_ENTERED means values are parsed the same way as if typed manually - dates format correctly and numbers stay as numbers.

Power Automate + Google Sheets + ScraperCity

Microsoft Power Automate supports Google Sheets via a native connector, making it a solid choice for teams already using Microsoft 365. The flow structure mirrors the Zapier approach but uses Power Automate actions.

  1. Recurrence trigger - set the interval (daily, hourly, etc.).
  2. HTTP action - Method: POST. URI: https://app.scrapercity.com/api/v1/apollo. Headers: Authorization: Bearer YOUR_API_KEY. Body: your JSON search filters.
  3. Parse JSON action - use the ScraperCity response schema to extract fields like full_name, email, and company_name.
  4. Apply to each loop - iterate over each lead in the parsed response.
  5. Google Sheets - Add a row action - map the parsed fields to your sheet columns. Power Automate will show your column headers as named fields once you connect your Google account and select the spreadsheet.

Note that Apollo scrapes take 11-48+ hours. Build a two-flow pattern: Flow 1 triggers the scrape and stores the runId. Flow 2 runs on a delay, polls the Status endpoint, and writes results to Sheets once the status is complete.

Suggested Sheet Structure

Create a Google Sheet with these columns. Each ScraperCity lead maps directly to a row:

Column      ScraperCity Field        Example
Name        full_name                Jane Smith
Email       email                    jane@acmecorp.com
Phone       mobile_number            +15551234567
Title       title                    VP of Sales
Company     company_name             Acme Corp
Industry    company_industry         Computer Software
LinkedIn    linkedin_url             linkedin.com/in/janesmith
Website     company_website          acmecorp.com
Employees   company_employee_count   50-200
Source      (your label)             Apollo / Maps / Lead DB
Date Added  (auto)                   2026-03-15
Status      (manual)                 New / Contacted / Replied
Notes       (manual)                 Left voicemail

Keep the Status and Notes columns to the right of every auto-populated column. That way your automation always appends into its own range and never collides with manual entries.

Building a Multi-Source Lead Sheet

One of the most powerful patterns is combining multiple ScraperCity scrapers into a single Google Sheet with a Source column that tells you where each lead came from. This gives your team one master list regardless of whether the lead originated from an Apollo search, a Google Maps query, a Yelp scrape, or a Lead Database query.

The easiest way to set this up is with separate tabs per source and a master tab that stacks the source tabs using an array literal inside =QUERY() (=IMPORTRANGE() is only needed when the sources live in separate spreadsheet files). Alternatively, route all scrapers to the same tab in your automation and use the Source field to distinguish them.

Example: Apollo + Google Maps combined sheet

Tabs: Apollo Contacts | Maps Businesses | Master

  • Apollo tab: full_name, title, email, company_name, linkedin_url - populated by daily Zap
  • Maps tab: place_name, address, phone, website, rating, review_count - populated by weekly Zap
  • Master tab: =QUERY({'Apollo Contacts'!A:I; 'Maps Businesses'!A:I}, "select * where Col1 is not null") - combined view for filtering and outreach

Troubleshooting Common Issues

The majority of issues with Google Sheets automation scripts fall into three categories: authentication errors, timing problems with async scrapes, and sheet configuration mistakes. Here is how to resolve the most common ones.

Problem: Zapier or n8n returns a 401 Unauthorized error from ScraperCity

Fix: Your Authorization header is missing or malformed. It must be exactly "Authorization: Bearer YOUR_API_KEY" with no extra spaces or quotes around the token. Copy your key fresh from app.scrapercity.com/dashboard/api-docs and paste it directly.

Problem: Google Sheets action fails with "Unable to find a spreadsheet"

Fix: Re-authenticate your Google account in Zapier or n8n. Also confirm the email address connected to your automation has edit access to the target spreadsheet - view-only access is not enough for the Create Row action.

Problem: Rows appear in the wrong columns

Fix: Zapier and n8n map fields to columns by header name. Make sure row 1 of your sheet contains the exact column names you are mapping in the automation. A missing header or a typo in the header will cause values to land in the wrong place or be dropped.

Problem: Apollo scrape shows "pending" when I try to download results

Fix: Apollo scrapes take 11-48+ hours to complete. Do not attempt to download results immediately after triggering. Always poll the Status endpoint first. In Zapier, use a multi-step Zap with a Delay action or split the trigger and download into two separate Zaps running on offset schedules.

Problem: Duplicate leads appearing in my sheet

Fix: ScraperCity blocks identical requests within 30 seconds, but different search parameters can return overlapping contacts. Add a deduplication step in n8n (using a Filter node checking against existing emails) or use a Google Sheets COUNTIF formula in a helper column to flag duplicates: =IF(COUNTIF($B$2:B2, B2)>1, "Duplicate", "New").

Problem: My automation hits the Lead Database but gets no results

Fix: The Lead Database endpoint (GET /database/leads) requires the $649/mo plan. If you are on the $49 or $149 plan, use the Apollo scraper instead. Also check that your query filters are not too narrow - start with a single filter and expand once you confirm results are returning.

Problem: Python script throws a Google Sheets API quota error

Fix: The Google Sheets API has a limit of 300 write requests per minute per project. If you are appending large batches, use a single batch append with all rows in one API call rather than one call per row. The values.append endpoint accepts an array of row arrays in a single request, which is far more efficient.
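As a concrete sketch of the batched approach: build a single request body containing every row, then make one append call. The `build_append_body` helper below is illustrative, not part of any library; the commented-out append call reuses the `sheets` client and constants from the Python tutorial above.

```python
def build_append_body(leads, columns):
    """Convert a list of lead dicts into one Sheets API request body.

    One body = one values.append call, regardless of row count,
    which keeps you well under the per-minute write quota.
    Missing fields become empty cells.
    """
    return {"values": [[lead.get(col, "") for col in columns] for lead in leads]}

COLUMNS = ["full_name", "email", "company_name"]
leads = [{"full_name": f"Lead {i}", "email": f"lead{i}@example.com"} for i in range(500)]
body = build_append_body(leads, COLUMNS)

# 500 rows, still exactly one API call (client from the tutorial above):
# sheets.spreadsheets().values().append(
#     spreadsheetId=SPREADSHEET_ID, range="Leads!A:C",
#     valueInputOption="USER_ENTERED", insertDataOption="INSERT_ROWS",
#     body=body,
# ).execute()
print(len(body["values"]))  # 500
```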

Performance Tips

Once your automation is running, these practices will keep it fast, cheap, and reliable at scale.

Batch your Sheets writes

Always append all leads in a single API call using an array of row arrays. Calling the Sheets API once per lead is slow and burns quota. One call with 500 rows is better than 500 calls with 1 row each.

Use 100 leads per page

The Lead Database returns a maximum of 100 leads per page. Set per_page to 100 and paginate with the page parameter to minimize the number of API calls needed for large datasets.
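Here is one way to sketch that pagination loop in Python. The `fetch` parameter is injectable purely so the loop can be exercised without hitting the API; the endpoint URL, headers, and `data` envelope follow the conventions used elsewhere in this guide and should be confirmed against the live API docs.

```python
import requests

def fetch_all_leads(base_url, headers, params, per_page=100, fetch=None):
    """Page through an endpoint 100 leads at a time until a short page.

    A page shorter than per_page (including an empty page) signals
    the last page, so no total-count field is needed.
    """
    if fetch is None:
        def fetch(page):
            r = requests.get(
                base_url,
                headers=headers,
                params={**params, "per_page": per_page, "page": page},
                timeout=30,
            )
            r.raise_for_status()
            return r.json().get("data", [])
    leads, page = [], 1
    while True:
        batch = fetch(page)
        leads.extend(batch)
        if len(batch) < per_page:  # short or empty page = done
            break
        page += 1
    return leads

# Fake fetch simulating 250 total leads (pages of 100, 100, 50):
fake = lambda page: [{"id": (page - 1) * 100 + i}
                     for i in range(min(100, 250 - (page - 1) * 100))]
print(len(fetch_all_leads("", {}, {}, fetch=fake)))  # 250
```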

Freeze the header row

Freeze row 1 in your Google Sheet so headers stay visible as the sheet grows. Go to View > Freeze > 1 row. Frozen headers also make it obvious at a glance if an automation ever writes into row 1 by mistake.

Run Apollo scrapes at off-peak hours

Apollo scrapes run in a queue and deliver faster when submitted outside peak hours. Triggering at midnight or early morning often results in delivery closer to the 11-hour minimum rather than the 48-hour maximum.

Archive old rows monthly

Google Sheets slows down noticeably above 50,000 rows. Move worked leads to an Archive tab monthly. Use a filter view to see only "New" status rows so your team is never scrolling through stale data.

Use named ranges for column references

Define named ranges for your key columns (e.g. "EmailColumn" = Sheet1!B:B). Reference these in your COUNTIF deduplication formulas. If you ever reorder columns, updating the named range fixes all formulas at once.

Other ScraperCity Data Sources for Your Sheet

Any ScraperCity scraper can feed a Google Sheet. Here are the most commonly used sources beyond Apollo, and what fields they add to your spreadsheet.

Scraper          Cost             Delivery   Key fields added to sheet
Google Maps      $0.01/place      5-30 min   place_name, address, phone, website, rating, review_count
Email Finder     $0.05/contact    1-10 min   verified email from name + company domain
Email Validator  $0.0036/email    1-10 min   deliverable, catch_all, mx_valid flags
Mobile Finder    $0.25/input      1-5 min    direct phone number from LinkedIn or email
Website Finder   per domain       5-15 min   contact email, phone, social links from domain
Yelp             $0.01/listing    5-15 min   business name, category, phone, address, rating
Store Leads      $0.0039/lead     instant    store_name, platform, monthly_revenue, contact_email
Lead Database    $649/mo plan     instant    3M+ B2B contacts with all Apollo fields

FAQ
