Submit URLs for indexing using Google's free Indexing API (200 requests/day).
Runs as an interactive tool or a silent daily service with auto-resume.
| File | Description |
|---|---|
| `bulk_indexer.py` | Main script |
| `credentials.txt` | Your API credentials (fill this!) |
| `urls_to_index.txt` | Your URL list, one per line |
| `run_daily_indexer.bat` | Windows batch file for Task Scheduler |
Auto-created after first run:
| File | Description |
|---|---|
| `indexing_master.xlsx` | Master tracking spreadsheet (5 sheets) |
| `indexing_log.txt` | Rolling log of the last 200 requests |
| `indexer_state.json` | Quota counter, resume position, run history |
```shell
git clone https://github.com/YOUR_USER/bulk-indexer.git
cd bulk-indexer
```

Fill in `credentials.txt`:

```
service_account_json=my-service-account.json
site_url=https://www.yoursite.com/
```

Add your URLs to `urls_to_index.txt`:

```
https://www.yoursite.com/page-1
https://www.yoursite.com/page-2
https://www.yoursite.com/page-3
```

Then run:

```shell
python bulk_indexer.py
```

Dependencies install automatically on first run. No `pip install` needed.
You need a Service Account JSON key with the Indexing API enabled.
Google Cloud Console
└─ Select your project
└─ APIs & Services → Library
└─ Search "Web Search Indexing API" → Enable
Google Cloud Console
└─ IAM & Admin → Service Accounts
└─ Click your service account (or create one)
└─ Keys tab → Add Key → Create new key → JSON
└─ 📥 File downloads — save it in this folder
Google Search Console (search.google.com/search-console)
└─ Settings → Users and permissions
└─ Add user → paste service account email → Owner
The service account email is inside the JSON file (the `client_email` field).
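If you don't want to open the key file by hand, the email can be read out with a few lines of standard-library Python. A minimal sketch (`service_account_email` is a hypothetical helper, and the filename is the example one from `credentials.txt`):

```python
import json

def service_account_email(key_path: str) -> str:
    """Read the service account email from the downloaded JSON key."""
    with open(key_path, encoding="utf-8") as f:
        return json.load(f)["client_email"]

# Example: print the email to paste into Search Console
# print(service_account_email("my-service-account.json"))
```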
| Command | Mode | Description |
|---|---|---|
| `python bulk_indexer.py` | Interactive | Full menu in CMD: choose file, action, see stats |
| `python bulk_indexer.py --auto` | Silent | No interaction; picks up where it left off |
| `python bulk_indexer.py --status` | Status | Shows quota, history, progress |
| `python bulk_indexer.py --log` | Log | Prints last 200 indexed URLs |
| `python bulk_indexer.py --help` | Help | Usage info |
```text
GOOGLE INDEXING API - BULK URL INDEXER
Free: 200 requests/day | Batch mode | Auto-resume
======================================================================
Site:        https://www.yoursite.com/
Credentials: my-service-account.json
Quota today: 0/200 used | 200 remaining
All-time:    1,450 indexed | 12 runs

MENU:
[1] Index URLs from file (TXT / CSV / XLSX)
[2] Index URLs from sitemap
[3] Retry failed URLs
[4] View status & history
[5] View last 200 log entries
[6] Reset progress (start file from beginning)
[0] Exit
```
`indexing_master.xlsx` contains 5 sheets:
| Sheet | Content |
|---|---|
| All Requests | Full history of every request sent |
| Daily Summary | Success/fail counts per day |
| URL Status (Latest) | Most recent status for each unique URL |
| Failed (Retry) | URLs that failed & haven't succeeded yet |
| Stats | Total counts, quota usage, run count |
Run 200 URLs/day automatically with zero interaction.
- Open Task Scheduler (`taskschd.msc`)
- Click Create Basic Task
- Set trigger: Daily at your preferred time (e.g., 9:00 AM)
- Action: Start a program
- Program: `python`
- Arguments: `"C:\path\to\bulk_indexer.py" --auto`
- Start in: `C:\path\to\`
- Finish

Or simply schedule `run_daily_indexer.bat`.
```text
Day 1: URLs 1-200 ✅ (quota used: 200/200)
Day 2: URLs 201-400 ✅ (resumes automatically)
Day 3: URLs 401-600 ✅
...until all URLs are done
```
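The day-by-day progression amounts to slicing the URL list by a daily quota and a saved position. A simplified sketch of the idea (the actual script persists its position in `indexer_state.json`; `next_batch` is an illustrative helper, not its real code):

```python
DAILY_QUOTA = 200

def next_batch(urls, position, used_today=0):
    """Return today's slice of URLs and the new resume position."""
    remaining = max(0, DAILY_QUOTA - used_today)
    batch = urls[position:position + remaining]
    return batch, position + len(batch)
```

On day 1 this yields URLs 1-200 and saves position 200; on day 2 it resumes from there with a fresh quota.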
| Format | How it works |
|---|---|
| `.txt` | One URL per line; lines starting with `#` are ignored |
| `.csv` | Auto-detects the column containing URLs |
| `.xlsx` | Auto-detects the column containing URLs |
| Sitemap | Fetches URLs from an XML sitemap (supports sitemap index) |
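Sitemap parsing needs nothing beyond the standard library. A sketch covering both a plain `<urlset>` and a `<sitemapindex>` (`parse_sitemap` is a hypothetical helper; fetching the XML is assumed to happen elsewhere, e.g. via `requests`):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text):
    """Return (page_urls, child_sitemap_urls) from one sitemap document."""
    root = ET.fromstring(xml_text)
    locs = [e.text.strip() for e in root.iter(NS + "loc") if e.text]
    if root.tag == NS + "sitemapindex":
        return [], locs  # a sitemap index: follow each child sitemap next
    return locs, []
```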
| Tier | Limit | Cost |
|---|---|---|
| Free | 200 publish requests/day/project | $0 |
| Extended | Request via Google form | May need billing account |
- Quota resets daily at midnight (calendar day)
- Both `URL_UPDATED` and `URL_DELETED` count toward the 200 limit
- Batch requests still count per URL (40 URLs in 1 batch = 40 quota used)
- The script tracks quota automatically and stops when the limit is reached
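The midnight reset can be implemented by storing the date next to the counter. A sketch of the idea (field names here are illustrative, not the actual `indexer_state.json` schema):

```python
import datetime

def remaining_quota(state, limit=200, today=None):
    """Requests left today; the counter resets when the stored date is stale."""
    today = today or datetime.date.today()
    if state.get("quota_date") != today.isoformat():
        # New calendar day: reset the counter before reporting
        state["quota_date"] = today.isoformat()
        state["used"] = 0
    return limit - state["used"]
```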
Each Google Cloud project gets its own 200/day quota. One account can hold multiple projects (new accounts commonly start with a limit of about 12), so 12 projects = 2,400 requests/day.
Installed automatically on first run:
- `pandas`
- `openpyxl`
- `requests`
- `google-auth`
- `google-api-python-client`
- `google-auth-httplib2`
Python 3.7+ required.
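An auto-install step like this typically boils down to a find-missing-then-pip loop. A sketch under that assumption (not the script's exact code; note that some pip package names differ from their importable module names):

```python
import importlib.util
import subprocess
import sys

REQUIRED = ["pandas", "openpyxl", "requests", "google-auth",
            "google-api-python-client", "google-auth-httplib2"]

# pip package name -> importable module name, where they differ
MODULE_NAMES = {"google-auth": "google.auth",
                "google-api-python-client": "googleapiclient",
                "google-auth-httplib2": "google_auth_httplib2"}

def missing(packages):
    """Packages whose module cannot be found in this environment."""
    out = []
    for pkg in packages:
        mod = MODULE_NAMES.get(pkg, pkg)
        try:
            found = importlib.util.find_spec(mod) is not None
        except ModuleNotFoundError:
            found = False  # parent package absent counts as missing
        if not found:
            out.append(pkg)
    return out

def ensure_installed(packages=REQUIRED):
    for pkg in missing(packages):
        subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])
```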
- The Indexing API is officially intended for pages with `JobPosting` or `BroadcastEvent` structured data, but `URL_UPDATED` notifications work for general pages as a crawl signal
- Submitting a URL does not guarantee indexing; Google still decides based on content quality
- Do not submit the same URL repeatedly — it won't help and wastes quota
- The script deduplicates failed URLs against successful ones when retrying
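That retry dedup is essentially an order-preserving set difference. A minimal sketch (`urls_to_retry` is a hypothetical helper, not the script's actual function):

```python
def urls_to_retry(failed, succeeded):
    """Failed URLs that have not since succeeded, in order, no duplicates."""
    done = set(succeeded)
    seen = set()
    out = []
    for url in failed:
        if url not in done and url not in seen:
            seen.add(url)
            out.append(url)
    return out
```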
MIT — use it however you want.