A powerful Python-based web scraper that extracts business information from Google Maps search results for any search term. Perfect for market research, lead generation, and business intelligence.
- Universal Search: Search for any type of business (restaurants, hotels, shops, services, etc.)
- Comprehensive Data Extraction: Name, phone number, website, address, and Google Maps URL
- Multi-Language Support: Forces English interface for consistent data extraction
- Email Extraction: Automatically extracts email addresses from business websites
- Batch Processing: Processes businesses in batches with configurable delays
- Progress Tracking: Real-time progress updates and countdown timers
- Organized Output: Creates separate folders for each search term
- Resume Capability: Continues from where it left off if interrupted
- Headless Mode: Run in background without opening browser windows
- Rate Limiting: Built-in delays to avoid being blocked
- Download the executable from the link above
- Run GoogleMapsScraper.exe and start scraping — no installation required!
- Python 3.7 or higher
- Google Chrome browser
- Internet connection
- Clone or download the repository:
  git clone <repository-url>
  cd google-map-scraper
- Install dependencies:
  python scripts/install_dependencies.py
  Or manually:
  pip install -r requirements.txt
- Run the scraper:
  python run.py
- Windows 10/11 (64-bit)
- Google Chrome browser
- Internet connection
The scraper provides an intuitive menu-driven interface:
GOOGLE MAPS UNIVERSAL SCRAPER - CONTROL CENTER
======================================================================
MAIN MENU
======================================================================
1. Run Scraper
2. Extract Emails from Websites
3. Clear Output Files
4. Install Dependencies
5. Fix/Test WebDriver
6. Exit
======================================================================
- Select option 1 from the main menu
- Enter your search term (e.g., "restaurants in Paris", "hotels in Tokyo", "coffee shops in London")
- Configure settings:
- Headless mode (recommended for faster execution)
- Batch delay (default: 30 seconds between batches)
- Start scraping and follow the interactive prompts
- restaurants in New York
- hotels in Tokyo Japan
- coffee shops near me
- dentists in London
- gyms in Berlin
- pharmacies in Paris
- bookstores in Amsterdam
The scraper generates CSV files with the following columns:
| Column | Description |
|---|---|
| name | Business name |
| phone | Phone number |
| website | Website URL |
| address | Full address |
| google-map-url | Google Maps listing URL |
| scraped_at | Timestamp of extraction |
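As a minimal sketch of how rows matching this schema might be written (the `write_rows` helper and field order here are illustrative, not the project's actual code):

```python
import csv
import io
from datetime import datetime, timezone

# Column order matching the CSV schema above.
FIELDNAMES = ["name", "phone", "website", "address", "google-map-url", "scraped_at"]

def write_rows(rows, fh):
    """Write business records to an open file handle using the schema above."""
    writer = csv.DictWriter(fh, fieldnames=FIELDNAMES)
    writer.writeheader()
    for row in rows:
        # Stamp each record at write time if the scraper did not already.
        row.setdefault("scraped_at", datetime.now(timezone.utc).isoformat())
        writer.writerow(row)

buf = io.StringIO()
write_rows([{"name": "Cafe Example", "phone": "+33 1 23 45 67 89",
             "website": "https://example.com", "address": "1 Rue Exemple, Paris",
             "google-map-url": "https://maps.google.com/?cid=123"}], buf)
```

Using `csv.DictWriter` with a fixed `fieldnames` list keeps every batch file's columns in the same order, which makes merging batches into `final_results.csv` straightforward.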
output/
├── restaurants_in_paris/
│   ├── batch_1_businesses.csv
│   ├── batch_2_businesses.csv
│   └── final_results.csv
├── hotels_in_tokyo/
│   ├── batch_1_businesses.csv
│   └── final_results.csv
└── ...
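A search term maps to a lowercased, underscore-separated folder name, as the tree above shows. One way to sketch that mapping (the `search_folder` helper is illustrative, not the project's actual code):

```python
import re

def search_folder(term):
    """Map a search term to a filesystem-safe output folder name (illustrative)."""
    # Lowercase, collapse any run of non-alphanumeric characters to "_",
    # and trim leading/trailing underscores.
    slug = re.sub(r"[^a-z0-9]+", "_", term.lower()).strip("_")
    return f"output/{slug}"
```

For example, "restaurants in Paris" becomes `output/restaurants_in_paris`, matching the layout above.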
The built-in email scraper automatically extracts email addresses from business websites:
- Select option 2 from the main menu
- Choose a search folder from your previous scraping sessions
- Select a CSV file to process
- Confirm extraction and wait for results
- Standard email format: user@domain.com
- Spaced format: user @ domain . com
- Obfuscated format: user[at]domain[dot]com
- Parentheses format: user(at)domain(dot)com
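A minimal sketch of how these obfuscated formats can be normalized back to a standard address (the `normalize_email` helper and its regexes are an assumption for illustration, not the project's actual implementation):

```python
import re

def normalize_email(text):
    """Collapse common obfuscations back to user@domain.com, or return None."""
    # Undo [at]/(at) and [dot]/(dot) substitutions (case-insensitive).
    t = re.sub(r"[\[\(]\s*at\s*[\]\)]", "@", text, flags=re.I)
    t = re.sub(r"[\[\(]\s*dot\s*[\]\)]", ".", t, flags=re.I)
    # Remove padding around "@" and "." ("user @ domain . com").
    # Note: on long page text this would also join sentence-ending periods,
    # so a real extractor would apply it to short candidate snippets only.
    t = re.sub(r"\s*@\s*", "@", t)
    t = re.sub(r"\s*\.\s*", ".", t)
    m = re.search(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", t)
    return m.group(0) if m else None
```

All four formats listed above normalize to the same `user@domain.com` result.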
- Headless Mode: Run browser in background (faster, no GUI)
- Batch Delay: Time between batches (0-300 seconds)
- Max Businesses: Limit total businesses to scrape
- Request Delay: Delay between website requests for email extraction
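The batch delay setting can be pictured as a simple pause between fixed-size chunks of work. A sketch, with `process_in_batches` as a hypothetical helper (the real scraper's batching logic may differ):

```python
import time

def process_in_batches(items, batch_size=20, batch_delay=30, sleep=time.sleep):
    """Yield items in fixed-size batches, pausing batch_delay seconds between them."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
        if start + batch_size < len(items):
            sleep(batch_delay)  # rate limiting: no pause after the final batch
```

Injecting `sleep` as a parameter makes the pacing testable without real waiting; in production the default `time.sleep` applies the configured delay.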
The scraper automatically configures Chrome with optimal settings:
- Disables images and unnecessary features for faster loading
- Forces English language interface
- Suppresses logs and errors
- Optimized for automation
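The settings above correspond to standard Chrome command-line flags. A sketch of the kind of flag list one might pass to Selenium's ChromeOptions (the `chrome_flags` helper and exact flag selection are assumptions, not the project's actual configuration):

```python
def chrome_flags(headless=True):
    """Chrome command-line flags approximating the settings described above."""
    flags = [
        "--lang=en-US",                          # force English UI for stable selectors
        "--blink-settings=imagesEnabled=false",  # skip image downloads for speed
        "--disable-extensions",                  # trim features not needed for scraping
        "--log-level=3",                         # suppress Chrome's own logging noise
        "--disable-gpu",
    ]
    if headless:
        flags.append("--headless=new")           # background mode, no visible window
    return flags
```

Each flag would be registered via `options.add_argument(...)` on a Selenium `ChromeOptions` instance before creating the driver.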
from scraper import GoogleMapsScraper

# Initialize scraper
scraper = GoogleMapsScraper(headless=True, batch_delay=30)

# Start scraping
success = scraper.scrape_all("restaurants in Paris", max_businesses=100)
if success:
    print(f"Scraped {len(scraper.businesses_data)} businesses")
    scraper.save_to_csv("my_results.csv")

from email_scraper import EmailScraper

# Initialize email scraper
email_scraper = EmailScraper()

# Extract emails from a specific file
success = email_scraper.run_email_scraper("restaurants_in_paris", "final_results.csv")

google-map-scraper/
├── scraper.py               # Main scraper class
├── email_scraper.py         # Email extraction functionality
├── run.py                   # Interactive menu interface
├── requirements.txt         # Python dependencies
├── README.md                # This file
├── output/                  # Generated CSV files
│   ├── search_term_1/
│   └── search_term_2/
└── scripts/                 # Utility scripts
    ├── install_dependencies.py
    ├── clear_output.py
    └── fix_webdriver.py
- ChromeDriver issues:
  python scripts/fix_webdriver.py
- Missing dependencies:
  python scripts/install_dependencies.py
- Clear output files:
  python scripts/clear_output.py
- "No business links found": Try a different search term or wait longer
- "WebDriver setup failed": Check Chrome installation and run fix script
- "Timeout waiting for elements": Increase delays or check internet connection
- Use headless mode for faster execution
- Set appropriate batch delays (30+ seconds recommended)
- Monitor your scraping rate to avoid being blocked
- Use specific search terms for better results
- Clear output files regularly to save disk space
- Respect robots.txt: Check website policies before scraping
- Rate limiting: Built-in delays help avoid overwhelming servers
- Terms of service: Ensure compliance with Google Maps terms
- Data usage: Use scraped data responsibly and legally
- Google's anti-bot measures: May occasionally block automated requests
- Dynamic content: Some elements may not load consistently
- Rate limits: Excessive requests may result in temporary blocks
- Browser updates: Chrome updates may require WebDriver updates
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is for educational and research purposes. Please ensure compliance with applicable laws and terms of service.
If you encounter issues:
- Check the troubleshooting section
- Run the fix scripts
- Check your internet connection
- Verify Chrome installation
- Create an issue with detailed error information
The scraper is regularly updated to handle:
- Google Maps interface changes
- Chrome browser updates
- New anti-bot measures
- Performance improvements
Happy Scraping!
For questions or support, please open an issue in the repository.