A fast and easy-to-use content scraper for topic-centred web pages, e.g. blog posts, news articles and wikis.
The tool uses heuristics to extract the main content and ignore surrounding noise. No processing rules. No XPath. No configuration.
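The exact heuristics are not spelled out in this README. As a rough, illustrative sketch of what "extract the main content" can mean, the snippet below (using BeautifulSoup, which is not stated to be a scrab dependency) scores candidate containers by how much paragraph text they hold and keeps the densest one:

```python
# Illustrative only: a naive text-density heuristic, not scrab's actual algorithm.
# Requires beautifulsoup4 (`pip install beautifulsoup4`).
from bs4 import BeautifulSoup


def guess_main_content(html: str) -> str:
    """Return the text of the container whose direct <p> children carry the most text."""
    soup = BeautifulSoup(html, "html.parser")

    # Strip obvious noise before scoring.
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()

    candidates = soup.find_all(["article", "main", "section", "div", "body"])
    if not candidates:
        return soup.get_text(separator="\n", strip=True)

    def paragraph_text_length(tag) -> int:
        # Crude "density" signal: total text in the element's direct paragraphs.
        return sum(len(p.get_text(strip=True))
                   for p in tag.find_all("p", recursive=False))

    best = max(candidates, key=paragraph_text_length)
    return best.get_text(separator="\n", strip=True)
```

Real-world extractors usually weight more signals (link density, class-name hints, tag depth); the point of scrab's "no configuration" promise is that none of this needs to be tuned by the user.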
Install:

pip install scrab

Scrape the main content of a page:

scrab https://blog.post

Store extracted content to a file:

scrab https://blog.post > content.txt
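The same capture works from a Python script by invoking the CLI and reading its stdout; a minimal sketch (the URL is a placeholder, and `scrab` is assumed to be on PATH after installation):

```python
# Minimal sketch: run the scrab CLI from Python and capture the extracted text.
# Assumes `scrab` is on PATH after `pip install scrab`; the URL is a placeholder.
import subprocess

result = subprocess.run(
    ["scrab", "https://blog.post"],
    capture_output=True,
    text=True,
    check=True,  # raise if scrab exits with a non-zero status
)

with open("content.txt", "w", encoding="utf-8") as f:
    f.write(result.stdout)
```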
Roadmap:

- Support the `<main>` tag
- Add support for lists
- Add support for scripts
- Add support for markdown output format
- Download and save referenced images
- Extract and embed links
Development:

# Lint with flake8
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
# Check with mypy
mypy ./scrab
mypy ./tests
# Run tests
pytest
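This README does not document a Python-level API, so the test below is only a shape sketch of what a file under `./tests` might look like: `scrab.extract` is a hypothetical entry point used for illustration, not a function the package is known to expose.

```python
# tests/test_extract.py -- shape sketch only.
# `scrab.extract` is a HYPOTHETICAL name used for illustration;
# replace it with the package's real public API before using this test.
import pytest

scrab = pytest.importorskip("scrab")  # skip cleanly when scrab isn't installed

HTML = """
<html><body>
  <nav>menu noise</nav>
  <article><p>The actual story.</p></article>
  <footer>footer noise</footer>
</body></html>
"""


def test_extract_drops_surrounding_noise():
    text = scrab.extract(HTML)  # hypothetical call
    assert "The actual story." in text
    assert "menu noise" not in text
```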
Publish to PyPI:

rm -rf dist/*
python setup.py sdist bdist_wheel
twine upload dist/*

This project is licensed under the MIT License.