Dude is a very simple framework for writing web scrapers with Python decorators. Its design, inspired by Flask, lets you build a web scraper in just a few lines of code with an easy-to-learn syntax.
🚨 Dude is currently in Pre-Alpha. Please expect breaking changes.
To install, simply run the following from your terminal:

```shell
pip install pydude
playwright install  # install Playwright binaries for Chrome, Firefox, and WebKit
```
The simplest web scraper will look like this:

```python
from dude import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}
```
The example above gets all the hyperlink elements on a page and calls the handler function `get_link()` for each of them.
You can run your scraper from the terminal/shell/command line by supplying URLs, the output filename of your choice, and the paths to your Python scripts to the `dude scrape` command:

```shell
dude scrape --url "<url>" --output data.json path/to/script.py
```
- Simple Flask-inspired design - build a scraper with decorators.
- Uses Playwright API - run your scraper in Chrome, Firefox, and WebKit and leverage Playwright's powerful selector engine supporting CSS, XPath, text, regex, etc.
- Data grouping - group related results.
- URL pattern matching - run functions on specific URLs.
- Priority - reorder functions based on priority.
- Setup function - enable setup steps (clicking dialogs or login).
- Navigate function - enable navigation steps to move to other pages.
- Custom storage - option to save data to other formats or database.
- Async support - write async handlers.
- Option to use other parser backends aside from Playwright.
  - BeautifulSoup4 - `pip install pydude[bs4]`
  - Parsel - `pip install pydude[parsel]`
  - lxml - `pip install pydude[lxml]`
  - Pyppeteer - `pip install pydude[pyppeteer]`
  - Selenium - `pip install pydude[selenium]`
By default, Dude uses Playwright, but it also gives you the option to use a parser backend you are already familiar with: BeautifulSoup4, Parsel, lxml, Pyppeteer, or Selenium.
Here is a summary of the features supported by each parser backend (the CSS, XPath, Text, and Regex columns refer to selector support):

| Parser Backend | Supports Sync? | Supports Async? | CSS | XPath | Text | Regex | Setup Handler | Navigate Handler |
|----------------|----------------|-----------------|-----|-------|------|-------|---------------|------------------|
| Playwright     | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| BeautifulSoup4 | ✅ | ✅ | ✅ | 🚫 | 🚫 | 🚫 | 🚫 | 🚫 |
| Parsel         | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| lxml           | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | 🚫 |
| Pyppeteer      | 🚫 | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
| Selenium       | ✅ | ✅ | ✅ | ✅ | ✅ | 🚫 | ✅ | ✅ |
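To make the selector columns concrete, here is a backend-free sketch of what XPath-style and regex-style selection look like, using only the Python standard library (`xml.etree.ElementTree` supports a limited XPath subset). This is not dude code — dude delegates selection to the chosen backend — it just illustrates the two query flavors:

```python
# Stdlib-only illustration of two selector flavors from the table above.
# Not dude code: real backends (Playwright, lxml, etc.) have far richer
# selector engines than ElementTree's limited XPath subset.
import re
import xml.etree.ElementTree as ET

html = """<html><body>
<a href="https://example.com">Example</a>
<a href="https://example.org">Another</a>
</body></html>"""

# XPath-style: find every <a> element anywhere in the tree.
root = ET.fromstring(html)
hrefs = [a.get("href") for a in root.findall(".//a")]
print(hrefs)  # ['https://example.com', 'https://example.org']

# Regex-style: pull href attributes straight out of the raw markup.
regex_hrefs = re.findall(r'href="([^"]+)"', html)
print(regex_hrefs)  # ['https://example.com', 'https://example.org']
```

Which flavors are available depends on the backend you pick, as the table shows — e.g. BeautifulSoup4 only supports CSS selectors, while Playwright supports all four.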
Read the complete documentation at https://roniemartinez.github.io/dude/. All the advanced and useful features are documented there.
This project is at a very early stage. This dude needs some love! ❤️
Contribute to this project through feature requests, idea discussions, bug reports, pull requests, or GitHub Sponsors. Your help is highly appreciated.
- ✅ Any dude should know how to work with selectors (CSS or XPath).
- ✅ This library was built on top of Playwright. Any dude should be at least familiar with the basics of Playwright - they also extended the selectors to support text, regular expressions, etc. See Selectors | Playwright Python.
- ✅ Python decorators... you'll live, dude!
- ✅ A recursive acronym looks nice.
- ✅ Adding "uncomplicated" (like `ufw`) into the name says it is a very simple framework.
- ✅ Puns! I also think that if you want to do web scraping, there's probably some random dude around the corner who can make it very easy for you to get started. 😊