CeWLeR crawls from a specified URL and collects words to create a custom wordlist.
It's a great tool for security testers and bug bounty hunters. The lists can be used for password cracking, subdomain enumeration, directory and file brute forcing, API endpoint discovery, etc. It's good to have an additional target-specific wordlist that is different from what everyone else uses.
CeWLeR was originally inspired by the really nice tool CeWL. I had some challenges with CeWL on a site I wanted a wordlist from, but without any Ruby experience I didn't know how to contribute or work around them. So instead I created a custom wordlist generator in Python to get the job done.
- Generates custom wordlists by scraping words from web sites
- A lot of options:
- Output to screen or file
- Can stay within subdomain, or visit sibling and child subdomains, or visit anything within the same top domain
- Can stay within a certain depth of a website
- Speed can be controlled
- Word length and casing can be configured
- JavaScript and CSS can be included
- Text can be extracted from PDF files (using pypdf)
- Crawled URLs can be output to separate file
- Scraped e-mail addresses can also be output to separate file
- Custom HTTP headers can be added
- And more
- Built on the excellent Scrapy framework for crawling, with the beautiful rich library for terminal output
Will output to screen unless a file is specified.
cewler --output wordlist.txt https://example.com
The rate is specified in requests per second. Depth controls link hops from the start URL (not URL path depth). Please play nicely and don't break any rules.
cewler --output wordlist.txt --rate 5 --depth 2 https://example.com
The default User-Agent is a common browser.
cewler --output wordlist.txt --user-agent "Cewler" https://example.com
It's possible to specify custom HTTP headers for the requests. Multiple headers can be specified.
cewler -H "X-Bounty: d14c14ec" https://httpbin.org/headers
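Headers are given in 'Name: Value' form and the flag can be repeated. As a rough illustration (this is a generic sketch, not CeWLeR's actual code; the function name `parse_headers` is made up), repeated flags could be collected into a header dict like this:

```python
def parse_headers(flags):
    """Parse repeated -H 'Name: Value' flags into a header dict (illustrative)."""
    headers = {}
    for flag in flags:
        # Split on the first colon only, so values may themselves contain ':'
        name, sep, value = flag.partition(":")
        if not sep:
            raise ValueError(f"Invalid header (missing ':'): {flag!r}")
        headers[name.strip()] = value.strip()
    return headers

print(parse_headers(["X-Bounty: d14c14ec", "User-Agent: Cewler"]))
# → {'X-Bounty': 'd14c14ec', 'User-Agent': 'Cewler'}
```

A later flag with the same name simply overwrites an earlier one, which matches how -H overrides the default User-Agent.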
Unless otherwise specified, words keep their original casing and must be at least 5 characters long.
cewler --output wordlist.txt --lowercase --min-word-length 2 --without-numbers https://example.com
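The effect of these filters can be sketched in a few lines of Python. This is not CeWLeR's internal code; the function and the word-splitting regex are illustrative assumptions:

```python
import re

def filter_words(text, min_length=5, lowercase=False, without_numbers=False):
    """Split text into words and apply CeWLeR-style filters (illustrative)."""
    words = set()
    for word in re.findall(r"[A-Za-z0-9]+", text):
        if len(word) < min_length:
            continue  # --min-word-length
        if without_numbers and any(c.isdigit() for c in word):
            continue  # --without-numbers
        words.add(word.lower() if lowercase else word)
    return sorted(words)

print(filter_words("Alpha beta42 GAMMA go", min_length=2,
                   lowercase=True, without_numbers=True))
# → ['alpha', 'gamma', 'go']
```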
The default is to just visit exactly the same (sub)domain as specified.
cewler --output wordlist.txt -s all https://example.com
cewler --output wordlist.txt -s children https://example.com
If you want, you can include links from <script> and <link> tags, plus words from within JavaScript and CSS.
cewler --output wordlist.txt --include-js --include-css https://example.com
It's easy to extract text from PDF files as well.
cewler --output wordlist.txt --include-pdf https://example.com
It's also possible to store the crawled URLs to a separate file.
cewler --output wordlist.txt --output-urls urls.txt https://example.com
It's also possible to store the scraped e-mail addresses to a separate file (they are always added to the wordlist).
cewler --output wordlist.txt --output-emails emails.txt https://example.com
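E-mail scraping of this kind usually boils down to a regex pass over the page text. The pattern below is a deliberately simple illustration (real address grammar is far more complex, and this is not CeWLeR's actual regex):

```python
import re

# Simple illustrative e-mail pattern; not CeWLeR's actual one.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(text):
    """Return unique e-mail addresses found in a chunk of scraped text."""
    return sorted(set(EMAIL_RE.findall(text)))

print(extract_emails("Contact admin@example.com or sales@example.com."))
# → ['admin@example.com', 'sales@example.com']
```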
You can specify an HTTP proxy.
cewler --proxy http://localhost:8080 https://example.com
If a crawl just takes too long, you can press ctrl + c once(!); the spider will finish its current requests, and whatever words have been found so far are then stored to the output file.
cewler -h
usage: cewler [-h] [-d DEPTH] [-css] [-js] [-pdf] [-l] [-m MIN_WORD_LENGTH] [-o OUTPUT] [-oe OUTPUT_EMAILS]
[-ou OUTPUT_URLS] [-r RATE] [-s {all,children,exact}] [--stream] [-u USER_AGENT] [-H HEADER] [-p PROXY]
[-v] [-w]
url
CeWLeR - Custom Word List generator Redefined
positional arguments:
url URL to start crawling from
options:
-h, --help show this help message and exit
-d, --depth DEPTH max link hops from start URL, 0 for unlimited (default: 2)
-css, --include-css include CSS from external files and <style> tags
-js, --include-js include JavaScript from external files and <script> tags
-pdf, --include-pdf include text from PDF files
-l, --lowercase lowercase all parsed words
-m, --min-word-length MIN_WORD_LENGTH
minimum word length to include (default: 5)
-o, --output OUTPUT file where to stream and store wordlist instead of screen (default: screen)
-oe, --output-emails OUTPUT_EMAILS
file where to stream and store e-mail addresses found (they are always included in the wordlist)
-ou, --output-urls OUTPUT_URLS
file where to stream and store URLs visited (default: not outputted)
-r, --rate RATE requests per second (default: 20)
-s, --subdomain_strategy {all,children,exact}
allow crawling of [all] domains under the same top domain (including children and siblings), only [exact]ly the same (sub)domain (default), or the same (sub)domain and any of its [children]
--stream writes to file after each request (may produce duplicates because of threading) (default: false)
-u, --user-agent USER_AGENT
User-Agent header to send (default: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36)
-H, --header HEADER custom header in 'Name: Value' format (can be used multiple times, overrides -u if 'User-Agent'
is specified)
-p, --proxy PROXY proxy URL ([http(s)://[user:pass@]]host[:port])
-v, --verbose a bit more detailed output
-w, --without-numbers
ignore words that are numbers or contain numbers
Example URL to scan https://sub.example.com:

| | -s exact* | -s children | -s all |
|---|---|---|---|
| sub.example.com | ✅ | ✅ | ✅ |
| child.sub.example.com | ❌ | ✅ | ✅ |
| sibling.example.com | ❌ | ❌ | ✅ |
| example.com | ❌ | ❌ | ✅ |

\* Default strategy
If you want to do some tweaking yourself, you can probably find what you want in src/cewler/constants.py and src/cewler/spider.py.
Package homepage: https://pypi.org/project/cewler/
python3 -m pip install cewler
python3 -m pip install cewler --upgrade
git clone https://github.com/roys/cewler.git --depth 1
cd cewler
This keeps dependencies isolated and avoids affecting your system Python.
python3 -m venv venv
source venv/bin/activate
python3 -m pip install -e .
This installs cewler and all its dependencies, creating the cewler command that you can run from anywhere (while the venv is active). Any changes you make to the source code will be immediately reflected when you run the command.
git pull
To run CeWLeR with Docker you first build the Docker image:
docker build . -t cewler
After the image finishes building you can run CeWLeR like this to store the output in the current folder:
docker run -v "$(pwd):/app" cewler --output /app/wordlist.txt --depth 1 https://blog.roysolberg.com
CeWLeR is pronounced "cooler".
A huge thank you to everyone who has contributed to making CeWLeR better! Your contributions, big and small, make a significant difference.
Contributions of any kind are welcome and recognized. From bug reports to coding, documentation to design, every effort is appreciated:
- Chris Dale - for testing, bug reporting and fixing
- Mathies Svarrer-LanthΓ©n - for adding support for PDF extraction
- webhak - for adding Docker support
