Extract domains/subdomains/FQDNs from files and URLs
git clone https://github.com/intrudir/domainExtractor.git
python3 domainExtractor.py
usage: domainExtractor.py [-h] [-f INPUTFILE] [-u URL] [-t TARGET] [-v]

This script will extract domains from the file you specify and add it to a final file

optional arguments:
  -h, --help            show this help message and exit
  -f INPUTFILE, --file INPUTFILE
                        Specify the file to extract domains from
  -u URL, --url URL     Specify the web page to extract domains from. One at a time for now
  -t TARGET, --target TARGET
                        Specify the target top-level domain you'd like to find and extract e.g. uber.com
  -v, --verbose         Enable slightly more verbose console output
python3 domainExtractor.py -f ~/Desktop/yahoo/test/test.html -t yahoo.com
It will extract, sort, and dedupe all domains that are found.
You can specify multiple files by separating them with commas (no spaces):
python3 domainExtractor.py -f amass.playstation.net.txt,subfinder.playstation.net.txt --target playstation.net
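For intuition, here is a minimal Python sketch of the extract/sort/dedupe step described above. The regex and the extract_domains helper are illustrative assumptions, not the script's actual internals.

import re

# Rough FQDN pattern: dot-separated labels of letters, digits, and hyphens.
# This pattern is an assumption for illustration, not the one the script uses.
DOMAIN_RE = re.compile(
    r'[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?'
    r'(?:\.[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?)+'
)

def extract_domains(text, target="all"):
    # Collect unique, lowercased matches; optionally keep only the target
    # domain and its subdomains; return them sorted.
    found = {m.group(0).lower() for m in DOMAIN_RE.finditer(text)}
    if target != "all":
        found = {d for d in found if d == target or d.endswith("." + target)}
    return sorted(found)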
python3 domainExtractor.py -u "https://yahoo.com" -t yahoo.com
# pulling from a file, extract all domains
python3 domainExtractor.py -f test.html --target all
# pull from the yahoo.com home page, extract all domains. If no target is specified, it defaults to 'all'
python3 domainExtractor.py -u "https://yahoo.com"
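Conceptually, the URL mode just fetches the page and runs the same extraction over the HTML. A minimal sketch, reusing the hypothetical extract_domains helper from the earlier snippet (the fetch details are an assumption, not the script's actual code):

import urllib.request

def extract_from_url(url, target="all"):
    # Download the page body, then reuse the same regex extraction.
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return extract_domains(html, target)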
1) If you already have a final file for the target, it will notify you of any domains you didn't have before
2) It will append them to the final file
3) It will log each new domain to logs/newdomains.{target}.txt with the date & time it was found
This allows you to check the same target across multiple files and be notified of any new domains found!
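The diff-and-log behavior described above could look roughly like this sketch. The final-file name and the record_new_domains helper are assumptions for illustration; only the logs/newdomains.{target}.txt path comes from the tool's own description.

from datetime import datetime
from pathlib import Path

def record_new_domains(domains, target):
    # Hypothetical final-file name; the real script's naming may differ.
    final = Path(f"final.{target}.txt")
    known = set(final.read_text().split()) if final.exists() else set()
    new = sorted(set(domains) - known)
    if new:
        # Append only the domains we have not seen before.
        with final.open("a") as f:
            f.write("\n".join(new) + "\n")
        # Log each new domain with the date & time it was found.
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        log = Path("logs") / f"newdomains.{target}.txt"
        log.parent.mkdir(exist_ok=True)
        with log.open("a") as f:
            for d in new:
                f.write(f"[{stamp}] {d}\n")
    return new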
I first run it against my Amass results, then against my Subfinder results.
The script will sort and dedupe, and notify me of how many new, unique domains came from Subfinder's results.
It will add them to the final file and log just the new ones to logs/newdomains.{target}.txt.
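For example (the result file names below are hypothetical; the flags are the ones from the usage above):

python3 domainExtractor.py -f amass.yahoo.com.txt --target yahoo.com
python3 domainExtractor.py -f subfinder.yahoo.com.txt --target yahoo.com

The second run reports and logs only the domains Subfinder found that Amass did not.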