Adrishya-Spider-V2

This version ships with some minor changes; the main change is to the output files. V2 is ready to integrate into your recon process: unlike V1, which prompts for inputs, this version runs with a single command. Identification labels have also been removed from the output files, so you now get pure URLs as output, which you can use for your further recon.

Platforms:

  • Windows: all Windows distributions

  • Linux: all Linux distributions

Features:

  • Fast web crawling
  • Built on Python asyncio, which makes it much faster than synchronous crawlers (a minimal sketch of the approach follows this list)
  • Zero duplicate links
  • Link finder
  • JavaScript file finder
  • Brute-forces and parses sitemap.xml
  • Parses robots.txt
  • Finds AWS S3 buckets in response sources
  • Gets URLs from the Wayback Machine
  • Grep-friendly output format
  • Saves output to a txt file
  • Supports Burp input
  • Crawls multiple sites in parallel
  • Option to include third-party URLs in your results
  • Easy to use
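
The asyncio claim above boils down to fetching many pages concurrently while a shared visited set suppresses duplicate links. The sketch below illustrates that technique only; it is not Adrishya's actual code, and the use of aiohttp, the regex-based link extraction, and all names here are assumptions.

# Illustrative sketch of asyncio-based crawling with duplicate suppression.
# NOTE: not Adrishya's actual implementation; aiohttp and every name below
# are assumptions made for the example.
import asyncio
import re

import aiohttp

LINK_RE = re.compile(r'href="(https?://[^"]+)"')

async def fetch(session, url):
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            return await resp.text()
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return ""

async def crawl(start_url, max_pages=100):
    visited = set()          # "zero duplicate links": each URL is fetched once
    queue = [start_url]
    async with aiohttp.ClientSession() as session:
        while queue and len(visited) < max_pages:
            batch = [u for u in dict.fromkeys(queue) if u not in visited]
            queue = []
            visited.update(batch)
            # fetch the whole batch concurrently instead of one request at a time
            pages = await asyncio.gather(*(fetch(session, u) for u in batch))
            for html in pages:
                queue.extend(u for u in LINK_RE.findall(html) if u not in visited)
    return visited

if __name__ == "__main__":
    urls = asyncio.run(crawl("https://example.com"))
    print("\n".join(sorted(urls)))

The speed-up comes from asyncio.gather: while one response is in flight, other requests keep moving instead of blocking, which is why an async crawler outpaces a sequential one on I/O-bound work.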

📹 Demo Video:

Demo

Installation:

git clone https://github.com/kartikhunt3r/Adrishya-Spider-V2.git

cd Adrishya-Spider-V2

chmod +x *

pip install -r requirements.txt

Use:

python3 adrishya.py example.com
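
Because the output is plain URLs, it drops straight into the rest of a recon pipeline. Below is a small, hedged Python example of post-processing the saved txt output; the filename output.txt and the "URLs with query parameters" filter are assumptions for illustration, not part of the tool itself.

# Hedged example: consume Adrishya's pure-URL txt output in a recon pipeline.
# "output.txt" is an assumed filename; point it at whatever file the tool
# writes for your target.
from urllib.parse import urlparse, parse_qs

with open("output.txt") as fh:
    urls = sorted({line.strip() for line in fh if line.strip()})

# Example filter: URLs carrying query parameters, common targets for fuzzing.
with_params = [u for u in urls if parse_qs(urlparse(u).query)]

print(f"{len(urls)} unique URLs, {len(with_params)} with query parameters")
for url in with_params:
    print(url)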

🔗 Links

portfolio

linkedin

twitter

youtube
