PhantomCrawler allows users to simulate website interactions through different proxy IP addresses. It leverages Python, requests, and BeautifulSoup to offer a simple and effective way to test website behaviour under varied proxy configurations.
Features:
- Utilizes a list of proxy IP addresses from a specified file.
- Supports both HTTP and HTTPS proxies.
- Allows users to input the target website URL, proxy file path, and a static port.
- Makes HTTP requests to the specified website using each proxy.
- Parses HTML content to extract and visit links on the webpage.
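The request-then-parse flow described in the features above can be sketched roughly as follows. This is an illustrative sketch, not PhantomCrawler's actual source: the function names (`extract_links`, `crawl_with_proxy`) are hypothetical, and the proxy dictionary format follows the standard `requests` convention.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def extract_links(html: str, base_url: str) -> list[str]:
    """Parse HTML and return the absolute URLs of all anchor links."""
    soup = BeautifulSoup(html, "html.parser")
    return [urljoin(base_url, a["href"]) for a in soup.find_all("a", href=True)]


def crawl_with_proxy(url: str, ip: str, port: str) -> list[str]:
    """Fetch `url` through a single HTTP/HTTPS proxy and visit each link found."""
    proxy = {
        "http": f"http://{ip}:{port}",
        "https": f"http://{ip}:{port}",
    }
    resp = requests.get(url, proxies=proxy, timeout=10)
    links = extract_links(resp.text, url)
    for link in links:
        try:
            # Visit each discovered link through the same proxy.
            requests.get(link, proxies=proxy, timeout=10)
        except requests.RequestException:
            pass  # Skip links that fail through this proxy.
    return links
```

The real script repeats this for every proxy in the supplied file, combining each IP with the static port the user enters.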
Usage:
- POC Testing: Simulate website interactions to assess functionality under different proxy setups.
- Web Traffic Increase: Boost website hits by generating requests from multiple proxy IPs.
- Proxy Rotation Testing: Evaluate the effectiveness of rotating proxy IPs.
- Web Scraping Testing: Assess web scraping tasks under different proxy configurations.
- DDoS Awareness: Caution: the tool can be misused as a DDoS tool. Ensure responsible and ethical use.
proxies.txt
- Add one proxy per line in the format 50.168.163.176:80
- You can source proxies from here: https://free-proxy-list.net/ These free proxies are not validated and some may not work, so validate them before adding them to the file.
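Since many free proxies are dead on arrival, a small validation pass before running the crawler saves time. The helper below is a hypothetical sketch (not part of PhantomCrawler), and the test URL `https://httpbin.org/ip` is just one common choice of endpoint:

```python
import requests


def load_proxies(path: str) -> list[str]:
    """Read ip:port entries from a proxy file, one per line, skipping blanks."""
    with open(path) as fh:
        return [line.strip() for line in fh if line.strip()]


def validate_proxies(proxies: list[str],
                     test_url: str = "https://httpbin.org/ip",
                     timeout: int = 5) -> list[str]:
    """Return only the proxies that successfully fetch `test_url` in time."""
    working = []
    for ip_port in proxies:
        proxy = {"http": f"http://{ip_port}", "https": f"http://{ip_port}"}
        try:
            requests.get(test_url, proxies=proxy, timeout=timeout)
            working.append(ip_port)
        except requests.RequestException:
            pass  # Dead or slow proxy; drop it.
    return working
```

Writing the result of `validate_proxies(load_proxies("proxies.txt"))` back to the file keeps only usable entries.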
How to Use:
- Clone the repository:
git clone https://github.com/spyboy-productions/PhantomCrawler.git
- Install dependencies:
pip3 install -r requirements.txt
- Run the script:
python3 PhantomCrawler.py
Disclaimer: PhantomCrawler is intended for educational and testing purposes only. Users are cautioned against any misuse, including potential DDoS activities. Always ensure compliance with the terms of service of websites being tested and adhere to ethical standards.