This is a robots.txt-compliant crawler that tries to find out how deep the network of links and pages stemming from "https://wikipedia.org/" goes.
A robots.txt respecting web crawler to track how big the Wikipedia network of linked domains goes.
devramsean0/wikipedia-crawler
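The crawl loop the description implies (breadth-first over links, checking each host's robots.txt before fetching, and recording every domain reached) can be sketched as below. This is a minimal illustration, not the repository's actual code: the `LINK_GRAPH` and `ROBOTS_TXT` tables are hypothetical in-memory stand-ins for real HTTP fetches, so the sketch runs without network access.

```python
from collections import deque
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Hypothetical in-memory link graph standing in for real HTTP fetches.
LINK_GRAPH = {
    "https://wikipedia.org/": ["https://en.wikipedia.org/", "https://example.com/"],
    "https://en.wikipedia.org/": ["https://wikipedia.org/"],
    "https://example.com/": [],
}

# One robots.txt body per host; a real crawler would fetch
# https://<host>/robots.txt once and cache the parsed result.
ROBOTS_TXT = {
    "example.com": "User-agent: *\nDisallow: /",
}

_robots_cache = {}

def robots_allows(url, agent="*"):
    """Parse (and cache) the host's robots.txt, then ask if `url` is fetchable."""
    host = urlparse(url).netloc
    if host not in _robots_cache:
        rp = RobotFileParser()
        # No robots.txt at all means everything is allowed by default.
        rp.parse(ROBOTS_TXT.get(host, "").splitlines())
        _robots_cache[host] = rp
    return _robots_cache[host].can_fetch(agent, url)

def crawl(start, max_pages=100):
    """Breadth-first crawl from `start`; returns the set of domains reached."""
    seen, domains = set(), set()
    queue = deque([start])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen or not robots_allows(url):
            continue
        seen.add(url)
        domains.add(urlparse(url).netloc)
        queue.extend(LINK_GRAPH.get(url, []))
    return domains
```

With the stub data above, `crawl("https://wikipedia.org/")` reaches `wikipedia.org` and `en.wikipedia.org` but skips `example.com`, whose robots.txt disallows everything.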