wikipedia-crawler

This is a robots.txt-compliant crawler that tries to find out how deep the network of links and pages stemming from https://wikipedia.org/ goes.
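The crawler's own source is not shown on this page, but the idea can be illustrated with a minimal breadth-first sketch in Python: check each host's robots.txt before fetching a page, extract the page's links, and record each discovered URL's link distance from the start page. Everything below (the `crawl` and `allowed` helpers, the user-agent string, and the page limit) is a hypothetical illustration, not the repository's actual implementation.

```python
# Minimal sketch of a robots.txt-compliant breadth-first crawler starting at
# wikipedia.org. Illustrative only; names and limits here are assumptions.
import urllib.robotparser
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags in a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def allowed(url, robots_cache):
    """Consult the host's robots.txt before fetching (cached per host)."""
    parsed = urlparse(url)
    root = f"{parsed.scheme}://{parsed.netloc}"
    if root not in robots_cache:
        rp = urllib.robotparser.RobotFileParser(urljoin(root, "/robots.txt"))
        try:
            rp.read()
        except OSError:
            rp = None  # robots.txt unreachable: skip the host to stay polite
        robots_cache[root] = rp
    rp = robots_cache[root]
    return rp is not None and rp.can_fetch("wikipedia-crawler", url)


def crawl(start="https://wikipedia.org/", max_pages=50):
    """Breadth-first crawl that records how deep each reachable page is."""
    robots_cache = {}
    seen = {start: 0}          # URL -> link distance from the start page
    queue = deque([start])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        depth = seen[url]
        if not allowed(url, robots_cache):
            continue
        try:
            with urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)
            if link.startswith("http") and link not in seen:
                seen[link] = depth + 1
                queue.append(link)
        print(f"depth {depth}: {url}")
    return seen


if __name__ == "__main__":
    crawl()
```

Caching one RobotFileParser per host avoids re-downloading robots.txt for every page on the same domain, which matters once the crawl fans out across many linked sites.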

About

A robots.txt-respecting web crawler that tracks how far the Wikipedia network of linked domains extends.
