A website URL is provided in the code. The crawler fetches the page's source code and collects every URL it contains. It then visits each of those parent URLs in a second loop, gathers the child URLs found on each page, and writes the child URLs to a file called depth_1.
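The two-loop, depth-1 crawl described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the helper names `extract_links` and `crawl_depth_1` are hypothetical, and the page fetcher is passed in as a callable so the logic stays testable without a network connection.

```python
import re
from urllib.parse import urljoin

def extract_links(html, base_url):
    """Hypothetical helper: collect all href targets from page source,
    resolved against the page's own URL."""
    hrefs = re.findall(r'href=["\'](.*?)["\']', html)
    return [urljoin(base_url, h) for h in hrefs]

def crawl_depth_1(seed_url, fetch, out_path="depth_1"):
    """Gather URLs from the seed page (loop 1), then visit each parent
    URL and write its child URLs to out_path (loop 2)."""
    parent_urls = extract_links(fetch(seed_url), seed_url)
    with open(out_path, "w") as f:
        for parent in parent_urls:
            for child in extract_links(fetch(parent), parent):
                f.write(child + "\n")
    return parent_urls
```

With real HTTP fetching, `fetch` could be something like `lambda u: requests.get(u, timeout=10).text`; injecting it keeps the crawl logic independent of the HTTP library.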
forked from arfanrashid/Web-Crawler-python
JonathanLindon/Web-Crawler-python