In this 'Web Scraping with Python' repo, we have covered the following use cases:
- Web Scraping using Selenium PyUnit
- Web Scraping using Selenium Pytest
- Web Scraping of dynamic website using Beautiful Soup and Selenium
The following websites are used for demonstrating web scraping:
As mentioned online, scraping public web data from YouTube is legal as long as you don't go after information that is not available to the general public. However, there might be cases where YouTube scraping throws errors (or exceptions) when scraping is done on the cloud Selenium Grid.
Step 1
Create a virtual environment by triggering the virtualenv venv command on the terminal
virtualenv venv
Step 2
Activate the newly created virtual environment by triggering the source venv/bin/activate command on the terminal
source venv/bin/activate
Follow steps (3) and (4) for performing web scraping on the LambdaTest Cloud Grid:
Step 3
Procure the LambdaTest User Name and Access Key by navigating to the LambdaTest Account Page. You might need to create an account on LambdaTest since it is used for running tests (or scraping) on the cloud Grid.
Step 4
Add the LambdaTest User Name and Access Key in the Makefile that is located in the parent directory. Once done, save the Makefile.
Run the make install command on the terminal to install the desired packages (or dependencies) - Pytest, Selenium, Beautiful Soup, etc.
make install
With this, all the dependencies and environment variables are in place. We are all set for web scraping with the desired frameworks (i.e., PyUnit, Pytest, and Beautiful Soup).
The following websites are used for demonstration:
Follow the steps below to perform scraping on your local machine:
Step 1
Set the EXEC_PLATFORM environment variable to local by triggering the export EXEC_PLATFORM=local command on the terminal.
Step 2
Trigger the make clean command on the terminal to remove the __pycache__ folder(s) and .pyc files
Step 3
The Chrome browser is invoked in headless mode. It is recommended to install Chrome on your machine before you proceed to Step (4).
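For reference, here is a minimal sketch of how Chrome can be invoked in headless mode with Selenium 4; the e-commerce playground URL is used purely for illustration, and the actual implementation in this repo may differ:

```python
from selenium import webdriver

# Configure Chrome to run in headless mode (no visible browser window)
options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # plain "--headless" for older Chrome versions

driver = webdriver.Chrome(options=options)
driver.get("https://ecommerce-playground.lambdatest.io/")
print("Page title:", driver.title)
driver.quit()
```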
Step 4
Trigger the make scrap-using-pyunit command on the terminal to scrape content from the above-mentioned websites
As seen above, the content from the LambdaTest YouTube channel and the LambdaTest e-commerce playground is scraped successfully!
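For illustration, here is a bare-bones sketch of how a PyUnit (unittest) based scraping test could be structured; the class name, test method, and assertion are hypothetical and not lifted from this repo:

```python
import unittest

from selenium import webdriver


class ECommerceScrapingTest(unittest.TestCase):
    def setUp(self):
        # Headless Chrome keeps the run non-interactive
        options = webdriver.ChromeOptions()
        options.add_argument("--headless=new")
        self.driver = webdriver.Chrome(options=options)

    def test_scrape_page_title(self):
        self.driver.get("https://ecommerce-playground.lambdatest.io/")
        # Trivial scraping check; real tests would collect product/video details
        self.assertTrue(len(self.driver.title) > 0)
        print("Page title:", self.driver.title)

    def tearDown(self):
        self.driver.quit()


if __name__ == "__main__":
    unittest.main()
```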
The following websites are used for demonstration:
Follow the steps below to perform scraping on your local machine:
Step 1
Set the EXEC_PLATFORM environment variable to local by triggering the export EXEC_PLATFORM=local command on the terminal.
Step 2
The Chrome browser is invoked in headless mode. It is recommended to install Chrome on your machine before you proceed to Step (4).
Step 3
Trigger the make clean command on the terminal to remove the __pycache__ folder(s) and .pyc files
Step 4
Trigger the make scrap-using-pytest command on the terminal to scrape content from the above-mentioned websites
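For comparison, here is a minimal pytest-style sketch that uses a fixture for driver setup and teardown; the fixture name, test name, and assertion are illustrative only:

```python
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Headless Chrome for non-interactive scraping runs
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_scrape_page_title(driver):
    driver.get("https://ecommerce-playground.lambdatest.io/")
    # Real tests scrape product/video details; this only checks the page loaded
    assert driver.title
```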
Beautiful Soup is a Python library that is primarily used for screen scraping (or web scraping). More information about the library is available on the Beautiful Soup homepage.
The Beautiful Soup (bs4) library is already installed as a part of the prerequisite steps, so it is safe to proceed with scraping using Beautiful Soup. The Scraping Club Infinite Scroll website has infinitely scrolling pages; Selenium is used to scroll to the end of the page so that all the items on the page can be scraped with the said libraries.
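To illustrate the approach, here is a minimal sketch of scrolling an infinite-scroll page with Selenium and then parsing the fully loaded DOM with Beautiful Soup; the URL and the element tag used for item names are assumptions and may differ from the site's actual markup:

```python
import time

from bs4 import BeautifulSoup
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
driver.get("https://scrapingclub.com/exercise/list_infinite_scroll/")

# Keep scrolling until the page height stops growing (i.e., no more items load)
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # crude wait for lazy-loaded items; explicit waits are more robust
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

# Hand the fully rendered page source over to Beautiful Soup for parsing
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

# NOTE: the 'h4' tag below is illustrative; inspect the page for the real markup
items = [h4.get_text(strip=True) for h4 in soup.find_all("h4")]
print(f"Scraped {len(items)} item names")
```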
The following websites are used for demonstration:
Follow the steps below to perform web scraping using Beautiful Soup (bs4):
Step 1
Set the EXEC_PLATFORM environment variable to local by triggering the export EXEC_PLATFORM=local command on the terminal.
Step 2
Trigger the make scrap-using-beautiful-soup command on the terminal to scrape content from the above-mentioned websites
As seen from the above screenshots, content on pages (1) through (5) of the LambdaTest E-Commerce Playground is successfully displayed on the console.
Also, all 60 items on the Scraping Club Infinite Scroll website are scraped without any issues.
Note: As mentioned earlier, there could be cases where YouTube scraping might fail on the cloud grid (particularly when there are a number of attempts to scrape the content). Since cookies and other settings are cleared (or sanitized) after every test session, YouTube might treat genuine web scraping as a bot attack! In such cases, you might come across the following page, where cookie consent has to be given by clicking on the "Accept all" button.
You can find more information in this insightful Stack Overflow thread.
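If you hit the consent interstitial, one hedged workaround is to detect and click the consent button before scraping; a rough sketch is shown below (the helper name and the button locator are assumptions based on the page above and may need adjusting as YouTube's markup changes):

```python
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def accept_youtube_consent(driver, timeout=5):
    """Click the 'Accept all' button on YouTube's consent page, if it shows up."""
    try:
        button = WebDriverWait(driver, timeout).until(
            EC.element_to_be_clickable(
                # Locator is illustrative; YouTube's consent markup changes frequently
                (By.XPATH, "//button[.//span[text()='Accept all']]")
            )
        )
        button.click()
    except TimeoutException:
        # No consent page was shown; carry on with scraping
        pass
```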
Since we are using the LambdaTest Selenium Grid for test execution, it is recommended to create an account on LambdaTest before proceeding with the test execution. Procure the LambdaTest User Name and Access Key by navigating to the LambdaTest Account Page.
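For reference, here is a rough sketch of how a session on the LambdaTest grid can be created with Selenium's Remote WebDriver; the environment variable names (LT_USERNAME, LT_ACCESS_KEY), the LT:Options fields, and the hub URL shown here are illustrative and should be checked against the LambdaTest capability generator:

```python
import os

from selenium import webdriver

# Credentials exported as environment variables (names here are illustrative)
username = os.getenv("LT_USERNAME")
access_key = os.getenv("LT_ACCESS_KEY")

options = webdriver.ChromeOptions()
options.set_capability(
    "LT:Options",
    {
        "platformName": "Windows 10",
        "build": "Web Scraping with Python",
        "name": "Scraping demo",
    },
)

# Remote session on the LambdaTest hub (URL format per LambdaTest docs)
driver = webdriver.Remote(
    command_executor=f"https://{username}:{access_key}@hub.lambdatest.com/wd/hub",
    options=options,
)
driver.get("https://ecommerce-playground.lambdatest.io/")
print("Page title:", driver.title)
driver.quit()
```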
The following websites are used for demonstration:
Follow the steps below to perform scraping on the LambdaTest cloud grid:
Step 1
Set the EXEC_PLATFORM environment variable to cloud by triggering the export EXEC_PLATFORM=cloud command on the terminal.
Step 2
Trigger the make clean command on the terminal to remove the __pycache__ folder(s) and .pyc files
Step 3
Trigger the make scrap-using-pyunit command on the terminal to scrape content from the above-mentioned websites
As seen above, the content from the LambdaTest YouTube channel and the LambdaTest e-commerce playground is scraped successfully! You can find the status of test execution in the LambdaTest Automation Dashboard.
As seen above, the status of test execution is "Completed". Since the browser is instantiated in headless mode, the video recording is not available on the dashboard.
The following websites are used for demonstration:
Follow the steps below to perform scraping on the LambdaTest cloud grid:
Step 1
Set the EXEC_PLATFORM environment variable to cloud by triggering the export EXEC_PLATFORM=cloud command on the terminal.
Step 2
Trigger the make clean command on the terminal to remove the __pycache__ folder(s) and .pyc files
Step 3
Trigger the make scrap-using-pytest command on the terminal to scrape content from the above-mentioned websites
As seen above, the content from the LambdaTest YouTube channel and the LambdaTest e-commerce playground is scraped successfully! You can find the status of test execution in the LambdaTest Automation Dashboard.
As seen above, the status of test execution is "Completed". Since the browser is instantiated in headless mode, the video recording is not available on the dashboard.
Feel free to fork the repo and contribute to make it better! Email me at himanshu[dot]sheth[at]gmail[dot]com for any queries, or ping me on the following social media sites:
LinkedIn: @hjsblogger
Twitter: @hjsblogger