# JavaScript Web Scraping

## Required software

There are only two pieces of software that will be needed:

1. Node.js (which comes with npm, the package manager for Node.js)
2. Any code editor

## Set up Node.js project

Before writing any code to scrape the web using Node.js, create a folder where the JavaScript files will be stored. These files will contain all the code required for web scraping.

Once the folder is created, navigate to it and run the initialization command:

```bash
npm init -y
```

## Installing Node.js packages

This tutorial uses the Axios package to send HTTP requests. Install it from the terminal:

```bash
npm install axios
```

Cheerio and json2csv will also be used later in this tutorial. All three packages can be installed with a single command:

```bash
npm install axios cheerio json2csv
```
## JavaScript web scraping – a practical example

One of the most common scenarios of web scraping with JavaScript is scraping e-commerce stores. A good place to start is the fictional book store http://books.toscrape.com/. This site looks and behaves very much like a real store, but it exists purely for learning web scraping.

### Creating selectors
The first step before beginning JavaScript web scraping is creating selectors. The purpose of selectors is to identify the specific element to be queried.

Begin by opening the URL http://books.toscrape.com/catalogue/category/books/mystery_3/index.html in Chrome or Firefox. Once the page loads, right-click on the title of the genre, Mystery, and select Inspect. This should open the Developer Tools with `<h1>Mystery</h1>` selected in the Elements tab.

The simplest way to create a selector is to right-click this `h1` tag in the Developer Tools, point to Copy, and then click Copy Selector. The selector created this way is valid and works well. The only problem is that this method creates a long, automatically generated selector, which makes the code difficult to understand and maintain.
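
For illustration, a selector copied this way looks roughly like the following (a hypothetical example; the exact string depends on the browser and the page structure):

```css
#default > div > div > div > div > div.page-header.action > h1
```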

After spending some time with the page, it becomes clear that there is only one `h1` tag on it, which makes it very easy to create a very short selector:

```css
h1
```
## Scraping the genre
The first step is to define the constants that will hold a reference to Axios and Cheerio.
```javascript
const cheerio = require("cheerio");
const axios = require("axios");
```

The address of the page being scraped is saved in the variable `url` for readability. This particular site does not need any special headers, which makes it easier to learn.

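
The lines that define the page address and fetch it appear in the complete code later in this section; as a minimal sketch (assuming the constant is simply named `url` and the page is fetched with `axios.get()` inside an async function), this step looks like the following:

```javascript
// hypothetical variable names; axios is the constant required above
const url = "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html";

async function getGenre() {
  const response = await axios.get(url); // no special headers are needed for this site
  // the response will be parsed with Cheerio in the next step
}
```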
Axios supports both the Promise pattern and the async-await pattern. This tutorial focuses on the async-await pattern. The response has a few attributes, such as headers and data. The HTML that we want is in the data attribute, and it can be loaded into an object that can be queried using the cheerio.load() method.
```javascript
const $ = cheerio.load(response.data);
```
Cheerio’s `load()` method returns a reference to the document, which can be stored in a constant. This constant can have any name. To make the code look and feel more like jQuery web scraping code, a `$` can be used instead of a name.

Finding a specific element within the document is as easy as writing `$(selector)`. In this particular case, it would be `$("h1")`.

The method `text()` will be used everywhere when writing web scraping code with JavaScript, as it can be used to get the text inside any element. This can be extracted and saved in a local variable.
```javascript
const genre = $("h1").text();
```
Finally, `console.log()` will simply print the variable value on the console.
```javascript
console.log(genre);
```
To handle errors, the code will be surrounded by a try-catch block. Note that it is a good practice to use console.error for errors and console.log for other messages.

Here is the complete code put together. Save it as `genre.js` in the folder created earlier, where the command `npm init` was run.

```javascript
const cheerio = require("cheerio");
const axios = require("axios");
const url = "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html";

async function getGenre() {
  try {
    const response = await axios.get(url); // fetch the page HTML
    const $ = cheerio.load(response.data); // load it into Cheerio
    const genre = $("h1").text(); // extract the genre name
    console.log(genre);
  } catch (error) {
    console.error(error);
  }
}

getGenre();
```
The final step is to run this JavaScript web scraping code using Node.js. Open the terminal and run this command:
```bash
node genre.js
```
The output of this code is going to be the genre name:
```
Mystery
```
Congratulations! This was the first program that uses JavaScript and Node.js for web scraping. Time to do more complex things!
## Scraping book listings
Let’s try scraping book listings. Here is the same page again, with the book listing for the Mystery genre: http://books.toscrape.com/catalogue/category/books/mystery_3/index.html

The first step is to analyze the page and understand the HTML structure. Load this page in Chrome, press F12, and examine the elements.
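
For orientation, each book entry is marked up roughly like this (a simplified, partly hypothetical sketch with placeholder values, not a verbatim copy of the page):

```html
<article class="product_pod">
  <h3>
    <a href="some-book/index.html" title="Full Book Title">Shortened title…</a>
  </h3>
  <p class="price_color">£10.00</p>
</article>
```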
Each book is wrapped in an `<article>` tag. This means that all the books can be extracted and a loop can be run to extract individual book details. If the HTML is parsed with Cheerio, the jQuery-style function `each()` can be used to run a loop. Let’s start with extracting the titles of all the books. Here is the code:
```javascript
const books = $("article"); // selector to get all books on the page
books.each(function () {
  const title = $(this).find("h3 a").attr("title"); // full title from the link's title attribute
  console.log(title);
});
```
As is evident from the above code, the extracted details need to be saved somewhere inside the loop. The best idea is to store these values in an array. In fact, other attributes of the books can be extracted and stored as JSON objects in an array.
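
As a minimal sketch of that idea (assuming the array is named `books_data` and that, as an example, the price is read from the element with the `price_color` class), the loop could look like this:

```javascript
const books_data = []; // one object per book

// "books" is the $("article") selection from the previous snippet
books.each(function () {
  const title = $(this).find("h3 a").attr("title"); // full title from the link's title attribute
  const price = $(this).find(".price_color").text(); // assumed price selector
  books_data.push({ title, price }); // store each book as a JSON-like object
});
```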
Here is the complete code. Create a new file, paste this code, and save it as `books.js` in the same folder where `npm init` was run:
```javascript
const cheerio = require("cheerio");
const axios = require("axios");

const mystery = "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html";
const books_data = []; // array to hold details of every book

async function getBooks(url) {
  try {
    const response = await axios.get(url);
    const $ = cheerio.load(response.data);

    const books = $("article"); // selector to get all books on the page
    books.each(function () {
      const title = $(this).find("h3 a").attr("title");
      const price = $(this).find(".price_color").text();
      books_data.push({ title, price });
    });

    console.log(books_data);
  } catch (err) {
    console.error(err);
  }
}

getBooks(mystery);
```
Run this file using Node.js from the terminal:
```bash
node books.js
```
This should print the array of books on the console. The only limitation of this JavaScript code is that it scrapes only one page. The next section will cover how pagination can be handled.
## Handling pagination
Listings like this are usually spread over multiple pages. While every site may have its own way of paginating, the most common approach is a Next button on every page. The exception is the last page, which will not have a link to the next page.

The pagination logic for these situations is rather simple. Create a selector for the next page link. If the selector results in a value, take the href attribute value and call the `getBooks` function with this new URL recursively.

Immediately after the `books.each()` loop, add these lines:
```javascript
if ($(".next a").length > 0) {
  const next_page = baseUrl + $(".next a").attr("href"); // converting to an absolute URL
  getBooks(next_page); // recursive call to the same function with the new URL
}
```
Note that the href returned above is a relative URL. To convert it into an absolute URL, the simplest way is to concatenate a fixed part to it. This fixed part of the URL is stored in the `baseUrl` variable.
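
For example, `baseUrl` could be defined near the top of the script like this (an assumed value; it should be whatever fixed part of the listing URLs the relative hrefs are appended to):

```javascript
// assumed fixed part of the category URL; relative hrefs such as "page-2.html" are appended to it
const baseUrl = "http://books.toscrape.com/catalogue/category/books/mystery_3/";
```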
Once the scraper reaches the last page, the Next button will not be there and the recursive call will stop. At this point, the array will have book information from all the pages. The final step of web scraping with Node.js is to save the data.
## Saving scraped data to CSV
If web scraping with JavaScript is easy, saving data into a CSV file is even easier. It can be done using two packages: fs and json2csv. The file system is handled by the fs package, which is built into Node.js. json2csv needs to be installed using the npm install json2csv command:
```bash
npm install json2csv
```
After the installation, create a constant that will store this package’s Parser.
```javascript
const j2cp = require("json2csv").Parser;
```
Access to the file system is needed to write the file to disk. For this, initialize the `fs` package.
```javascript
const fs = require("fs");
```
Find the line in the code where the array with all the scraped data is available, and then insert the following lines of code to create the CSV file.
```javascript
const parser = new j2cp();
const csv = parser.parse(books_data); // JSON to CSV in memory
fs.writeFileSync("./books.csv", csv); // CSV is now written to disk
```
Here is the complete script put together. It can be saved as a .js file in the Node.js project folder. Once it is run using the node command in the terminal, data from all the pages will be available in the books.csv file.
```javascript
const fs = require("fs");
const j2cp = require("json2csv").Parser;
const axios = require("axios");
const cheerio = require("cheerio");

const mystery = "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html";
const baseUrl = "http://books.toscrape.com/catalogue/category/books/mystery_3/";
const books_data = []; // array to hold details of every book

async function getBooks(url) {
  try {
    const response = await axios.get(url);
    const $ = cheerio.load(response.data);

    const books = $("article");
    books.each(function () {
      const title = $(this).find("h3 a").attr("title");
      const price = $(this).find(".price_color").text();
      books_data.push({ title, price });
    });

    if ($(".next a").length > 0) {
      const next_page = baseUrl + $(".next a").attr("href"); // converting to an absolute URL
      getBooks(next_page); // recursive call with the next page's URL
    } else {
      // last page reached; all books are in the array, so write the CSV
      const parser = new j2cp();
      const csv = parser.parse(books_data); // JSON to CSV in memory
      fs.writeFileSync("./books.csv", csv); // CSV is now written to disk
    }
  } catch (err) {
    console.error(err);
  }
}

getBooks(mystery);
```