Create Your Own Web Scraper Using Node.js
Want to make your own scraper that can scrape data from any website and return it in JSON format so you can use it anywhere you like? If so, you are in the right place.
In this tip, I will show you how to scrape any website for the data you want using Node.js, and how to obtain that data in JSON format so it can be consumed elsewhere, e.g., by an app that runs on live data from the internet.
I will be using Windows 10 x64 and Visual Studio 2015 for this tip, and will scrape a news website:
- First of all, set up the IDE: go to https://nodejs.org/en/download/ and download the Node.js pre-built installer. For me, it is the Windows Installer 64-bit.
- After installing it, open Visual Studio and create a new project: Templates > JavaScript > Node.js > Basic Node.js Express 4 Application.
- Now add two packages to the npm folder, i.e. ‘request’ and ‘cheerio’.
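If you prefer to declare the packages rather than add them through Visual Studio, they can also go in package.json; the version ranges below are only an illustration, not prescribed versions:

```json
{
  "dependencies": {
    "express": "~4.x",
    "request": "^2.x",
    "cheerio": "^0.x"
  }
}
```

After editing package.json, restoring packages (or running npm install) pulls them in.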
- Uninstall ‘jade’ (right-click on it in the npm folder), as we don’t need it now; I have to host my JSON on the Azure cloud service, and jade throws an exception there. If you want to consume the JSON directly in your application, or host it using another service, you don’t have to uninstall jade.
- Now go to app.js and comment out lines 14 and 15, as we are not using ‘Views’.
- Also comment out ‘app.use('/', routes);’.
- Change ‘app.use('/users', users);’ to ‘app.use('/', users);’.
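After these edits, the relevant excerpt of app.js should look roughly like this (a sketch; the exact line positions can differ slightly between template versions):

```javascript
// view engine setup -- commented out, since we serve JSON, not views
// app.set('views', path.join(__dirname, 'views'));
// app.set('view engine', 'jade');

// route registration
// app.use('/', routes);   // commented out
app.use('/', users);       // was: app.use('/users', users);
```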
- Now go to users.js, where we will do the main work. First of all, require the ‘cheerio’ and ‘request’ modules.
- Create a variable to hold the URL of the page to scrape:
var url = "http://www.thenews.com.pk/CitySubIndex.aspx?ID=14";
- Modify the router.get() function as follows:
router.get('/', function (req, res) {
    request(url, function (error, response, body) {
        if (!error && response.statusCode === 200) {
            var data = scrapeDataFromHtml(body);
            res.send(data);
        } else {
            // log the failure instead of sending a response
            console.log(error);
        }
    });
});
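The error-first callback guard above can be exercised without any network access. In this sketch, fakeRequest is a stand-in for the real ‘request’ module (an assumption for illustration only); it uses the same (error, response, body) callback signature:

```javascript
// Network-free sketch of the error-first callback guard used above.
// fakeRequest is a hypothetical stand-in for the 'request' module.
function fakeRequest(url, callback) {
  callback(null, { statusCode: 200 }, "<html>ok</html>");
}

var result = null;
fakeRequest("http://example.com", function (error, response, body) {
  if (!error && response.statusCode === 200) {
    result = body;        // only use the body on success
  } else {
    console.error(error); // otherwise log the failure
  }
});

console.log(result); // prints "<html>ok</html>"
```

The key point is that the success branch and the error branch are mutually exclusive; the original code logged `error` unconditionally, which prints `null` on every successful request.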
- Here comes the main and difficult part: writing the scraping logic itself. You have to customize this function according to your website and the data you want to fetch. Let’s open the website in a browser and develop the logic for it.
- I want to scrape the following data: the news headline, its description, and the link to the full story. This data changes dynamically, and I want to fetch the latest version.
- To fetch this data, I have to study the page’s DOM so I can write jQuery-style selectors to reach it easily.
- I made a DOM tree so I can write the logic to traverse it easily.
- The text in red marks the nodes I have to reach in a loop to access the data.
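For reference, the traversal in the function below assumes markup shaped roughly like this; it is a hypothetical reconstruction implied by the selectors, not the site’s actual HTML:

```html
<!-- Hypothetical structure; the real page may differ. -->
<div class="DetailPageIndexBelowContainer"> <!-- matched by the class selector -->
  <div>Headline</div>                       <!-- children().first() -->
  <div>
    <a href="full-news-link.aspx">          <!-- children().children() -->
      <span></span>
      <span>Description</span>              <!-- children().children().children().last() -->
    </a>
  </div>
</div>
```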
- I will write a function named scrapeDataFromHtml as follows:
var scrapeDataFromHtml = function (html) {
    var data = {};
    var $ = cheerio.load(html);
    var j = 1;
    // Each news item sits inside a div with this class.
    $('div.DetailPageIndexBelowContainer').each(function () {
        var a = $(this);
        var fullNewsLink = a.children().children().attr("href");
        var headline = a.children().first().text().trim();
        var description = a.children().children().children().last().text();
        var metadata = {
            headline: headline,
            description: description,
            fullNewsLink: fullNewsLink
        };
        data[j] = metadata;
        j++;
    });
    return data;
};
- This function reaches each ‘div’ with the class ‘DetailPageIndexBelowContainer’ and traverses its DOM to fetch the ‘fullNewsLink’, ‘headline’ and ‘description’. It stores these values in an object called ‘metadata’, and on each iteration adds that object to the ‘data’ object, so in the end I can return ‘data’ as JSON. If you only want one piece of data from a website, you don’t need the loop or the outer object; you can traverse directly to it and return the single value.
- Now run it and check the output.
- And yes! It runs perfectly and returns the required data in JSON format.
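The shape of the returned JSON can be illustrated without any network access. The sketch below rebuilds the same keyed structure from two hypothetical news items standing in for what cheerio would extract:

```javascript
// Illustrative only: hypothetical items in place of real scraped values.
var items = [
  { headline: "Headline A", description: "First story", fullNewsLink: "a.aspx" },
  { headline: "Headline B", description: "Second story", fullNewsLink: "b.aspx" }
];

var data = {};
var j = 1;
items.forEach(function (item) {
  data[j] = item; // same accumulation pattern as scrapeDataFromHtml
  j++;
});

console.log(JSON.stringify(data, null, 2));
// The keys are "1", "2", ... -- a plain object, serialized as JSON.
```

Note that the result is a plain object keyed by "1", "2", and so on, not a true JavaScript array; if you prefer a JSON array, push each metadata object into `[]` instead.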
- PS: If the site I am using as an example removes the page, changes the layout, or changes the CSS classes or their names, we will no longer get the desired result, and you will have to write new logic. But I have explained the approach and how to traverse the DOM tree of any website.