Don't fear, it's just a web spider ;-)
Introduction
Today, while looking through some older code, I came across a set of classes I wrote at the beginning of this year for a customer project.
The classes implement a basic web spider (also called "web robot" or "web crawler") to grab web pages (including resources like images and CSS), download them locally and adjust any resource hyperlinks to point to the locally downloaded resources.
While this is not a full-featured article with the detailed explanations I usually like to write, I still want to put the code online with this short write-up. Maybe some readers can take ideas from this code and use it as a starting point for their own projects.
Overview
The classes allow for synchronous as well as asynchronous download of the web pages and support several options, such as the hyperlink depth to follow and proxy settings.
The downloaded resources get their own new file names, based on the hash code of the original URL. I did this for simplicity (for me as the programmer).
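The snippet below is only a rough sketch of that naming idea, not the library's actual code; the helper name BuildLocalFileName is invented, and the library may derive its file names slightly differently.

```csharp
using System;
using System.IO;

static class LocalNaming
{
    // Derive a local file name from the hash code of the original URL,
    // keeping the extension so the file type stays recognizable.
    public static string BuildLocalFileName(Uri url)
    {
        string extension = Path.GetExtension(url.AbsolutePath);

        // Format the hash code as hexadecimal to get a compact base name.
        return string.Format("{0:X8}{1}",
            url.AbsoluteUri.GetHashCode(), extension);
    }
}
```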
To parse a document, I am using the SGMLReader DLL from the GotDotNet website.
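For readers unfamiliar with SgmlReader, here is a small illustration (assumed usage, not code taken from the library) of the common pattern of loading real-world HTML into an XmlDocument through it:

```csharp
using System.IO;
using System.Xml;
using Sgml;

static class HtmlParsing
{
    public static XmlDocument ParseHtml(string html)
    {
        SgmlReader reader = new SgmlReader();
        reader.DocType = "HTML";                  // treat the input as HTML
        reader.CaseFolding = CaseFolding.ToLower; // normalize element names
        reader.InputStream = new StringReader(html);

        // SgmlReader derives from XmlReader, so it can feed an XmlDocument,
        // which can then be queried for hyperlinks and resource references.
        XmlDocument document = new XmlDocument();
        document.Load(reader);
        return document;
    }
}
```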
Also, since I didn't need them for the project, the library does not honor "robots.txt", throttle requests, or provide similar features.
Using the Code
The download for this article contains the library ("WebSpider") and a testing console application ("WebSpiderTest"). The test application is short and should be easy to understand.
Basically, you create an instance of the WebSiteDownloaderOptions class, configure several parameters, create an instance of the WebSiteDownloader class, optionally connect event handlers, and then tell the instance to start processing the given URL either synchronously or asynchronously.
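To make that flow a bit more concrete, here is a minimal sketch of those steps. The WebSiteDownloaderOptions and WebSiteDownloader class names come from the article; the property, event, and method names used here (BaseUri, MaximumLinkDepth, ProcessingProgress, Process) are assumptions and may differ from the actual library.

```csharp
using System;

class Program
{
    static void Main()
    {
        // 1. Create and configure the options.
        //    (BaseUri and MaximumLinkDepth are assumed property names.)
        WebSiteDownloaderOptions options = new WebSiteDownloaderOptions();
        options.BaseUri = new Uri("http://www.example.com");
        options.MaximumLinkDepth = 2;

        // 2. Create the downloader and optionally attach event handlers.
        //    (ProcessingProgress is an assumed event name.)
        WebSiteDownloader downloader = new WebSiteDownloader(options);
        downloader.ProcessingProgress += delegate(object sender, EventArgs e)
        {
            Console.WriteLine("Progress event received.");
        };

        // 3. Start processing, synchronously in this sketch.
        //    (Process is an assumed method name.)
        downloader.Process();
    }
}
```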
History
- 2007-09-17: Fixed several issues
- 2006-09-10: Initial release of the article