Introduction
What is a WebSpider
A WebSpider or crawler is an automated program that follows links on websites and calls a WebRobot to handle the contents of each link.
What is a WebRobot
A WebRobot is a program that processes the content found through a link. It can be used for indexing a page or for extracting useful information based on a predefined query; common examples are link checkers, e-mail address extractors, multimedia extractors and update watchers.
Background
I had a recent contract to build a web page link checker. This component had to be able to check links that were stored in a database as well as links found on a website, both through the local file system and over the Internet.
This article explains the WebRobot, the WebSpider and how to enhance the WebRobot through specialized content handlers. The code shown has some superfluous code removed, such as try blocks, variable initialization and minor methods.
Class overview
The WebRobot is made up of two classes: WebPageState, which represents a URI and its current state in the process chain, and an implementation of IWebPageProcessor, which performs the actual reading of the URI, calls the content handlers and deals with page errors.
The WebSpider has only one class, WebSpider. It maintains a list of pending/processed URIs held as WebPageState objects and runs the WebPageProcessor against each WebPageState to extract links to other pages and to test whether the URIs are valid.
Using the code - WebRobot
Web page processing is handled by an object that implements IWebPageProcessor. The Process method expects to receive a WebPageState; this is updated during page processing, and if everything succeeds the method returns true. Any number of content handlers can also be called after the page has been read, by assigning WebPageContentDelegate delegates to the processor.
public delegate void WebPageContentDelegate( WebPageState state );

public interface IWebPageProcessor
{
    bool Process( WebPageState state );

    WebPageContentDelegate ContentHandler { get; set; }
}
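The processor can also be used on its own, without the spider. A minimal sketch (the ReportSize handler and the URI below are illustrative only, not part of the library):

IWebPageProcessor processor = new WebPageProcessor( );

// Attach a simple handler that reports how much content was read.
processor.ContentHandler += new WebPageContentDelegate( ReportSize );

WebPageState state = new WebPageState( "http://www.myhost.com/index.html" );

if ( processor.Process( state ) )
{
    Console.WriteLine( "{0} returned {1}", state.Uri, state.StatusCode );
}

private void ReportSize( WebPageState state )
{
    Console.WriteLine( "Read {0} characters", state.Content.Length );
}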
The WebPageState object holds state and content information for the URI being processed. All properties of this object are read/write except for the URI, which must be passed in through the constructor.
public class WebPageState
{
    private WebPageState( ) {}

    public WebPageState( Uri uri )
    {
        m_uri = uri;
    }

    public WebPageState( string uri )
        : this( new Uri( uri ) ) { }

    Uri     m_uri;                           // URI to be processed
    string  m_content;                       // content returned by the processor
    string  m_processInstructions = "";      // free-form hints for content handlers
    bool    m_processStarted      = false;   // processing has begun
    bool    m_processSuccessfull  = false;   // processing completed without error
    string  m_statusCode;                    // HTTP status code, or "OK" for files
    string  m_statusDescription;             // human-readable status
}
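The public read/write properties that wrap these fields have been trimmed from the listing; they are plain wrappers along the lines of the sketch below (only two are shown here, the rest follow the same pattern, and Uri is read-only as noted above):

public Uri Uri
{
    get { return m_uri; }
}

public string Content
{
    get { return m_content; }
    set { m_content = value; }
}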
The WebPageProcessor is an implementation of IWebPageProcessor that does the actual work of reading in the content, handling error codes/exceptions and calling the content handlers. The WebPageProcessor may be replaced or extended to provide additional functionality, though adding a content handler is generally a better option.
public class WebPageProcessor : IWebPageProcessor
{
    public bool Process( WebPageState state )
    {
        state.ProcessStarted     = true;
        state.ProcessSuccessfull = false;

        // WebRequest.Create handles both "http://" and "file://" URIs.
        WebRequest  req = WebRequest.Create( state.Uri );
        WebResponse res = null;

        try
        {
            res = req.GetResponse( );

            if ( res is HttpWebResponse )
            {
                state.StatusCode =
                    ((HttpWebResponse)res).StatusCode.ToString( );
                state.StatusDescription =
                    ((HttpWebResponse)res).StatusDescription;
            }
            if ( res is FileWebResponse )
            {
                state.StatusCode        = "OK";
                state.StatusDescription = "OK";
            }

            if ( state.StatusCode.Equals( "OK" ) )
            {
                // Read the content and hand it to any registered content handlers.
                StreamReader sr = new StreamReader(
                    res.GetResponseStream( ) );

                state.Content = sr.ReadToEnd( );

                if ( ContentHandler != null )
                {
                    ContentHandler( state );
                }
            }
            state.ProcessSuccessfull = true;
        }
        catch( Exception ex )
        {
            HandleException( ex, state );
        }
        finally
        {
            if ( res != null )
            {
                res.Close( );
            }
        }
        return state.ProcessSuccessfull;
    }

    private WebPageContentDelegate m_contentHandler = null;

    public WebPageContentDelegate ContentHandler
    {
        get { return m_contentHandler; }
        set { m_contentHandler = value; }
    }
}
There are additional private methods in the WebPageProcessor to handle HTTP error codes, file-not-found errors when dealing with the "file://" scheme, and more severe exceptions.
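The bodies of those methods are not shown here, but HandleException essentially maps the failure back onto the WebPageState. A rough sketch only, assuming System.IO for FileNotFoundException and using illustrative status strings:

private void HandleException( Exception ex, WebPageState state )
{
    WebException webEx = ex as WebException;

    if ( webEx != null && webEx.Response is HttpWebResponse )
    {
        // HTTP errors (404, 500, ...) still carry a response we can report on.
        HttpWebResponse res = (HttpWebResponse)webEx.Response;

        state.StatusCode        = res.StatusCode.ToString( );
        state.StatusDescription = res.StatusDescription;
    }
    else if ( ex is FileNotFoundException )
    {
        // Missing files when crawling the "file://" scheme.
        state.StatusCode        = "NotFound";
        state.StatusDescription = ex.Message;
    }
    else
    {
        state.StatusCode        = "Exception";
        state.StatusDescription = ex.Message;
    }
}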
Using the code - WebSpider
The WebSpider class is really just a harness for calling the WebRobot in a particular way. It provides the robot with a specialized content handler for crawling through web links, and it maintains a list of both pending pages and already visited pages. The current WebSpider is designed to start from a given URI and to limit full page processing to a base path.
public WebSpider(
    string startUri
    ) : this ( startUri, -1 ) { }

public WebSpider(
    string startUri,
    int    uriProcessedCountMax
    ) : this ( startUri, "", uriProcessedCountMax,
               false, new WebPageProcessor( ) ) { }

public WebSpider(
    string startUri,
    string baseUri,
    int    uriProcessedCountMax
    ) : this ( startUri, baseUri, uriProcessedCountMax,
               false, new WebPageProcessor( ) ) { }

public WebSpider(
    string            startUri,
    string            baseUri,
    int               uriProcessedCountMax,
    bool              keepWebContent,
    IWebPageProcessor webPageProcessor )
{
    // Field assignments and collection setup omitted from the listing.
}
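The body of the final constructor has been trimmed from the listing above; it essentially just stores its arguments and creates the pending queue and the visited-page table, roughly along these lines (a sketch only; the field names other than m_webPagesPending, m_webPages and m_webPageProcessor are assumed, as is the fallback from an empty baseUri to the start URI):

// Sketch of the trimmed constructor body; most field names are assumed.
m_startUri             = new Uri( startUri );
m_baseUri              = ( baseUri.Length > 0 )
                         ? new Uri( baseUri ) : m_startUri;
m_uriProcessedCountMax = uriProcessedCountMax;
m_keepWebContent       = keepWebContent;
m_webPageProcessor     = webPageProcessor;

m_webPagesPending      = new Queue( );      // System.Collections
m_webPages             = new Hashtable( );  // visited URIs -> WebPageState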
Why is there a base path limit?
Since there are trillions of pages on the Internet, this spider checks every link it finds to see whether it is valid, but it only adds new links to the pending queue if they belong to the initial website or a sub-path of that website.
So if we start from www.myhost.com/index.html, and this page has links to www.myhost.com/pageWithSomeLinks.html and www.google.com/pageWithManyLinks.html, then the WebRobot will be called against both links to check whether they are valid, but only new links found within www.myhost.com/pageWithSomeLinks.html will be added to the queue.
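In terms of the AddWebPage code shown further down, that decision is just a prefix test of the absolute URI against the base URI:

// Both URIs are checked, but only the one under the base path is marked
// for link extraction (excerpt from AddWebPage below).
if ( uri.AbsoluteUri.StartsWith( BaseUri.AbsoluteUri ) )
{
    state.ProcessInstructions += "Handle Links";
}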
Call the Execute method to start the spider. This method adds the startUri to a Queue of pending pages and then calls the IWebPageProcessor until there are no pages left to process.
public void Execute( )
{
    AddWebPage( StartUri, StartUri.AbsoluteUri );

    // Keep going until the queue is empty or the page limit is reached
    // (a limit of -1 means unlimited).
    while ( WebPagesPending.Count > 0 &&
            ( UriProcessedCountMax == -1 ||
              UriProcessedCount < UriProcessedCountMax ) )
    {
        WebPageState state = (WebPageState)m_webPagesPending.Dequeue( );

        m_webPageProcessor.Process( state );

        if ( ! KeepWebContent )
        {
            // Release the page content once the handlers have run, to save memory.
            state.Content = null;
        }
        UriProcessedCount++;
    }
}
A web page is only added to the queue if the URI (excluding any anchor) points to a path or a valid page (e.g. .html, .aspx, .jsp, etc.) and has not been seen before.
private bool AddWebPage( Uri baseUri, string newUri )
{
    // Strip any anchor ("#...") before building the absolute URI.
    Uri uri = new Uri( baseUri,
        StrUtil.LeftIndexOf( newUri, "#" ) );

    if ( ! ValidPage( uri.LocalPath ) || m_webPages.Contains( uri ) )
    {
        return false;
    }

    WebPageState state = new WebPageState( uri );

    // Only follow links on pages that live under the base path.
    if ( uri.AbsoluteUri.StartsWith( BaseUri.AbsoluteUri ) )
    {
        state.ProcessInstructions += "Handle Links";
    }

    m_webPagesPending.Enqueue( state );
    m_webPages.Add( uri, state );

    return true;
}
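ValidPage and the StrUtil helper are not shown in this article. A minimal sketch of the extension test, assuming System.IO.Path and an illustrative extension list:

private bool ValidPage( string localPath )
{
    // Directory-style paths have no extension and are always crawlable;
    // the extension list here is illustrative only.
    string ext = Path.GetExtension( localPath ).ToLower( );

    return ext == ""      || ext == ".html" || ext == ".htm" ||
           ext == ".aspx" || ext == ".asp"  || ext == ".jsp";
}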
Examples of running the spider
The following code shows three examples of calling the WebSpider; the paths shown are examples only and don't represent the true structure of this website. Note: the Bondi Beer website used in the example is a site that I built using my own SiteGenerator. This easy-to-use program produces static websites from dynamic content such as proprietary data files, XML/XSLT files, databases, RSS feeds and more.
// 1. Crawl the whole site, but stop after 100 pages.
WebSpider spider = new WebSpider( "http://www.bondibeer.com.au/", 100 );
spider.Execute( );

// 2. Start from a page inside the site and only follow links under /products/.
spider = new WebSpider(
    "http://www.bondibeer.com.au/products/somepub/index.html",
    "http://www.bondibeer.com.au/products/", -1 );
spider.Execute( );

// 3. Attach extra content handlers before running the spider.
spider = new WebSpider( "http://www.bondibeer.com.au/" );
spider.WebPageProcessor.ContentHandler +=
    new WebPageContentDelegate( FunnyJokes );
spider.WebPageProcessor.ContentHandler +=
    new WebPageContentDelegate( SexyWomen );
spider.Execute( );
private void FunnyJokes( WebPageState state )
{
    if( state.Content.IndexOf( "Funny Joke" ) > -1 )
    {
        // Do something with the page that contains a funny joke.
    }
}
private void SexyWomen( WebPageState state )
{
    Match  m = RegExUtil.GetMatchRegEx(
        RegularExpression.SrcExtractor, state.Content );
    string image;

    while( m.Success )
    {
        image = m.Groups[1].ToString( ).ToLower( );

        if ( image.IndexOf( "sexy" ) > -1 ||
             image.IndexOf( "women" ) > -1 )
        {
            DownloadImage( image );
        }
        m = m.NextMatch( );
    }
}
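If you would rather not depend on the RegExUtil and RegularExpression helpers that ship with this code, the same handler can be written against System.Text.RegularExpressions directly; the src-extraction pattern below is a simplified assumption, not the article's SrcExtractor:

private void SexyWomen( WebPageState state )
{
    // Simplified src="..." extractor; the real SrcExtractor pattern may differ.
    Regex srcExtractor = new Regex( "src\\s*=\\s*[\"']?([^\"' >]+)",
                                    RegexOptions.IgnoreCase );

    foreach ( Match m in srcExtractor.Matches( state.Content ) )
    {
        string image = m.Groups[1].ToString( ).ToLower( );

        if ( image.IndexOf( "sexy" ) > -1 || image.IndexOf( "women" ) > -1 )
        {
            DownloadImage( image );
        }
    }
}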
Conclusion
The WebSpider is flexible enough to be used in a variety of useful scenarios, and could be a powerful tool for data mining websites on the Internet and on intranets. I would like to hear how people have used this code.
Outstanding Issues
These issues are minor, but if anyone has any ideas then please share them.
- state.ProcessInstructions - This is really just a quick hack to provide instructions that the content handlers can use as they see fit. I am looking for a more elegant solution to this problem.
- Multithreaded spider - This project first started off as a multithreaded spider, but that soon fell by the wayside when I found that performance was much slower when using a thread per URI. The bottleneck seems to be in GetResponse, which does not appear to run well across multiple threads.
- Valid URI, but query data that returns a bad page - The current processor does not handle the scenario where the URI points to a valid page but the page returned by the web server is considered bad, e.g. http://www.validhost.com/validpage.html?opensubpage=invalidid. One idea to resolve this problem is to read the contents of the returned page and look for key pieces of information, but that technique is a little flaky.