C#/VB - Automated WebSpider / WebRobot

15 Mar 2004
Build a flexible WebRobot and process an entire site using a WebSpider

Introduction

What is a WebSpider?

A WebSpider or crawler is an automated program that follows links on websites and calls a WebRobot to handle the contents of each link.

What is a WebRobot?

A WebRobot is a program that processes the content found through a link. A WebRobot can be used for indexing a page or for extracting useful information based on a predefined query; common examples are link checkers, e-mail address extractors, multimedia extractors and update watchers.

Background

I had a recent contract to build a web page link checker. This component had to be able to check links that were stored in a database as well as links found on a website, both through the local file system and over the Internet.

This article explains the WebRobot, the WebSpider and how to enhance the WebRobot through specialized content handlers. The code shown has some superfluous code removed, such as try blocks, variable initialization and minor methods.

Class overview

The classes that make up the WebRobot are: WebPageState, which represents a URI and its current state in the process chain, and an implementation of IWebPageProcessor, which performs the actual reading of the URI, calls the content handlers and deals with page errors.

The WebSpider has only one class, WebSpider, which maintains a list of pending/processed URIs in a collection of WebPageState objects and runs the WebPageProcessor against each WebPageState to extract links to other pages and to test whether the URIs are valid.

Using the code - WebRobot

Web page processing is handled by an object that implements IWebPageProcessor. The Process method expects to receive a WebPageState, which will be updated during page processing; if all is successful the method returns true. Any number of content handlers can also be called after the page has been read, by assigning WebPageContentDelegate delegates to the processor.

public delegate void WebPageContentDelegate( WebPageState state );

public interface IWebPageProcessor
{
   bool Process( WebPageState state );

   WebPageContentDelegate ContentHandler { get; set; }
}
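
To make the contract concrete, here is a minimal usage sketch (it relies on the WebPageProcessor and WebPageState classes described below; the CountCharacters handler and the example URI are illustrations only):

// Create a processor and attach a content handler; the handler fires
// after each page has been read successfully.
IWebPageProcessor processor = new WebPageProcessor( );
processor.ContentHandler += new WebPageContentDelegate( CountCharacters );

// Process a single page and inspect the outcome.
WebPageState state = new WebPageState( "http://www.myhost.com/index.html" );
bool ok = processor.Process( state );
Console.WriteLine( state.StatusCode + " - " + state.StatusDescription );

private void CountCharacters( WebPageState state )
{
   // The handler receives the populated state, including the page content.
   Console.WriteLine( "Read " + state.Content.Length +
     " characters from " + state.Uri );
}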

The WebPageState object holds state and content information for the URI being processed. All properties of this object are read/write except for the Uri, which must be passed in through the constructor.

public class WebPageState
{
   private WebPageState( ) {}

   public WebPageState( Uri uri )
   {
      m_uri             = uri;
   }

   public WebPageState( string uri )
      : this( new Uri( uri ) ) { }

   Uri      m_uri;                           // URI to be processed
   string   m_content;                       // Content of webpage
   string   m_processInstructions   = "";    // User defined instructions
                                             // for content handlers
   bool     m_processStarted        = false; // Becomes true when processing starts
   bool     m_processSuccessfull    = false; // Becomes true if process was successful
   string   m_statusCode;                    // HTTP status code
   string   m_statusDescription;             // HTTP status description, or exception message

   // Standard Getters/Setters....

}
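
The omitted getters/setters are plain wrappers around the fields above; for example (a sketch, following the property names used later by the processor):

// Sketch of two of the omitted properties: Uri is read-only (set via the
// constructor), while the rest are simple read/write wrappers.
public Uri Uri
{
   get { return m_uri; }
}

public string Content
{
   get { return m_content; }
   set { m_content = value; }
}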

The WebPageProcessor is an implementation of IWebPageProcessor that does the actual work of reading the content, handling error codes/exceptions and calling the content handlers. The WebPageProcessor may be replaced or extended to provide additional functionality, though adding a content handler is generally the better option.

public class WebPageProcessor : IWebPageProcessor
{
   public bool Process( WebPageState state )
   {
      state.ProcessStarted       = true;
      state.ProcessSuccessfull   = false;

      // Use WebRequest.Create to handle URI's for the
      // following schemes: file, http & https
      WebRequest  req = WebRequest.Create( state.Uri );
      WebResponse res = null;

      try
      {
         // Issue a response against the request. If any problems are going
         // to happen, they are likely to happen here in the form of an exception.
         res = req.GetResponse( );

         // If we reach here then everything is likely to be OK.
         if ( res is HttpWebResponse )
         {
            state.StatusCode        =
              ((HttpWebResponse)res).StatusCode.ToString( );
            state.StatusDescription =
              ((HttpWebResponse)res).StatusDescription;
         }
         if ( res is FileWebResponse )
         {
            state.StatusCode        = "OK";
            state.StatusDescription = "OK";
         }

         if ( state.StatusCode.Equals( "OK" ) )
         {
            // Read the contents into our state object
            // and fire the content handlers
            StreamReader sr = new StreamReader(
              res.GetResponseStream( ) );

            state.Content = sr.ReadToEnd( );

            if ( ContentHandler != null )
            {
               ContentHandler( state );
            }
         }

         state.ProcessSuccessfull = true;
      }
      catch( Exception ex )
      {
         HandleException( ex, state );
      }
      finally
      {
         if ( res != null )
         {
            res.Close( );
         }
      }

      return state.ProcessSuccessfull;
   }

   // Store any content handlers
   private WebPageContentDelegate m_contentHandler = null;

   public WebPageContentDelegate ContentHandler
   {
      get { return m_contentHandler; }
      set { m_contentHandler = value; }
   }
}

There are additional private methods in the WebPageProcessor to handle HTTP error codes and file-not-found errors when dealing with the "file://" scheme, as well as more severe exceptions.
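
These methods are not listed in the article; a minimal sketch of what the exception handler might look like is shown below. The method name HandleException matches the call in Process, but the body here is an assumption, not the original implementation:

// Assumed sketch of the exception handling called from Process().
// A WebException raised for an HTTP error code still carries the response
// (and its status), while a missing file or unreachable host typically
// surfaces as a WebException with no response attached.
private void HandleException( Exception ex, WebPageState state )
{
   WebException webEx = ex as WebException;

   if ( webEx != null && webEx.Response is HttpWebResponse )
   {
      HttpWebResponse res = (HttpWebResponse)webEx.Response;

      state.StatusCode        = res.StatusCode.ToString( );
      state.StatusDescription = res.StatusDescription;
   }
   else
   {
      // Severe or non-HTTP failures: record the exception message instead.
      state.StatusCode        = "Exception";
      state.StatusDescription = ex.Message;
   }
}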

Using the code - WebSpider

The WebSpider class is really just a harness for calling the WebRobot in a particular way. It provides the robot with a specialized content handler for crawling through web links and maintains a list of both pending pages and already visited pages. The current WebSpider is designed to start from a given URI and to limit full page processing to a base path.

// CONSTRUCTORS
//
// Process a URI until all links are checked; only add new links
// for processing if they point to the same host as specified in
// the startUri.
public WebSpider(
   string            startUri
   ) : this ( startUri, -1 ) { }

// As above, only limit the links to uriProcessedCountMax.
public WebSpider(
   string            startUri,
   int               uriProcessedCountMax
   ) : this ( startUri, "", uriProcessedCountMax,
     false, new WebPageProcessor( ) ) { }

// As above, except new links are only added if they are on the
// path specified by baseUri.
public WebSpider(
   string            startUri,
   string            baseUri,
   int               uriProcessedCountMax
   ) : this ( startUri, baseUri, uriProcessedCountMax,
     false, new WebPageProcessor( ) ) { }

// As above; you can also specify whether the web page content is
// kept after it is processed. By default this is false, to conserve
// memory when used on large sites.
public WebSpider(
   string            startUri,
   string            baseUri,
   int               uriProcessedCountMax,
   bool              keepWebContent,
   IWebPageProcessor webPageProcessor )
{
   // Initialize web spider ...

}
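
For example, the most specific constructor can be used to keep the page content after processing and to supply your own processor (a usage sketch; MyCustomProcessor stands in for any IWebPageProcessor implementation and is not part of the article's code):

// Usage sketch: keep page content in memory and plug in a custom processor.
WebSpider spider = new WebSpider(
   "http://www.myhost.com/index.html",   // where to start crawling
   "http://www.myhost.com/",             // only crawl deeper inside this path
   500,                                  // stop after 500 URIs
   true,                                 // keep page content after processing
   new MyCustomProcessor( ) );           // hypothetical IWebPageProcessor

spider.Execute( );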

Why is there a base path limit?

Since there are trillions of pages on the Internet, this spider will check every link that it finds to see if it is valid, but it will only add new links to the pending queue if those links belong within the context of the initial website or a sub-path of that website.

So if we start from www.myhost.com/index.html and this page has links to www.myhost.com/pageWithSomeLinks.html and www.google.com/pageWithManyLinks.html, then the WebRobot will be called against both links to check that they are valid, but it will only add the new links found within www.myhost.com/pageWithSomeLinks.html.
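
In code, this decision comes down to the prefix comparison used in AddWebPage (shown further below); an illustration with the example hosts above:

// Illustration of the base-path test used in AddWebPage.
Uri baseUri = new Uri( "http://www.myhost.com/" );

Uri internalLink = new Uri( "http://www.myhost.com/pageWithSomeLinks.html" );
Uri externalLink = new Uri( "http://www.google.com/pageWithManyLinks.html" );

// Both URIs are processed (checked for validity), but only the internal
// one is flagged for further link extraction.
bool crawlInternal = internalLink.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ); // true
bool crawlExternal = externalLink.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ); // false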

Call the Execute method to start the spider. This method will add the startUri to a Queue of pending pages and then call the IWebPageProcessor until there are no pages left to process.

public void Execute( )
{
   AddWebPage( StartUri, StartUri.AbsoluteUri );

   while ( WebPagesPending.Count > 0 &&
      ( UriProcessedCountMax == -1 || UriProcessedCount 
        < UriProcessedCountMax ) )
   {
      WebPageState state = (WebPageState)m_webPagesPending.Dequeue( );

      m_webPageProcessor.Process( state );

      if ( ! KeepWebContent )
      {
         state.Content = null;
      }

      UriProcessedCount++;
   }
}

A web page is only added to the queue if the URI (excluding any anchor) points to a path or a valid page (e.g. .html, .aspx, .jsp, etc.) and has not already been seen before. The ValidPage and StrUtil.LeftIndexOf helpers referenced here are sketched after the listing.

private bool AddWebPage( Uri baseUri, string newUri )
{
   Uri      uri      = new Uri( baseUri, 
     StrUtil.LeftIndexOf( newUri, "#" ) );

   if ( ! ValidPage( uri.LocalPath ) || m_webPages.Contains( uri ) )
   {
      return false;
   }
   WebPageState state = new WebPageState( uri );

   if ( uri.AbsoluteUri.StartsWith( BaseUri.AbsoluteUri ) )
   {
      state.ProcessInstructions += "Handle Links";
   }

   m_webPagesPending.Enqueue  ( state );
   m_webPages.Add             ( uri, state );

   return true;
}
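
Neither ValidPage nor StrUtil.LeftIndexOf is listed in the article. The sketches below show one plausible reading of them, based purely on how they are called above; they are assumptions, not the original helpers:

// Assumed sketch: LeftIndexOf returns the portion of the string to the
// left of the first occurrence of the marker, or the whole string if the
// marker is not present. Here it strips the "#anchor" part of a link.
public static string LeftIndexOf( string text, string marker )
{
   int pos = text.IndexOf( marker );
   return ( pos == -1 ) ? text : text.Substring( 0, pos );
}

// Assumed sketch: a page is "valid" for queuing if it looks like a path
// or ends in one of a known set of page extensions (System.IO.Path).
private bool ValidPage( string localPath )
{
   string ext = Path.GetExtension( localPath ).ToLower( );

   return ext == ""      || ext == ".html" || ext == ".htm" ||
          ext == ".aspx" || ext == ".asp"  || ext == ".jsp";
}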

Examples of running the spider

The following code shows three examples of calling the WebSpider. The paths shown are examples only; they don't represent the true structure of this website. Note: the Bondi Beer website in the example is a site that I built using my own SiteGenerator. This easy-to-use program produces static websites from dynamic content such as proprietary data files, XML/XSLT files, databases, RSS feeds and more.

/*
* Check for broken links found on this website, limit the spider to 100 pages.
*/
WebSpider spider = new WebSpider( "http://www.bondibeer.com.au/", 100 );
spider.Execute( );

/*
* Check for broken links found on this website; there is no limit on
* the number of pages, but it will not look for new links on pages
* that are not within the path http://www.bondibeer.com.au/products/.
* This means that the home page found at
* http://www.bondibeer.com.au/home.html may be checked for existence
* if it was called from somepub/index.html, but any links within that
* page will not be added to the pending list, as they are on a lower path.
*/
spider = new WebSpider(
      "http://www.bondibeer.com.au/products/somepub/index.html",
      "http://www.bondibeer.com.au/products/", -1 );
spider.Execute( );

/*
* Check for pages on the website that have funny 
* jokes or pictures of sexy women.
*/
spider = new WebSpider( "http://www.bondibeer.com.au/" );
spider.WebPageProcessor.ContentHandler += 
  new WebPageContentDelegate( FunnyJokes );
spider.WebPageProcessor.ContentHandler += 
  new WebPageContentDelegate( SexyWomen );
spider.Execute( );

private void FunnyJokes( WebPageState state )
{
   if( state.Content.IndexOf( "Funny Joke" ) > -1 )
   {
      // Do something
   }
}

private void SexyWomen( WebPageState state )
{
   Match       m     = RegExUtil.GetMatchRegEx(
     RegularExpression.SrcExtractor, state.Content );
   string      image;

   while( m.Success )
   {
      image = m.Groups[1].ToString( ).ToLower( );

      if ( image.IndexOf( "sexy" ) > -1 ||
        image.IndexOf( "women" ) > -1 )
      {
         DownloadImage( image );
      }

      m = m.NextMatch( );
   }
}
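
For reference, the spider's own link-crawling handler (the specialized content handler mentioned in the WebSpider overview) follows the same pattern as the handlers above. It is not reproduced in the article; the sketch below is an assumption of its general shape, using a hypothetical href-extracting expression and the AddWebPage method shown earlier:

// Assumed sketch of the spider's internal link handler. It only runs when
// the page was flagged with the "Handle Links" process instruction, pulls
// href values out of the content, and queues them via AddWebPage.
// Uses System.Text.RegularExpressions.
private void HandleLinks( WebPageState state )
{
   if ( state.ProcessInstructions.IndexOf( "Handle Links" ) == -1 )
   {
      return;
   }

   // Hypothetical expression that captures the href value of each anchor tag.
   Regex hrefExtractor = new Regex( "href\\s*=\\s*[\"']?([^\"'> ]+)",
      RegexOptions.IgnoreCase );

   for ( Match m = hrefExtractor.Match( state.Content );
         m.Success; m = m.NextMatch( ) )
   {
      // New links are resolved relative to the page they were found on.
      AddWebPage( state.Uri, m.Groups[1].ToString( ) );
   }
}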

Conclusion

The WebSpider is flexible enough to be used in a variety of useful scenarios, and could be a powerful tool for data mining websites on the Internet and intranets. I would like to hear how people have used this code.

Outstanding Issues

These issues are minor, but if anyone has any ideas then please share them.

  • state.ProcessInstructions - This is really just a quick hack to provide instructions that the content handlers can use as they see fit. I am looking for a more elegant solution to this problem.
  • Multi-threaded spider - This project first started off as a multi-threaded spider, but that soon fell by the wayside when I found that performance was much slower when using threads to process each URI. It seems that the bottleneck is in GetResponse, which does not seem to run well across multiple threads.
  • Valid URI, but query data that returns a bad page - The current processor does not handle the scenario where the URI points to a valid page, but the page returned by the web server is considered to be bad, e.g. http://www.validhost.com/validpage.html?opensubpage=invalidid. One idea to resolve this problem is to read the contents of the returned page and look for key pieces of information, but that technique is a little flaky.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
