
Motion Detection Algorithms

27 Mar 2007 · GPLv3
Some approaches to detect motion in a video stream.

Motion Detector

Introduction

There are many approaches to motion detection in a continuous video stream. All of them are based on comparing the current video frame with one of the previous frames or with something that we'll call the background. In this article, I'll try to describe some of the most common approaches.

In describing these algorithms I'll use the AForge.NET framework, which is described in some other articles on Code Project: [1], [2]. So, if you are already familiar with it, that will help.

The demo application supports the following types of video sources:

  • AVI files (using Video for Windows, interop library is included);
  • updating JPEG from internet cameras;
  • MJPEG (motion JPEG) streams from different internet cameras;
  • local capture device (USB cameras or other capture devices, DirectShow interop library is included).

Algorithms

One of the most common approaches is to compare the current frame with the previous one. This is useful in video compression, where you need to estimate changes and encode only the changes rather than the whole frame, but it is not the best choice for motion detection applications. So, let me describe the idea more closely.

Assume that we have an original 24 bpp RGB image called the current frame (image), a grayscale copy of it (currentFrame) and the previous video frame, also grayscaled (backgroundFrame). First of all, let's find the regions where these two frames differ. For this purpose we can use the Difference and Threshold filters.

C#
// create filters
Difference differenceFilter = new Difference( );
IFilter thresholdFilter = new Threshold( 15 );
// set background frame as an overlay for difference filter
differenceFilter.OverlayImage = backgroundFrame;
// apply the filters
Bitmap tmp1 = differenceFilter.Apply( currentFrame );
Bitmap tmp2 = thresholdFilter.Apply( tmp1 );

At this step we'll get an image with white pixels in the places where the current frame differs from the previous one by more than the specified threshold value. It's already possible to count these pixels, and if the count is greater than a predefined alarm level, we can signal a motion event.
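Conceptually, the Difference and Threshold filters together compute a per-pixel operation like the following plain-C# sketch over grayscale byte arrays (a simplified illustration of the idea, not the AForge implementation):

```csharp
using System;

class DifferenceDemo
{
    // White (255) where the two grayscale frames differ by more than
    // the threshold, black (0) elsewhere - which is what the
    // Difference + Threshold filter pair produces.
    public static byte[] DifferenceThreshold( byte[] current, byte[] background, int threshold )
    {
        byte[] result = new byte[current.Length];
        for ( int i = 0; i < current.Length; i++ )
        {
            int diff = Math.Abs( current[i] - background[i] );
            result[i] = (byte) ( ( diff > threshold ) ? 255 : 0 );
        }
        return result;
    }
}
```

Counting the white pixels of the result and comparing the count to an alarm level gives the simplest possible motion trigger.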

But most cameras produce a noisy image, so we'll detect motion in places where there is no motion at all. To remove such random noisy pixels, we can use an Erosion filter, for example. After that, we'll be left with mostly only the regions where actual motion occurred.

C#
// create filter
IFilter erosionFilter = new Erosion( );
// apply the filter 
Bitmap tmp3 = erosionFilter.Apply( tmp2 );

The simplest motion detector is ready! We can highlight the motion regions if needed.

C#
// extract red channel from the original image
IFilter extractChannel = new ExtractChannel( RGB.R );
Bitmap redChannel = extractChannel.Apply( image );
// merge red channel with motion regions
Merge mergeFilter = new Merge( );
mergeFilter.OverlayImage = tmp3;
Bitmap tmp4 = mergeFilter.Apply( redChannel );
// replace red channel in the original image
ReplaceChannel replaceChannel = new ReplaceChannel( RGB.R );
replaceChannel.ChannelImage = tmp4;
Bitmap tmp5 = replaceChannel.Apply( image );

Here is the result of it:

Simplest motion detector

From the above picture we can see the disadvantages of the approach. If an object is moving smoothly, we'll receive only small changes from frame to frame, so it's impossible to capture the whole moving object. Things become worse when the object moves so slowly that the algorithm gives no result at all.

There is another approach: compare the current frame not with the previous one, but with the first frame of the video sequence. If there were no objects in that initial frame, comparing the current frame with it will give us the whole moving object, independently of its motion speed. But the approach has a big disadvantage: what happens if there was, for example, a car in the first frame that later drives away? We'll then always detect motion in the place where the car was. Of course, we can renew the initial frame from time to time, but this still won't give good results in cases where we cannot guarantee that the first frame contains only static background. The inverse situation is also possible: if I hang a picture on the wall of the room, motion will be detected until the initial frame is renewed.

The most efficient algorithms are based on building a so-called background of the scene and comparing each current frame with that background. There are many approaches to modeling the background, but most of them are too complex. I'll describe my approach here; it's rather simple and can be implemented very quickly.

As in the previous case, let's assume that we have an original 24 bpp RGB image called the current frame (image), a grayscale copy of it (currentFrame) and a background frame, also grayscaled (backgroundFrame). At the beginning, we take the first frame of the video sequence as the background frame, and then we always compare the current frame with the background. But that alone would give us the result I've described above, which we obviously don't want. Instead, our approach is to "move" the background frame towards the current frame by a specified amount per frame (I've used one level per frame) - that is, we change the color of each pixel in the background frame by at most one level towards the corresponding pixel of the current frame.

C#
// create filter
MoveTowards moveTowardsFilter = new MoveTowards( );
// move background towards current frame
moveTowardsFilter.OverlayImage = currentFrame;
Bitmap tmp = moveTowardsFilter.Apply( backgroundFrame );
// dispose old background
backgroundFrame.Dispose( );
backgroundFrame = tmp;
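For clarity, the per-pixel behavior of MoveTowards can be sketched in plain C# over grayscale byte arrays (a simplified illustration of the idea, not the actual AForge code):

```csharp
class MoveTowardsDemo
{
    // Shift each background pixel by at most one gray level towards
    // the corresponding pixel of the current frame.
    public static void MoveTowards( byte[] background, byte[] current )
    {
        for ( int i = 0; i < background.Length; i++ )
        {
            if ( background[i] < current[i] )
                background[i]++;
            else if ( background[i] > current[i] )
                background[i]--;
        }
    }
}
```

After enough frames, any static scene content is absorbed into the background, while fast-moving objects never stay in one place long enough to accumulate.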

And now, we can use the same approach we've used above. But, let me extend it slightly to get a more interesting result.

C#
// create processing filters sequence
FiltersSequence processingFilter = new FiltersSequence( );
processingFilter.Add( new Difference( backgroundFrame ) );
processingFilter.Add( new Threshold( 15 ) );
processingFilter.Add( new Opening( ) );
processingFilter.Add( new Edges( ) );
// apply the filter
Bitmap tmp1 = processingFilter.Apply( currentFrame );

// extract red channel from the original image
IFilter extractChannel = new ExtractChannel( RGB.R );
Bitmap redChannel = extractChannel.Apply( image );
// merge red channel with moving object borders
Merge mergeFilter = new Merge( );
mergeFilter.OverlayImage = tmp1;
Bitmap tmp2 = mergeFilter.Apply( redChannel );
// replace red channel in the original image
ReplaceChannel replaceChannel = new ReplaceChannel( RGB.R );
replaceChannel.ChannelImage = tmp2;
Bitmap tmp3 = replaceChannel.Apply( image );
Motion detector - 2nd approach

Now it looks much better!

There is another approach based on the same idea. As in the previous cases, we have an original frame, a grayscale version of it, and a grayscale background frame. But let's apply a Pixellate filter to both the current frame and the background before further processing.

C#
// create filter
IFilter pixellateFilter = new Pixellate( );
// apply the filter
Bitmap newImage = pixellateFilter.Apply( image );

So, we have pixellated versions of the current and background frames. Now we need to move the background frame towards the current frame as we did before. The only remaining change is the main processing step:

C#
// create processing filters sequence
FiltersSequence processingFilter = new FiltersSequence( );
processingFilter.Add( new Difference( backgroundFrame ) );
processingFilter.Add( new Threshold( 15 ) );
processingFilter.Add( new Dilatation( ) );
processingFilter.Add( new Edges( ) );
// apply the filter
Bitmap tmp1 = processingFilter.Apply( currentFrame );

After merging tmp1 image with the red channel of the original image, we'll get the following image:

Motion detector - 3rd approach

It may not look as nice as the previous result, but this approach offers much greater potential for performance optimization.

Looking at the previous picture, we can see that objects are highlighted with a curve representing the moving object's boundary. But sometimes it's preferable to get a bounding rectangle instead. And what if we want not just to highlight the objects, but to get their count, positions, widths and heights? At first I thought: "Hmm, it's possible, but not so trivial." Don't worry - it's easy. It can be done using the BlobCounter class from my imaging library, which was developed recently. Using BlobCounter we can get the number of objects, and their positions and dimensions, on a binary image. So let's apply it to the binary image containing the moving objects - the result of the Threshold filter.

BlobCounter blobCounter = new BlobCounter( );
...
// get object rectangles
blobCounter.ProcessImage( thresholdedImage );
Rectangle[] rects = blobCounter.GetObjectRectangles( );
// create graphics object from initial image
Graphics g = Graphics.FromImage( image );
// draw each rectangle
using ( Pen pen = new Pen( Color.Red, 1 ) )
{
    foreach ( Rectangle rc in rects )
    {
        g.DrawRectangle( pen, rc );

        if ( ( rc.Width > 15 ) && ( rc.Height > 15 ) )
        {
            // here we can highlight large objects with something else
        }
    }
}
g.Dispose( );

Here is the result of this small piece of code - it looks pretty good. Oh, I forgot: in my original implementation, there is some code in place of that comment for processing large objects, which is why you can see small numbers drawn on the objects.

Motion detector - 4th approach

[14.06.2006] There were a lot of complaints that the idea of the MoveTowards filter, which is used for updating the background image, is hard to understand. So I thought a little about replacing this filter with something clearer, and the solution is to use the Morph filter, which became available in version 2.4 of the AForge.Imaging library. The new filter has two benefits:

  • It is much simpler to understand;
  • The implementation of the filter is more efficient, so it gives better performance.

The idea of the filter is to preserve a specified percentage of the source image and to take the missing percentage from the overlay image. So, if the filter is applied to a source image with a percent value of 60%, the result image will contain 60% of the source image and 40% of the overlay image. Applying the filter with percent values around 90% makes the background image change continuously towards the current frame.
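The blending the Morph filter performs amounts to a per-pixel weighted average, sketched here in plain C# over grayscale byte arrays (an illustration of the idea, not the AForge implementation; the real filter exposes the percentage as a filter property):

```csharp
class MorphDemo
{
    // Keep sourcePercent of the background and take the rest
    // from the current frame, per pixel.
    public static void Morph( byte[] background, byte[] current, double sourcePercent )
    {
        for ( int i = 0; i < background.Length; i++ )
        {
            background[i] = (byte) ( sourcePercent * background[i] +
                ( 1.0 - sourcePercent ) * current[i] );
        }
    }
}
```

With sourcePercent = 0.9 a static change in the scene is absorbed into the background within a few dozen frames, which is much easier to reason about than the one-level-per-frame stepping of MoveTowards.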

Motion Alarm

It is pretty easy to add a motion alarm feature to all of these motion detection algorithms. Each algorithm produces a binary image containing the difference between the current frame and the background, so all we need to do is count the white pixels in this difference image.

// Calculate white pixels
private int CalculateWhitePixels( Bitmap image )
{
    int count  = 0;
    int width  = image.Width;
    int height = image.Height;
    // lock difference image
    BitmapData data = image.LockBits( new Rectangle( 0, 0, width, height ),
        ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed );
    int offset = data.Stride - width;
    unsafe
    {
        byte* ptr = (byte*) data.Scan0.ToPointer( );
        for ( int y = 0; y < height; y++ )
        {
            for ( int x = 0; x < width; x++, ptr++ )
            {
                // white (255) pixels have the high bit set
                count += ( (*ptr) >> 7 );
            }
            ptr += offset;
        }
    }
    // unlock image
    image.UnlockBits( data );
    return count;
}

For some algorithms this can be done even more simply. For example, in the blob counting approach we can accumulate not the white pixel count, but the area of each detected object. Then, if the computed amount of change is greater than a predefined value, we can fire an alarm event.
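Assuming the blob rectangles are already available, that accumulation can be sketched like this (a hypothetical helper for illustration, not code from the demo application):

```csharp
using System.Drawing;

class AlarmDemo
{
    // Fire an alarm when the total area of the detected objects
    // exceeds the given fraction of the frame area.
    public static bool CheckAlarm( Rectangle[] rects, int frameWidth, int frameHeight, double alarmLevel )
    {
        int area = 0;
        foreach ( Rectangle rc in rects )
            area += rc.Width * rc.Height;
        return ( (double) area / ( frameWidth * frameHeight ) ) > alarmLevel;
    }
}
```

Using the blob areas instead of raw white-pixel counts also makes it easy to ignore objects below a minimum size.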

Video Saving

There are many different ways to handle a motion alarm event: just draw a blinking rectangle around the video, or play a sound to attract attention. But the most useful one, of course, is saving the video upon motion detection. In the demo application I used the AVIWriter class, which uses Video for Windows interop to provide AVI file saving. Here is a small sample that uses the class to write a small AVI file containing a diagonal line:

SaveFileDialog sfd = new SaveFileDialog( );
if ( sfd.ShowDialog( ) == DialogResult.OK )
{
    AVIWriter writer = new AVIWriter( "wmv3" );
    try
    {
        writer.Open( sfd.FileName, 320, 240 );
        Bitmap bmp = new Bitmap( 320, 240, PixelFormat.Format24bppRgb );
        for ( int i = 0; i < 100; i++ )
        {
            bmp.SetPixel( i, i, Color.FromArgb( i, 0, 255 - i ) );
            writer.AddFrame( bmp );
        }
        bmp.Dispose( );
    }
    catch ( ApplicationException )
    {
        // ignore write failures in this small sample
    }
    writer.Dispose( );
}

Note: In this small sample and in the demo application I was using Windows Media Video 9 VCM codec.

AForge.NET framework

The Motion Detection application is based on the AForge.NET framework, which provides all the filters and image processing routines used in this application. To get more information about the framework, you can read the dedicated article on Code Project or visit the project's home page, where you can find all the latest information about it, participate in a discussion group, or submit issues and requests for enhancements.

Applications for motion detection

From time to time, people ask me a question which seems a little strange to me: "What are motion detectors used for?" There is a lot you can do with them - it depends on your imagination. One of the most straightforward applications is video surveillance, but it is not the only one. Since the first release of this application, I've received many e-mails from different people who have applied it to remarkable things. Some of them have written their own articles, so you can take a look.

Conclusion

I've described only the ideas here. To use them in real applications, you need to optimize the implementation. I've used an image processing library for simplicity - it's not a video processing library. Besides, the library allows me to explore different areas more quickly than writing optimized solutions from the beginning. A small sample of optimization can be found in the sources.

History

  • [20.04.2007] - 1.5
    • Project converted to .NET 2.0;
    • Integrated with AForge.NET framework;
    • Motion detectors updated to use new features of AForge.NET to speed-up processing.
  • [15.06.2006] - 1.4 - Added fifth method, based on the Morph filter of the AForge.Imaging library.
  • [08.04.2006] - 1.3 - Motion alarm and video saving.
  • [22.08.2005] - 1.2 - Added fourth method (getting objects' rectangles with blob counter).
  • [01.06.2005] - 1.1 - Added support of local capture devices and MMS streams.
  • [30.04.2005] - 1.0 - Initial release.

License

This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)


Written By
Software Developer, IBM
United Kingdom
Started software development at about 15 years old, and it seems it has now lasted for most of my life. Fortunately I did not spend too much time with the Z80 and BK0010 and switched to the 8086 and beyond. Similarly with programming languages - luckily I managed to get away from BASIC and Pascal to things like Assembler, C, C++ and then C#. Apart from daily programming for food, I also do it as a hobby, where I mostly enjoy areas like Computer Vision, Robotics and AI. This has led to some open source projects, such as AForge.NET, Computer Vision Sandbox, cam2web and ANNT.

Comments and Discussions

 
AnswerRe: are thare any problem at mms Pin
Andrew Kirillov6-Aug-06 20:39
Andrew Kirillov6-Aug-06 20:39 
GeneralRe: are thare any problem at mms Pin
muhammetbalcilar7-Aug-06 1:33
muhammetbalcilar7-Aug-06 1:33 
GeneralRe: are thare any problem at mms Pin
Andrew Kirillov7-Aug-06 19:22
Andrew Kirillov7-Aug-06 19:22 
Question3D & Displaying bitmap in pictureBox Pin
Tim W30-Jul-06 8:31
Tim W30-Jul-06 8:31 
AnswerRe: 3D & Displaying bitmap in pictureBox Pin
Andrew Kirillov30-Jul-06 20:49
Andrew Kirillov30-Jul-06 20:49 
QuestionHow can i change the Video Input Pin
ammar7926-Jul-06 23:23
ammar7926-Jul-06 23:23 
AnswerRe: How can i change the Video Input Pin
Andrew Kirillov30-Jul-06 20:55
Andrew Kirillov30-Jul-06 20:55 
GeneralRe: How can i change the Video Input Pin
Dan Thurman31-Jul-06 14:40
Dan Thurman31-Jul-06 14:40 
I have integrated DirectX.Capture and have gotten
DirectX.Capture menus working togehter although the
devices are seperate. I have a 'capture' class and
CaptureDevices class, each working seperately but at
least this gives me a RAD by way of experimenting.

Once I can experimentally verify that I can get the
IAMStreamConfig interfaces/code to work with the grabber,
then I will clean up the code afterwards and into a final
product.

At this time, I was able to change the videoDevice's stream
configuration from my frame default format size of 358x288
to 640x480 but no matter what size I choose, the grabber
image always breaks up from one image into 6 images, flipped
about the hortizontal, black and white and it appears in what
it looks like 4 rows; the first row is ripped up (no images seen)
the 2nd row has 3, 160x120 images, the 3rd row same as 1st row,
and the 4th row same as 2nd row.

For those who want to play around, here is the
code: CaptureDevices.cs. Please Note!!!! Some
code depends on DirectX.Capture... but with a little
work, you can manually add the missing support code.

<br />
// Motion Detector<br />
//<br />
// Copyright © Andrew Kirillov, 2005<br />
// andrew.kirillov@gmail.com<br />
//<br />
namespace VideoSource<br />
{<br />
	using System;<br />
	using System.Drawing;<br />
	using System.Drawing.Imaging;<br />
	using System.IO;<br />
        using System.Reflection;<br />
	using System.Threading;<br />
	using System.Runtime.InteropServices;<br />
	using System.Net;<br />
<br />
	using dshow;<br />
	using dshow.Core;<br />
<br />
	/// <summary><br />
	/// CaptureDevice - capture video from local device<br />
	/// </summary><br />
	public class CaptureDevice : IVideoSource<br />
	{<br />
		private string	source;<br />
        private double  framerate;<br />
        private Size    framesize;<br />
		private object	userData = null;<br />
		private int		framesReceived;<br />
<br />
        // Configuration streams<br />
        private IAMStreamConfig videoStreamConfig = null;<br />
<br />
		private Thread	thread = null;<br />
		private ManualResetEvent stopEvent = null;<br />
<br />
		// new frame event<br />
		public event CameraEventHandler NewFrame;<br />
<br />
		// VideoSource property<br />
		public virtual string VideoSource<br />
		{<br />
			get { return source; }<br />
			set { source = value; }<br />
		}<br />
<br />
        // Frame Rate<br />
        public double FrameRate<br />
        {<br />
            get { return framerate; }<br />
            set { framerate = value; }<br />
        }<br />
<br />
        // Frame Size<br />
        public Size FrameSize<br />
        {<br />
            get { return framesize; }<br />
            set { framesize = value; }<br />
        }<br />
        <br />
        // Login property<br />
		public string Login<br />
		{<br />
			get { return null; }<br />
			set { }<br />
		}<br />
		// Password property<br />
		public string Password<br />
		{<br />
			get { return null; }<br />
			set {  }<br />
		}<br />
		// FramesReceived property<br />
		public int FramesReceived<br />
		{<br />
			get<br />
			{<br />
				int frames = framesReceived;<br />
				framesReceived = 0;<br />
				return frames;<br />
			}<br />
		}<br />
		// BytesReceived property<br />
		public int BytesReceived<br />
		{<br />
			get { return 0; }<br />
		}<br />
		// UserData property<br />
		public object UserData<br />
		{<br />
			get { return userData; }<br />
			set { userData = value; }<br />
		}<br />
		// Get state of the video source thread<br />
		public bool Running<br />
		{<br />
			get<br />
			{<br />
				if (thread != null)<br />
				{<br />
					if (thread.Join(0) == false)<br />
						return true;<br />
<br />
					// the thread is not running, so free resources<br />
					Free();<br />
				}<br />
				return false;<br />
			}<br />
		}<br />
<br />
<br />
		// Constructor<br />
		public CaptureDevice()<br />
		{<br />
		}<br />
<br />
		// Start work<br />
		public void Start()<br />
		{<br />
			if (thread == null)<br />
			{<br />
				framesReceived = 0;<br />
<br />
				// create events<br />
				stopEvent	= new ManualResetEvent(false);<br />
				<br />
				// create and start new thread<br />
				thread = new Thread(new ThreadStart(WorkerThread));<br />
				thread.Name = source;<br />
				thread.Start();<br />
			}<br />
		}<br />
<br />
		// Signal thread to stop work<br />
		public void SignalToStop()<br />
		{<br />
			// stop thread<br />
			if (thread != null)<br />
			{<br />
				// signal to stop<br />
				stopEvent.Set();<br />
			}<br />
		}<br />
<br />
		// Wait for thread stop<br />
		public void WaitForStop()<br />
		{<br />
			if (thread != null)<br />
			{<br />
				// wait for thread stop<br />
				thread.Join();<br />
<br />
				Free();<br />
			}<br />
		}<br />
<br />
		// Abort thread<br />
		public void Stop()<br />
		{<br />
			if (this.Running)<br />
			{<br />
				thread.Abort();<br />
				// WaitForStop();<br />
			}<br />
		}<br />
<br />
		// Free resources<br />
		private void Free()<br />
		{<br />
			thread = null;<br />
<br />
			// release events<br />
			stopEvent.Close();<br />
			stopEvent = null;<br />
		}<br />
<br />
		// Thread entry point<br />
		public void WorkerThread()<br />
		{<br />
            int hr;<br />
            Guid cat;<br />
            Guid med;<br />
<br />
            // grabber<br />
			Grabber	grabber = new Grabber(this);<br />
<br />
			// objects<br />
			object	graphObj = null;<br />
			object	grabberObj = null;<br />
<br />
			// interfaces<br />
			IGraphBuilder	        graphBuilder = null;<br />
                        ICaptureGraphBuilder2   captureGraphBuilder = null;<br />
			IBaseFilter		videoDeviceFilter = null;<br />
			IBaseFilter		grabberFilter = null;<br />
			ISampleGrabber	        sg = null;<br />
			IMediaControl	        mc = null;<br />
<br />
			try<br />
			{                <br />
                // Make a new filter graph<br />
                graphObj = Activator.CreateInstance(<br />
                       Type.GetTypeFromCLSID(Clsid.FilterGraph, true));<br />
                graphBuilder = (IGraphBuilder)graphObj;<br />
<br />
                // Get the Capture Graph Builder<br />
                Guid clsid = Clsid.CaptureGraphBuilder2;<br />
                Guid riid = typeof(ICaptureGraphBuilder2).GUID;<br />
                captureGraphBuilder = (ICaptureGraphBuilder2)<br />
                   TempFix.CreateDsInstance(ref clsid, ref riid);<br />
<br />
                // Link the CaptureGraphBuilder to the filter graph<br />
                hr = captureGraphBuilder.SetFiltergraph(graphBuilder);<br />
                if (hr < 0) Marshal.ThrowExceptionForHR(hr);<br />
<br />
                // Get the video device and add it to the filter graph<br />
                if (source != null)<br />
                {<br />
                    videoDeviceFilter = (IBaseFilter)<br />
                            Marshal.BindToMoniker(source);<br />
                    hr = graphBuilder.AddFilter(videoDeviceFilter,<br />
                              "Video Capture Device");<br />
                    if (hr < 0) Marshal.ThrowExceptionForHR(hr);<br />
                }<br />
<br />
                // create sample grabber, object and filter<br />
                grabberObj = Activator.CreateInstance(<br />
                     Type.GetTypeFromCLSID(Clsid.SampleGrabber, true));<br />
                grabberFilter = (IBaseFilter)grabberObj;<br />
                sg = (ISampleGrabber)grabberObj;<br />
<br />
                // add sample grabber filter to filter graph<br />
                hr = graphBuilder.AddFilter(grabberFilter, "grabber");<br />
                if (hr < 0) Marshal.ThrowExceptionForHR(hr);<br />
<br />
                // Try looking for an video device interleaved media type<br />
                IBaseFilter testFilter = videoDeviceFilter;<br />
                    // grabberFilter (not supported)<br />
                object o;<br />
                cat = PinCategory.Capture;<br />
                med = MediaType.Interleaved;<br />
                Guid iid = typeof(IAMStreamConfig).GUID;<br />
                hr = captureGraphBuilder.FindInterface(<br />
                    ref cat, ref med, testFilter, ref iid, out o);<br />
<br />
                if (hr != 0)<br />
                {<br />
                    // If not found, try looking for a video media type<br />
                    med = MediaType.Video;<br />
                    hr = captureGraphBuilder.FindInterface(<br />
                        ref cat, ref med, testFilter, ref iid, out o);<br />
<br />
                    if (hr != 0)<br />
                        o = null;<br />
                }<br />
                // Set the video stream configuration to data member<br />
                videoStreamConfig = o as IAMStreamConfig;<br />
                o = null;<br />
                <br />
                // Experimental testing: Try to set the Frame Size & Rate<br />
                // Results: When enabled, the grabber video breaks up into<br />
                //          several duplicate frames (6 frames)<br />
                bool bdebug = true;<br />
                if(bdebug)<br />
                {<br />
                    BitmapInfoHeader bmiHeader;<br />
                    bmiHeader = (BitmapInfoHeader)<br />
                       getStreamConfigSetting(videoStreamConfig, "BmiHeader");<br />
                    bmiHeader.Width = framesize.Width;<br />
                    bmiHeader.Height = framesize.Height;<br />
                    setStreamConfigSetting(videoStreamConfig,<br />
                            "BmiHeader", bmiHeader);<br />
<br />
                    long avgTimePerFrame = (long)(10000000 / framerate);<br />
                    setStreamConfigSetting(videoStreamConfig, <br />
                        "AvgTimePerFrame", avgTimePerFrame);<br />
                }<br />
                <br />
                // connect pins (Turns on the video device)<br />
                if (graphBuilder.Connect(DSTools.GetOutPin(<br />
                       videoDeviceFilter, 0), <br />
                       DSTools.GetInPin(grabberFilter, 0)) < 0)<br />
                    throw new ApplicationException(<br />
                         "Failed connecting filters");<br />
<br />
                // Set the sample grabber media type settings<br />
                AMMediaType mt = new AMMediaType();<br />
                mt.majorType = MediaType.Video;<br />
                mt.subType = MediaSubType.RGB24;<br />
                sg.SetMediaType(mt);<br />
<br />
                // get media type<br />
				if (sg.GetConnectedMediaType(mt) == 0)<br />
				{<br />
					VideoInfoHeader vih = (VideoInfoHeader) Marshal.PtrToStructure(mt.formatPtr, typeof(VideoInfoHeader));<br />
                    System.Diagnostics.Debug.WriteLine("width = " + vih.BmiHeader.Width + ", height = " + vih.BmiHeader.Height);<br />
                    grabber.Width = vih.BmiHeader.Width;<br />
                    grabber.Height = vih.BmiHeader.Height;<br />
					mt.Dispose();<br />
				}<br />
<br />
                // render<br />
				graphBuilder.Render(DSTools.GetOutPin(grabberFilter, 0));<br />
<br />
				// Set various sample grabber properties<br />
				sg.SetBufferSamples(false);<br />
				sg.SetOneShot(false);<br />
				sg.SetCallback(grabber, 1);<br />
<br />
				// Do not show active (source) window<br />
				IVideoWindow win = (IVideoWindow) graphObj;<br />
                win.put_AutoShow(false);<br />
				win = null;<br />
<br />
				// get media control<br />
				mc = (IMediaControl) graphObj;<br />
<br />
				// run<br />
				mc.Run();<br />
<br />
				while (!stopEvent.WaitOne(0, true))<br />
				{<br />
					Thread.Sleep(100);<br />
				}<br />
				mc.StopWhenReady();<br />
			}<br />
			// catch any exceptions<br />
			catch (Exception e)<br />
			{<br />
				System.Diagnostics.Debug.WriteLine("----: " + e.Message);<br />
			}<br />
			// finalization block<br />
			finally<br />
			{<br />
				// release all objects<br />
				mc = null;<br />
				graphBuilder = null;<br />
                captureGraphBuilder = null;<br />
				videoDeviceFilter = null;<br />
				grabberFilter = null;<br />
				sg = null;<br />
<br />
				if (graphObj != null)<br />
				{<br />
					Marshal.ReleaseComObject(graphObj);<br />
					graphObj = null;<br />
				}<br />
				if (grabberObj != null)<br />
				{<br />
					Marshal.ReleaseComObject(grabberObj);<br />
					grabberObj = null;<br />
				}<br />
			}<br />
		}<br />
<br />
		// new frame<br />
		protected void OnNewFrame(Bitmap image)<br />
		{<br />
			framesReceived++;<br />
			if ((!stopEvent.WaitOne(0, true)) && (NewFrame != null))<br />
				NewFrame(this, new CameraEventArgs(image));<br />
		}<br />
<br />
        // Grabber<br />
		private class Grabber : ISampleGrabberCB<br />
		{<br />
			private CaptureDevice parent;<br />
			private int width, height;<br />
<br />
			// Width property<br />
			public int Width<br />
			{<br />
				get { return width; }<br />
				set { width = value; }<br />
			}<br />
			// Height property<br />
			public int Height<br />
			{<br />
				get { return height; }<br />
				set { height = value; }<br />
			}<br />
<br />
			// Constructor<br />
			public Grabber(CaptureDevice parent)<br />
			{<br />
				this.parent = parent;<br />
			}<br />
<br />
			//<br />
			public int SampleCB(double SampleTime, IntPtr pSample)<br />
			{<br />
				return 0;<br />
			}<br />
<br />
			// Callback method that receives a pointer to the sample buffer<br />
			public int BufferCB(double SampleTime, IntPtr pBuffer, int BufferLen)<br />
			{<br />
				// create new image<br />
				System.Drawing.Bitmap img = new Bitmap(width, height, PixelFormat.Format24bppRgb);<br />
<br />
				// lock bitmap data<br />
				BitmapData	bmData = img.LockBits(<br />
					new Rectangle(0, 0, width, height),<br />
					ImageLockMode.ReadWrite,<br />
					PixelFormat.Format24bppRgb);<br />
<br />
				// copy image data<br />
				int srcStride = bmData.Stride;<br />
				int dstStride = bmData.Stride;<br />
<br />
				int dst = bmData.Scan0.ToInt32() + dstStride * (height - 1);<br />
				int src = pBuffer.ToInt32();<br />
<br />
				for (int y = 0; y < height; y++)<br />
				{<br />
					Win32.memcpy(dst, src, srcStride);<br />
					dst -= dstStride;<br />
					src += srcStride;<br />
				}<br />
<br />
				// unlock bitmap data<br />
				img.UnlockBits(bmData);<br />
<br />
				// notify parent<br />
				parent.OnNewFrame(img);<br />
<br />
				// release the image<br />
				img.Dispose();<br />
<br />
				return 0;<br />
			}<br />
		}<br />
<br />
        protected object getStreamConfigSetting(IAMStreamConfig streamConfig, string fieldName)
        {
            if (streamConfig == null)
                throw new NotSupportedException();

            object returnValue = null;
            IntPtr pmt = IntPtr.Zero;
            AMMediaType mediaType = new AMMediaType();

            try
            {
                // Get the current format info
                int hr = streamConfig.GetFormat(out pmt);
                if (hr != 0)
                    Marshal.ThrowExceptionForHR(hr);
                Marshal.PtrToStructure(pmt, mediaType);

                // The formatPtr member points to different structures
                // depending on the formatType
                object formatStruct;
                if (mediaType.formatType == FormatType.WaveEx)
                    formatStruct = new WaveFormatEx();
                else if (mediaType.formatType == FormatType.VideoInfo)
                    formatStruct = new VideoInfoHeader();
                else if (mediaType.formatType == FormatType.VideoInfo2)
                    formatStruct = new VideoInfoHeader2();
                else
                    throw new NotSupportedException("This device does not support a recognized format block.");

                // Retrieve the nested structure
                Marshal.PtrToStructure(mediaType.formatPtr, formatStruct);

                // Find the required field
                Type structType = formatStruct.GetType();
                FieldInfo fieldInfo = structType.GetField(fieldName);
                if (fieldInfo == null)
                    throw new NotSupportedException("Unable to find the member '" + fieldName + "' in the format block.");

                // Extract the field's current value
                returnValue = fieldInfo.GetValue(formatStruct);

            }
            finally
            {
                //DsUtils.FreeAMMediaType(mediaType);
                Marshal.FreeCoTaskMem(pmt);
            }

            return (returnValue);
        }
        <br />
        protected object setStreamConfigSetting(IAMStreamConfig streamConfig, string fieldName, object newValue)
        {
            if (streamConfig == null)
                throw new NotSupportedException();

            object returnValue = null;
            IntPtr pmt = IntPtr.Zero;
            AMMediaType mediaType = new AMMediaType();

            try
            {
                // Get the current format info
                int hr = streamConfig.GetFormat(out pmt);
                if (hr != 0)
                    Marshal.ThrowExceptionForHR(hr);
                Marshal.PtrToStructure(pmt, mediaType);

                // The formatPtr member points to different structures
                // depending on the formatType
                object formatStruct;
                if (mediaType.formatType == FormatType.WaveEx)
                    formatStruct = new WaveFormatEx();
                else if (mediaType.formatType == FormatType.VideoInfo)
                    formatStruct = new VideoInfoHeader();
                else if (mediaType.formatType == FormatType.VideoInfo2)
                    formatStruct = new VideoInfoHeader2();
                else
                    throw new NotSupportedException("This device does not support a recognized format block.");

                // Retrieve the nested structure
                Marshal.PtrToStructure(mediaType.formatPtr, formatStruct);

                // Find the required field
                Type structType = formatStruct.GetType();
                FieldInfo fieldInfo = structType.GetField(fieldName);
                if (fieldInfo == null)
                    throw new NotSupportedException("Unable to find the member '" + fieldName + "' in the format block.");

                // Update the value of the field
                fieldInfo.SetValue(formatStruct, newValue);

                // PtrToStructure copies the data so we need to copy it back
                Marshal.StructureToPtr(formatStruct, mediaType.formatPtr, false);

                // Save the changes
                hr = streamConfig.SetFormat(mediaType);
                if (hr != 0)
                    Marshal.ThrowExceptionForHR(hr);
            }
            finally
            {
                //DsUtils.FreeAMMediaType(mediaType);
                Marshal.FreeCoTaskMem(pmt);
            }

            return (returnValue);
        }
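For illustration, these two reflection-based helpers might be used to read and change the capture resolution through the video format block. The sketch below is hypothetical: the field name `"BmiHeader"` and the `BitmapInfoHeader` member names (`Width`, `Height`) follow the interop definitions this project is assumed to use and may be named differently in other DirectShow wrappers:

```csharp
// Hypothetical usage sketch - read the current frame size from the
// VideoInfoHeader's BmiHeader field, then request a new resolution.
BitmapInfoHeader bmiHeader =
    (BitmapInfoHeader) getStreamConfigSetting( streamConfig, "BmiHeader" );

int currentWidth  = bmiHeader.Width;
int currentHeight = bmiHeader.Height;

// request 320x240 and write the modified header back to the device
bmiHeader.Width  = 320;
bmiHeader.Height = 240;
setStreamConfigSetting( streamConfig, "BmiHeader", bmiHeader );
```

Note that `setStreamConfigSetting` will throw (via `Marshal.ThrowExceptionForHR`) if the device rejects the requested format, so callers should be prepared to handle that failure.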
<br />
	} // Main Class
} // Namespace
