
Play AVI files in Silverlight 4 using MediaElement and MediaStreamSource

26 Apr 2011
This code demonstrates how to use Silverlight out-of-browser (OOB) with elevated trust to play a local video file (.avi).

Introduction

This article demonstrates the power of the MediaElement and MediaStreamSource classes. We will write some code to play an AVI video stored locally on your computer.

[Image: MediaStreamImg1.jpg]

Background

With the new features introduced in Silverlight 4, I wanted to write a simple application to play an AVI video file. Doing so took quite some research. Initially, I played around with WriteableBitmap, but later discovered the powerful capabilities and features provided by the MediaStreamSource class.

This article barely scratches the surface of the capabilities the MediaStreamSource class offers developers. It therefore does not delve into decoding video files; it only demonstrates how to buffer samples and supply them to the MediaElement control through a custom class derived from MediaStreamSource. The decoding is handled by a DLL (AVIDll.dll), included in the sample download, which we use to return video samples as a byte array. The source of this DLL is not included in this article; it is only a simple wrapper around the AVI functions (using P/Invoke) written in VB6 as an ActiveX DLL. There are a good number of articles out there, including some on CodeProject, that deal with opening AVI files (using avifil32.dll and other DLLs), such as http://www.codeproject.com/KB/audio-video/avifilewrapper.aspx, and a very old yet still very useful website, http://www.shrinkwrapvb.com/avihelp/avihelp.htm.

In our sample code, we first need to derive our custom class from System.Windows.Media.MediaStreamSource. This requires us to override a number of methods: OpenMediaAsync, GetSampleAsync, CloseMedia, SeekAsync, GetDiagnosticAsync, and SwitchMediaStreamAsync. I will not dig deep into each of these, but the two we shall actually use in our example code are:

  • OpenMediaAsync: We override this method, and in it, we initialize and report some metadata about the media by calling the ReportOpenMediaCompleted() method.
  • GetSampleAsync: We override this method and retrieve the next sample requested by the MediaElement. MediaElement will call this method every time it needs a sample. To report back to MediaElement that the sample is ready, we call the ReportGetSampleCompleted() method.

Some good books to read on the subject include 'Silverlight 4 in Action' and 'Silverlight Recipes - A Problem Solution Approach'.

Our main objective in this article is to write a simple Silverlight application that plays back an AVI video. Note that for the video (.avi) to play, you must have the relevant codec installed on your machine.

We shall use the following simple steps to achieve our goal:

  • Prepare a simple UI with a MediaElement control, two buttons, and a checkbox.
  • Write our custom class that is derived from System.Windows.Media.MediaStreamSource and override all the required methods.
  • Set our custom class as the media source of the MediaElement control in the UI code-behind.

This sample was tested using Silverlight 4.

Step 1

  • Create a new Silverlight project (C#) and call it MediaStreamSrc (or whatever name you wish).
  • Open the generated UserControl named MainPage.xaml.
  • Add a MediaElement control and name it mediaPlayer.
  • Add two buttons and name them OpenStream and CloseStream.
  • Add a checkbox and name it chkFlip.

[Image: MediaStreamImg2.jpg]

Make sure you enable out-of-browser support for the project and also check "Require elevated trust when running outside the browser".
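
When you tick these options, Visual Studio generates an OutOfBrowserSettings.xml file in the project's Properties folder. A trimmed-down sketch of the relevant part (your generated file will contain more settings than shown here) looks like this:

<OutOfBrowserSettings ShortName="MediaStreamSrc Application"
                      EnableGPUAcceleration="False"
                      ShowInstallMenuItem="True">
  <OutOfBrowserSettings.SecuritySettings>
    <SecuritySettings ElevatedPermissions="Required" />
  </OutOfBrowserSettings.SecuritySettings>
</OutOfBrowserSettings>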

<Grid x:Name="LayoutRoot" Background="black">

<MediaElement x:Name="mediaPlayer"
        AutoPlay="True"
        Stretch="Uniform"
        Margin="5,35,5,5"
        Opacity="1"
        Width="640"
        Height="480" />

<Button Content="Open AVI File" Height="23" Margin="29,12,0,0" 
  Name="OpenStream" HorizontalAlignment="Left" 
  Width="123" VerticalAlignment="Top"/>

<Button Content="Close AVI File" Height="23" 
  Margin="175,12,0,0" Name="CloseStream" 
  HorizontalAlignment="Left" Width="123" 
  VerticalAlignment="Top"/>

<CheckBox Content="FLIP IMAGE" Height="16" 
  Margin="0,21,12,0" Name="chkFlip" 
  VerticalAlignment="Top" Foreground="#FFD4D4D4" 
  IsChecked="True" HorizontalAlignment="Right" Width="104" />

</Grid>

The MediaElement control mediaPlayer will be used to display our video. The OpenStream button will initialize our custom MediaStreamSource object and assign it as the media stream to mediaPlayer. The CloseStream button will close and stop the stream. The chkFlip checkbox toggles whether we flip our samples.

We will come back to the UI code-behind later to wire up the remaining code.

Step 2

Create a new class, name it MyDerivedMediaStreamSource, and override all the required methods.

public class MyDerivedMediaStreamSource : MediaStreamSource
{
    // Declare some variables
    // ...

    protected override void OpenMediaAsync() { }

    protected override void GetSampleAsync( MediaStreamType mediaStreamType ) { }

    protected override void SeekAsync( long seekToTime ) { }

    protected override void GetDiagnosticAsync( 
        MediaStreamSourceDiagnosticKind diagnosticKind ) { }

    protected override void SwitchMediaStreamAsync( 
        MediaStreamDescription mediaStreamDescription ) { }

    protected override void CloseMedia() { }

    // Other supporting methods ...
}

We then declare some member variables in our derived class:

// Media Stream Description
MediaStreamDescription _videoDesc; 

// best to set these after retrieving actual values from the video metadata
private const int _frameWidth = 640; 
private const int _frameHeight = 480;
public static long _speed = 30; // 30fps

// Rendering time in the media
private long _timeStamp = 0;
                
// Number of bytes of each pixel (4 bytes - RGBA)
// for the samples we supply to MediaElement
private const int _framePixelSize = 4;

// Size in bytes for each Sample of type RGBA (4 bytes per pixel)
private const int _count = _frameHeight * _frameWidth * _framePixelSize;

// Size in bytes of the stream (same as the Sample size in bytes above)
private const int _frameStreamSize = _count;

// Stream to contain a Sample
private MemoryStream _stream = new MemoryStream( _frameStreamSize );

// The Offset into the stream where the actual sample data begins
private int _offset = 0;

// Buffer to hold a collection of type Sample. 
private Queue<Sample> sampleBufferList = new Queue<Sample>();

// Timeout when waiting for the buffer
// (_speed, the video fps, is used as a value in seconds)
private TimeSpan timeout = TimeSpan.FromSeconds((double)_speed);

// Empty Dictionary used in the returned sample.
private Dictionary<MediaSampleAttributeKeys, string> emptyDictionary = 
    new Dictionary<MediaSampleAttributeKeys, string>();

// Total number of Samples to buffer.
// I set to 15 as an example, but you can increase or decrease
private const int numberOfSamplesBuffer = 15;

This is not the complete set of member variables; we shall declare a few more later that are relevant to the supporting method defined further down in the code. That supporting method is used to process samples and place them in a buffer (the sampleBufferList declared above).
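
The Sample type held by the queue is not listed in this article's text (it is in the download); a minimal sketch consistent with how its members are used later would be:

// Minimal sketch of the Sample class used by the buffer queue.
// The member names match how they are used later in this article.
public class Sample
{
    public byte[] sampleBuffer;   // the RGBA frame data
    public DateTime sampleTime;   // capture time (not used in this code)
}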

The OpenMediaAsync() method (see code below) is where we perform the initialization of our media and inform the MediaElement that we are ready to supply it with media samples via the overridden GetSampleAsync() method. Now let us add some code to our overridden OpenMediaAsync() method, as shown below.

protected override void OpenMediaAsync()
{
    Dictionary<MediaStreamAttributeKeys, string> streamAttributes = 
        new Dictionary<MediaStreamAttributeKeys, string>();

    // We are going to convert our video frames/samples from RGB to RGBA
    streamAttributes[MediaStreamAttributeKeys.VideoFourCC] = "RGBA";
    streamAttributes[MediaStreamAttributeKeys.Height] = _frameHeight.ToString();
    streamAttributes[MediaStreamAttributeKeys.Width] = _frameWidth.ToString();

    _videoDesc = new MediaStreamDescription( MediaStreamType.Video, streamAttributes );

    List<MediaStreamDescription> availableStreams = 
        new List<MediaStreamDescription>();
    availableStreams.Add(_videoDesc);

    Dictionary<MediaSourceAttributesKeys, string> sourceAttributes = 
        new Dictionary<MediaSourceAttributesKeys, string>();
    sourceAttributes[MediaSourceAttributesKeys.Duration] = 
        TimeSpan.FromSeconds(0).Ticks.ToString();
    sourceAttributes[MediaSourceAttributesKeys.CanSeek] = false.ToString();

    ReportOpenMediaCompleted(sourceAttributes, availableStreams);
}

After initializing and setting the necessary metadata, we inform the MediaElement control that we are ready to start supplying samples by calling the ReportOpenMediaCompleted() method. This method accepts two parameters: the first describes attributes of the media source as a whole, and the second is a collection describing each available audio/video stream. In our example, we are only demonstrating one stream, video.

Note that in the code we pass the string value "RGBA" for the attribute key VideoFourCC. AVIDll.dll returns a byte stream containing an RGB sample; we will add an extra byte per pixel for the alpha channel, turning RGB into RGBA, a four-channel uncompressed video format. The VideoFourCC attribute is the data needed to instantiate a video codec, a four-character value also known as FourCC.

The next two attributes, Width and Height, are self-explanatory. We then create a new instance of MediaStreamDescription and add it to the availableStreams list. When all is done, we call ReportOpenMediaCompleted() and pass in the source attributes and the available streams.

After calling ReportOpenMediaCompleted(), MediaElement will start playing our media, asking for video samples via the overridden GetSampleAsync() method. So this is where we need to somehow acquire a video sample every time our MediaElement calls for one, and notify MediaElement when it is ready.

Let us now write the code for the GetSampleAsync() method. In our implementation, we simply spawn a new thread to handle retrieving our sample and immediately return from the method.

protected override void GetSampleAsync( MediaStreamType mediaStreamType )
{
    if (mediaStreamType == MediaStreamType.Video)
    {
        // start a thread to get the sample  
        Thread thread = new Thread(new ThreadStart( this.retrieveSampleThread));
        thread.Start();

        return;
    }
}

The thread will go on to check for a sample. Notice that we defined only one MediaStreamType, Video, when initializing our media in the OpenMediaAsync() method, but for illustration purposes we still check the type of media stream the MediaElement is requesting. You need to check the requested media stream type if you added more than one, such as Video and Audio.

The thread above will be responsible for returning a sample to MediaElement when one is available, by calling ReportGetSampleCompleted(), or for informing MediaElement that we are still buffering, by calling ReportGetSampleProgress(), if we cannot return one in time.

Please note that in my sample implementation, I decided to create a stream that contains only one sample at a time. You may decide instead to create a complete stream read from a file, in which case you might have to seek to the appropriate position in the stream every time a sample is requested.

private void retrieveSampleThread()
{
    // seek to the beginning of the stream
    _stream.Seek(0, SeekOrigin.Begin);
    _offset = 0;

    // Declare a Sample
    Sample _sample = null;

    // try to lock our object (basically the sampleBufferList)
    lock (this)
    {
        // check if our sampleBufferList is empty
        if (this.sampleBufferList.Count == 0)
        {
            // Release the lock and wait to reacquire it
            if (!Monitor.Wait(this, this.timeout))
            {
                // We are busy buffering ...
                this.ReportGetSampleProgress(0);
                return;
            }
        }

        // dequeue the first Sample in the buffer
        _sample = this.sampleBufferList.Dequeue();

        // immediately notify a waiting thread in the queue
        Monitor.Pulse(this);
    }

    // write the retrieved Sample into the stream
    _stream.Write(_sample.sampleBuffer, 0, _count);

    MediaStreamSample mediaSample = new MediaStreamSample(
        _videoDesc,
        _stream,
        _offset,
        _count,
        _timeStamp,
        this.emptyDictionary);

    // Increment _timeStamp
    // Notice that I multiply by 2. I cannot explain yet why
    _timeStamp += (int)TimeSpan.FromSeconds((double)1 / _speed).Ticks * 2;

    // report back a successful Sample
    this.ReportGetSampleCompleted(mediaSample);
}

The Sample in the code above is a class with two member variables, sampleTime and sampleBuffer (sketched earlier). I did not make any use of sampleTime anywhere in the code; sampleBuffer, on the other hand, is the actual byte array already converted from RGB to RGBA.

sampleBufferList is of type Queue<Sample>, a first-in, first-out collection of Sample objects. We first acquire a lock on our shared object, the sampleBufferList, so that we can dequeue a Sample. If the list is empty, Monitor.Wait releases the lock and waits to reacquire it, by which time the list may hold a Sample. If we cannot reacquire the lock within the period defined by timeout, we simply inform MediaElement that we are busy buffering by calling the ReportGetSampleProgress() method.

However, if we make it through, we simply dequeue a Sample from our list and assign it to our temporary variable _sample. We then write our Sample (byte array) into the stream (_stream) and instantiate and initialize a MediaStreamSample, which accepts the media description, the stream, the start position in the stream, the size of the sample in bytes, the timestamp, and an attribute dictionary.

We need to increment _timeStamp, which according to the documentation is the time from the beginning of the media at which the sample should be rendered, expressed in 100-nanosecond increments (ticks). Finally, we report back to MediaElement that we are done by calling the ReportGetSampleCompleted() method.
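
As a quick sanity check on the arithmetic (my numbers, not from the original text): one second is 10,000,000 ticks, so one frame at 30 fps should last about 333,333 ticks. Note, however, that TimeSpan.FromSeconds is only accurate to the nearest millisecond, so the expression in the code actually yields 330,000 ticks (33 ms) per frame before the mysterious multiplication by 2:

// One frame at 30 fps, computed directly in ticks:
long frameTicks = TimeSpan.TicksPerSecond / 30;            // 333,333 ticks

// The expression used in the code rounds to whole milliseconds first:
long roundedTicks = TimeSpan.FromSeconds(1.0 / 30).Ticks;  // 330,000 ticks (33 ms)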

A thread of this kind is created every time MediaElement calls for a new Sample, which means the Samples must have been produced somewhere. So how do we go about generating our samples?

To answer that question, we need to create a new method that starts a thread responsible for retrieving, processing, and buffering our media samples, as presented in the code below. But let us first declare the variables relevant to that method.

// flag to kill this thread
private static bool _done = true;

// Instantiate a background worker thread to process Samples
private BackgroundWorker _worker = new BackgroundWorker();

// Full path to the video file (.avi)
// Supply your own full path - please only avi files
public string _filepath = "Video_File_Full_Path_Here.avi";

// Represents the total number of frames in the video
int numFrames = 0;

// Byte array for a single Sample (RGB format, hence 3 bytes per pixel)
private byte[] RGB_Sample = new byte[_frameHeight * _frameWidth * 3];

// Byte array for a single frame (RGBA format)
private byte[] RGBA_Sample = new byte[_count];

// Set to true to rotate the Sample to its normal view
private bool _rotate = true;

Please change "Video_File_Full_Path_Here.avi" to your own file. It must be an .avi file, and you must have the relevant codec installed on your computer for the media to be decompressed; otherwise, the DLL will fail to extract frames (samples).

Let us go ahead and write the method responsible for generating our samples. It is the first method called after instantiating our derived class; that wiring happens later, in Step 3.

// Method that retrieves a Sample from an AVI Stream
public Boolean startStreamThread()
{
    if (AutomationFactory.IsAvailable)
    {
        _done = false;

        //_worker.WorkerReportsProgress = true;

        _worker.DoWork += (s, ex) =>
        {
            // instantiate a COM Object (included in the zip file)
            // you have to register the COM using the command below:
            // regsvr32 AVIDll.dll
            // ProgID = myAVI.myAVIcls
            dynamic obj;
            obj = AutomationFactory.CreateObject("myAVI.myAVIcls");

            bool success;

            // openAVIFile: 
            success = obj.openAVIFile(_filepath);

            // getStream: 
            success = obj.getStream();

            // videoFPS: 
            // get the fps of the video
            _speed = obj.videoFPS();

            // if not specified, the default is 30 frames per second
            if (_speed == 0)
                _speed = 30;

            int j = 0;
            int RGBByteCount = 3;
            int RGBAByteCount = 4;
            int pixelPos;

            // loop until the user explicitly stops
            while (!_done)
            {

                // getFrameRGBBits: returns a byte array
                // for a Sample of type RGB (3 bytes per pixel)
                // ignore the first parameter 0
                RGB_Sample = obj.getFrameRGBBits(0, _speed);

                j = 0;

                // here we first loop through each row (scanline) from the bottom upwards,
                // where each row is _frameWidth * 3 bytes long
                //
                // The picture will appear correctly
                for (int verticalCount = RGB_Sample.Length - 1; 
                       verticalCount > -1; verticalCount -= _frameWidth * RGBByteCount)
                {
                    // next we loop through each pixel
                    // (3 bytes) of the vertical line (_frameWidth)
                    for (int horizontalCount = 0; horizontalCount 
                              < _frameWidth; horizontalCount += 1)
                    {
                        // Calculate the next pixel position from the original Sample
                        // based on the outer loop, it is calculated from bottom-right
                       pixelPos = verticalCount - (_frameWidth * RGBByteCount) + 
                         (horizontalCount * RGBByteCount) + 1;

                        RGBA_Sample[j] = RGB_Sample[pixelPos];
                        RGBA_Sample[j + 1] = RGB_Sample[pixelPos + 1];
                        RGBA_Sample[j + 2] = RGB_Sample[pixelPos + 2];

                        // Assign 1 byte for the Alpha Channel
                        RGBA_Sample[j + 3] = 0xFF;

                        //jump 4 bytes for the RGBA byte counter
                        j += RGBAByteCount;
                    }
                }

                // Instantiate and initialize a new Sample
                Sample localSample = new Sample();
                // copy the frame so queued Samples don't all share one buffer
                localSample.sampleBuffer = (byte[])RGBA_Sample.Clone();
                localSample.sampleTime = DateTime.Now; // Not used in this sample code

                lock (this)
                {
                    // if the buffer is full, remove one sample
                    if (this.sampleBufferList.Count == numberOfSamplesBuffer)
                    {
                        this.sampleBufferList.Dequeue();
                    }

                    // add a new sample to the buffer
                    this.sampleBufferList.Enqueue(localSample);

                    Monitor.Pulse(this);
                }

            }

            // call our COM object to release any resources and close the AVI file
            obj.closeFrames();

        };

        // start the worker thread
        _worker.RunWorkerAsync();

    }
    else
    {
        return false;
    }

    return true;
}

In our BackgroundWorker _worker's DoWork event handler, we first instantiate our COM object using the CreateObject method of System.Runtime.InteropServices.Automation.AutomationFactory with the ProgID "myAVI.myAVIcls".

The details of this DLL are not part of this article as mentioned earlier - some links are provided above to help with opening AVI files and extracting frames, etc.

You need to call the exposed public function openAVIFile() of the COM object obj by passing the full path to the video file. The function will return a boolean to indicate success or failure.

Then call the getStream() function, which does not actually return a stream: it only opens the stream and returns success or failure. closeStream() does some cleanup in obj.

If you are interested in getting the number of frames in the video, use the getTotalNumberOfFrames() function.

We call obj's videoFPS() function to get the frames per second (FPS) of the video, which we assign to our variable _speed.
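
Since the DLL's source is not part of this article, here is a summary of its surface as inferred purely from the calls made in the code; the signatures below are my guesses from usage, not the actual IDL:

// Inferred surface of myAVI.myAVIcls (hypothetical signatures, from usage only):
// bool   openAVIFile(string fullPath)            - opens the .avi file
// bool   getStream()                             - opens the video stream
// long   videoFPS()                              - frames per second of the video
// long   getTotalNumberOfFrames()                - total frame count
// byte[] getFrameRGBBits(int frame, long speed)  - next frame as raw RGB bytes
// void   closeFrames()                           - releases resources, closes the file
// void   closeStream()                           - cleans up the stream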

We then spin our worker into a continuous loop to ensure our Samples buffer is always filled, until the user explicitly stops playback; at that point the variable _done is set to true and the _worker loop exits.

To get a sample from our video, we call the getFrameRGBBits() function and pass in two parameters. The first does not really do anything; I had intended to use it to request a specific frame. The second parameter tells the object how fast or slow our playback is: the higher the value, the slower the playback. In our case, we pass in the actual FPS of the video, which means playback will run at that rate. Experiment by changing the value you pass for the second parameter.

This function returns a raw byte array representing our sample, uncompressed RGB, i.e., three bytes per pixel. The sample we pass back to the MediaElement stream is also uncompressed but of type RGBA, which is four bytes per pixel. We therefore need to convert from RGB to RGBA by adding an extra byte per pixel to represent the alpha channel.
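
To make the sizes concrete (my arithmetic for the 640x480 frames used in this article, not figures from the original text):

// RGB sample:  640 * 480 * 3 =   921,600 bytes returned by getFrameRGBBits()
// RGBA sample: 640 * 480 * 4 = 1,228,800 bytes (this is _count, declared earlier)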

If you directly assign the first byte of RGB_Sample to the first byte of RGBA_Sample, the image turns out upside-down, at least that is what I get. To flip the image to its true orientation, we start copying from the last row of RGB_Sample into the first row of RGBA_Sample, and for every fourth byte of RGBA_Sample we set the alpha channel to 0xFF.

// assume N pixels in the original RGB Sample (3 bytes each)
// and the same N pixels in the RGBA Sample (4 bytes each);
// rows are copied from the bottom of the source to the top of the destination
RGBA_Sample[pixel 1].RGB = RGB_Sample[first pixel of last row],   .A = 0xFF
RGBA_Sample[pixel 2].RGB = RGB_Sample[second pixel of last row],  .A = 0xFF
...
RGBA_Sample[pixel N].RGB = RGB_Sample[last pixel of first row],   .A = 0xFF

The sample code in the download includes the functionality for the flip checkbox added to the UI.
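
The flipped() method called from the UI in Step 3 is not listed in this text either; given the _rotate flag declared earlier, it presumably just toggles that flag, along these lines (a sketch, the download contains the real version):

// Sketch: toggle the flag that controls the row-reversal loop above
public void flipped(bool flip)
{
    _rotate = flip;
}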

Step 3

Now, let's write some code that hooks up our UI to our custom derived class to complete our objective.

In our MainPage.xaml code-behind, we first declare a field for our custom derived MediaStreamSource class. To bring everything into action, we initialize it and call its public method startStreamThread(), which opens our video and starts buffering, so that when our MediaElement requests the first sample (and each subsequent one), our derived object is ready to satisfy the request. Finally, we set our custom MediaStreamSource object as the media source of the MediaElement and, voila, our application is ready to render .avi videos.

// Our derived MediaStreamSource class
Classes.MyDerivedMediaStreamSource _mediaSource;

// flag indicating whether a media stream is currently open
private bool mediaOpen = false;

public MainPage()
{
    InitializeComponent();
    Loaded += new RoutedEventHandler(MainPage_Loaded);
}

void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    OpenStream.Click += new RoutedEventHandler((s, ex) =>
    {
        // initialize our media stream object
        _mediaSource = new Classes.MyDerivedMediaStreamSource();

        if (_mediaSource.startStreamThread())
        {
            // set flag to true - media has been opened
            mediaOpen = true;

            // set the source of our media stream to the MediaElement
            mediaPlayer.SetSource(_mediaSource);
        }
    });

    CloseStream.Click += new RoutedEventHandler((s, ex) =>
    {
        if (mediaOpen)
        {
            mediaPlayer.Stop();
            _mediaSource.closeStream();
            _mediaSource = null;
            mediaOpen = false;
        }
    });

    chkFlip.Checked += new RoutedEventHandler((s, ex) =>
    {
        if (_mediaSource != null)
            _mediaSource.flipped(true);
    });

    chkFlip.Unchecked += new RoutedEventHandler((s, ex) =>
    {
        if (_mediaSource != null)
            _mediaSource.flipped(false);
    });
}
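
The closeStream() method on the derived class is also not shown above. Given the _done flag checked at the top of the worker loop, a minimal sketch (the download has the real implementation) would be:

// Sketch: signal the worker loop to exit; the loop then calls
// obj.closeFrames() to close the AVI file and release resources
public void closeStream()
{
    _done = true;
}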

With the advent of Silverlight 5, we anticipate even more power and control in the developer's hands, including the ability to call unmanaged code using P/Invoke from trusted Silverlight applications.

I hope this simple article was helpful to all.

Remember to register AVIDll.dll using regsvr32 before running the application.
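
From a command prompt (elevated on Windows Vista/7):

regsvr32 AVIDll.dll

On 64-bit Windows, use the 32-bit copy of regsvr32 in %windir%\SysWOW64, since a VB6 ActiveX DLL is 32-bit.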

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
