
DirectShow Source Filter for Painting on DC

12 Sep 2004
How to paint on a DC in a DirectShow source filter

Sample Image - snake.gif

Introduction

Well, creating DirectShow filters can be a real pain at times, especially a source filter. When I needed a source filter to generate my own images, I looked at the Ball source filter in the DirectShow documentation, but it was of no use to me because I needed to generate the image with my own painting. So, here it is.

This filter lets you create your own DC and do all the painting on that DC. Everything painted on the DC is delivered to the downstream filters until the renderer filter finally renders the RGB data.

The filter is based on CSourceStream, which creates the output pin and delivers the data, and CSource, which creates the source filter itself.
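For orientation, here is a minimal skeleton of such a filter. Only CSnakeStream and its members (m_dcPaint, m_pPaintBuffer, m_bmpInfo) appear in the article's listings; the CSnakeSource class name and the CLSID are illustrative assumptions, so treat this as a sketch rather than the article's exact code.

// Minimal sketch of a CSource/CSourceStream pair, assuming the DirectShow
// base classes (streams.h). CSnakeSource and CLSID_SnakeSource are
// illustrative names only; generate and register your own GUID.
#include <streams.h>

// Placeholder CLSID - illustrative only, replace with your own GUID
static const GUID CLSID_SnakeSource =
    { 0xb6a4f2d0, 0x1111, 0x2222, { 0x33, 0x33, 0x44, 0x44, 0x55, 0x55, 0x66, 0x66 } };

class CSnakeStream : public CSourceStream
{
public:
    CSnakeStream(HRESULT *phr, CSource *pFilter)
        : CSourceStream(NAME("Snake Stream"), phr, pFilter, L"Out") {}

    // Negotiate the RGB media type (shown later in the article)
    HRESULT GetMediaType(CMediaType *pMediaType);
    // Agree on buffer size with the downstream allocator
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pRequest);
    // Create the memory DC / DIB section when the worker thread starts
    HRESULT OnThreadCreate(void);
    // Paint on the DC and copy the bits into the media sample
    HRESULT FillBuffer(IMediaSample *pSample);

private:
    HDC         m_dcPaint;       // memory DC we paint on
    void       *m_pPaintBuffer;  // pixel bits of the DIB section
    BITMAPINFO  m_bmpInfo;       // format chosen in GetMediaType
    // ... snake-drawing state (positions, block sizes, counters)
};

class CSnakeSource : public CSource
{
public:
    static CUnknown * WINAPI CreateInstance(LPUNKNOWN pUnk, HRESULT *phr);

private:
    CSnakeSource(LPUNKNOWN pUnk, HRESULT *phr)
        : CSource(NAME("Snake Source"), pUnk, CLSID_SnakeSource)
    {
        // The source owns exactly one output pin; the pin adds itself
        // to the filter in the CSourceStream constructor
        new CSnakeStream(phr, this);
    }
};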

The Flow

I am keeping this article really short, as anyone who needs to create such a filter will already know most of the DirectShow architecture and its filters.

The only new things in this filter are creating the RGB media type, creating the DC when the stream thread is created, and filling each media sample with the buffer taken from the DC.

Create Media Type

This source filter generates 24-bit RGB video. To create the media type, GetMediaType of CSourceStream is overridden. The filter graph calls this function while connecting the filters so that the connecting pins can agree on a media type (this is one of several steps in the connection, alongside agreeing on allocators and the final acceptance of the connection by the pins). The function receives a CMediaType pointer on the source filter's output pin, supplied during negotiation with the input pin of the downstream filter, and the output pin fills in the details of the media type it is going to generate.

//
// GetMediaType
//
HRESULT CSnakeStream::GetMediaType(CMediaType *pMediaType)
{
    CAutoLock lock(m_pFilter->pStateLock());

    ZeroMemory(pMediaType, sizeof(CMediaType));

    // Allocate and fill a VIDEOINFO format block describing 320x240, 24-bit RGB
    VIDEOINFO *pvi =
        (VIDEOINFO *)pMediaType->AllocFormatBuffer(sizeof(VIDEOINFO));
    if (NULL == pvi)
        return E_OUTOFMEMORY;

    ZeroMemory(pvi, sizeof(VIDEOINFO));

    pvi->bmiHeader.biCompression    = BI_RGB;
    pvi->bmiHeader.biBitCount       = 24;
    pvi->bmiHeader.biSize           = sizeof(BITMAPINFOHEADER);
    pvi->bmiHeader.biWidth          = 320;
    pvi->bmiHeader.biHeight         = 240;
    pvi->bmiHeader.biPlanes         = 1;
    pvi->bmiHeader.biSizeImage      = GetBitmapSize(&pvi->bmiHeader);
    pvi->bmiHeader.biClrImportant   = 0;

    SetRectEmpty(&(pvi->rcSource)); // we want the whole image area rendered
    SetRectEmpty(&(pvi->rcTarget)); // no particular destination rectangle

    pMediaType->SetType(&MEDIATYPE_Video);
    pMediaType->SetFormatType(&FORMAT_VideoInfo);
    pMediaType->SetTemporalCompression(FALSE);

    // Work out the subtype (RGB24) from the bitmap header
    const GUID SubTypeGUID = GetBitmapSubtype(&pvi->bmiHeader);
    pMediaType->SetSubtype(&SubTypeGUID);
    pMediaType->SetSampleSize(pvi->bmiHeader.biSizeImage);

    // Remember the format so the DIB section can be created with it later
    m_bmpInfo.bmiHeader = pvi->bmiHeader;

    return S_OK;
}

The Stream Thread

CSourceStream's OnThreadCreate() is called when the filter graph goes into the pause state: at that point the base class creates a worker thread to generate the data, and that thread later calls FillBuffer to fill each media sample.

We override this function to do any initialization. In this sample filter, I draw a snake (of sorts) which crawls from left to right across the middle of the screen. In this function, I initialize the height and width of the snake and the number of blocks in the snake's body.

The Painting Magic

Well, all the magic of painting on a DC is actually nothing but creating a DIB section and selecting it into a memory DC for painting. From this DIB section, we pick up the pixel data and fill the media sample. All of this is set up in the OnThreadCreate() override of CSourceStream. The whole magic is as follows:

//
// OnThreadCreate
//
HRESULT CSnakeStream::OnThreadCreate(void)
{
    // Create a DIB section matching the media type we agreed on;
    // m_pPaintBuffer will point at its pixel bits.
    HBITMAP hDibSection = CreateDIBSection(NULL, 
       (BITMAPINFO *) &m_bmpInfo, DIB_RGB_COLORS, 
       &m_pPaintBuffer, NULL, 0);

    // Create a memory DC compatible with the screen and select the DIB into it
    HDC hDC = GetDC(NULL);
    m_dcPaint = CreateCompatibleDC(hDC);
    SetMapMode(m_dcPaint, GetMapMode(hDC));
    ReleaseDC(NULL, hDC);   // the screen DC is no longer needed

    // Keep the previously selected bitmap so it could be restored later
    HGDIOBJ OldObject = SelectObject(m_dcPaint, hDibSection);

    // Snake geometry: block size, number of blocks, and spacing
    m_nScoreBoardHeight = m_bmpInfo.bmiHeader.biHeight/8;
    m_nSnakeBlockHeight = 4;
    m_nSnakeBlockWidth  = 6;
    m_nNumberSnakeBlocks = 6;
    m_nSpaceBetweenBlock = 1;

    // Start from the origin; the snake is centred when drawn in FillBuffer
    m_nLastX = 0;
    m_nLastY = 0;

    return CSourceStream::OnThreadCreate();
}
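One thing the code above does not do is clean up after itself: the DIB section and the memory DC are never released. A natural place for that would be an OnThreadDestroy override. The following is only a hedged sketch of mine, and it assumes the HBITMAP and the old GDI object are kept in members (say m_hDibSection and m_hOldObject) instead of the local variables the article's code uses.

//
// OnThreadDestroy - cleanup sketch, not part of the original sample.
// Assumes OnThreadCreate stored the DIB section and the previously
// selected bitmap in m_hDibSection and m_hOldObject.
//
HRESULT CSnakeStream::OnThreadDestroy(void)
{
    if (m_dcPaint)
    {
        // Put the original bitmap back before deleting our objects
        SelectObject(m_dcPaint, m_hOldObject);
        DeleteDC(m_dcPaint);
        m_dcPaint = NULL;
    }

    if (m_hDibSection)
    {
        DeleteObject(m_hDibSection);   // frees the DIB section and its bits
        m_hDibSection = NULL;
        m_pPaintBuffer = NULL;         // the bits were owned by the DIB section
    }

    return CSourceStream::OnThreadDestroy();
}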

Send the Picture Downstream

This is quite straightforward. Once we have created an off-screen DC and selected a DIB section into it, all we need to do is whatever painting we want on the DC, and then copy the DIB section buffer into the media sample buffer.

All of this is done in the FillBuffer override of CSourceStream. FillBuffer is called in a loop by the worker thread created when the pin goes active, once for every media sample that gets pushed downstream. The filter is written for a rate of 26 frames per second.

The code for filling the media sample looks like this:

//
// FillBuffer
//
HRESULT CSnakeStream::FillBuffer(IMediaSample *pSample)
{
    CAutoLock lock(m_pFilter->pStateLock());

    HRESULT hr;
    BYTE *pBuffer;
    long lSize;

    hr = pSample->GetPointer(&pBuffer);
    if (SUCCEEDED(hr))
    {
        lSize = pSample->GetSize();

        // Advance and redraw the snake only every 150 frames
        if( nFrameRate++ > 150 )
        {
            // Clear the whole image to black
            PatBlt(m_dcPaint,0,0,m_bmpInfo.bmiHeader.biWidth, 
                      m_bmpInfo.bmiHeader.biHeight,BLACKNESS);

            // Draw the snake blocks from tail to head
            RECT rcSnake;
            for(int n=m_nNumberSnakeBlocks;n>=0;n--)
            {
                rcSnake.left = m_nLastX + (m_bmpInfo.bmiHeader.biWidth/2) 
                                                 - (m_nSnakeBlockWidth)*n;
                rcSnake.top = m_nLastY + (m_bmpInfo.bmiHeader.biHeight/2);
                rcSnake.right = rcSnake.left + m_nSnakeBlockWidth;
                rcSnake.bottom = rcSnake.top + m_nSnakeBlockHeight;

                // Create the brush once per block and delete it again,
                // otherwise a GDI object leaks on every call
                HBRUSH hBrush = CreateSolidBrush(RGB(0,255,0));
                FillRect(m_dcPaint,&rcSnake,hBrush);
                DeleteObject(hBrush);

                m_nLastX += m_nSnakeBlockWidth;

                // Wrap around once the snake would run off the right edge
                if( m_nLastX+(m_nSnakeBlockWidth+m_nSpaceBetweenBlock)
                   *m_nNumberSnakeBlocks > m_bmpInfo.bmiHeader.biWidth )
                {
                    m_nLastX = -m_bmpInfo.bmiHeader.biWidth/2;
                }

                TRACE("%d %d %d %d\n",rcSnake.left,rcSnake.top,
                                 rcSnake.right,rcSnake.bottom);
            }
            nFrameRate=0;
        }

        m_llFrameCount++;

        // Frame times, used here only for the on-screen timecode text
        CRefTime        m_rtStart;   // source will start here
        CRefTime        m_rtStop;    // source will stop here

        m_rtStart = m_llFrameCount*1000000/26;
        m_rtStop = (m_llFrameCount+1)*1000000/26;

        // Draw the current frame's timecode on a black strip at the bottom
        TCHAR szText[256];
        wsprintf( szText, TEXT("%s"), TimeToTimecode(m_rtStart));

        PatBlt(m_dcPaint,0,m_bmpInfo.bmiHeader.biHeight-25, 
               m_bmpInfo.bmiHeader.biWidth, 
               m_bmpInfo.bmiHeader.biHeight,BLACKNESS);

        SetBkMode(m_dcPaint,TRANSPARENT);
        SetTextColor(m_dcPaint,RGB(255,255,255));
        if( !TextOut( m_dcPaint, m_bmpInfo.bmiHeader.biWidth/2-50, 
                      m_bmpInfo.bmiHeader.biHeight-20, 
                      szText,_tcslen( szText ) ) )
            return E_FAIL;

        // Copy the DIB section's pixels into the output media sample
        CopyMemory(pBuffer,m_pPaintBuffer,lSize);
    }

    return S_OK;
}

In the above code, the snake is redrawn and advanced only once every 150 frames, while the timecode display is updated on every frame. CopyMemory copies the buffer from the DIB section into the output media sample.
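Note that a CSourceStream-derived pin must also override DecideBufferSize (not shown in the article) so that the allocated media samples are big enough to hold one RGB frame. Here is a minimal sketch of how that could look, assuming the m_bmpInfo member filled in GetMediaType above; it is not the article's code.

//
// DecideBufferSize - sketch only; the article does not show this override.
// Ask the allocator for one buffer big enough for a single RGB frame.
//
HRESULT CSnakeStream::DecideBufferSize(IMemAllocator *pAlloc,
                                       ALLOCATOR_PROPERTIES *pRequest)
{
    CAutoLock lock(m_pFilter->pStateLock());

    pRequest->cBuffers = 1;
    pRequest->cbBuffer = m_bmpInfo.bmiHeader.biSizeImage;

    ALLOCATOR_PROPERTIES Actual;
    HRESULT hr = pAlloc->SetProperties(pRequest, &Actual);
    if (FAILED(hr))
        return hr;

    // The allocator may give us less than we asked for; treat that as failure
    if (Actual.cbBuffer < pRequest->cbBuffer)
        return E_FAIL;

    return S_OK;
}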

Be Careful

Well, please go through the code carefully before using it or taking any inspiration from it. There is not much code in the sample, and I have not put in many checks or much error handling.

How to Use It

This is one of the good things about DirectShow: there is no pain in using the filter (provided it is working correctly). Just add the snake source filter from the list of DirectShow filters in GraphEdit (or your own filter graph), render its output pin, and it's done.
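If you prefer to build the graph in code instead of GraphEdit, the usual pattern is to create the filter by its CLSID, add it to the graph, and let the graph render its output pin. The following is only a sketch of that pattern; CLSID_SnakeSource stands for whatever CLSID you registered the filter with, and COM is assumed to be initialized already.

// Usage sketch: build a graph around the snake source programmatically.
#include <dshow.h>

extern const GUID CLSID_SnakeSource;   // the CLSID the filter was registered with

HRESULT RenderSnakeSource()
{
    IGraphBuilder *pGraph = NULL;
    IBaseFilter   *pSnake = NULL;
    IEnumPins     *pEnum  = NULL;
    IPin          *pPin   = NULL;

    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
    if (FAILED(hr)) return hr;

    // Create the registered source filter and add it to the graph
    hr = CoCreateInstance(CLSID_SnakeSource, NULL, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, (void **)&pSnake);
    if (SUCCEEDED(hr))
        hr = pGraph->AddFilter(pSnake, L"Snake Source");

    // Find the filter's only pin (its output) and let the graph render it
    if (SUCCEEDED(hr) && SUCCEEDED(pSnake->EnumPins(&pEnum)))
    {
        if (pEnum->Next(1, &pPin, NULL) == S_OK)
        {
            hr = pGraph->Render(pPin);
            pPin->Release();
        }
        pEnum->Release();
    }

    // Running the graph via IMediaControl is left out for brevity

    if (pSnake) pSnake->Release();
    pGraph->Release();
    return hr;
}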

More to Do in It

Well, as you can see above, it looks very simple to create the filter, but it is very basic. In a practical scenario, you may need to support seeking on the filter. You may also need to generate audio samples, and then you may need to support multiple time formats for seeking and keep the generated audio and video samples in sync.

The newer Ball sample in the DirectShow documentation supports IMediaSeeking, but even that does not solve all the problems of multiple time formats and audio/video synchronization.

Apart from all this, most source filters read from a file and can be push based or pull based, i.e., implementing IAsyncReader (for the pull model) or IFileSourceFilter. That again can be another complicated implementation.

I will try to post another article on how to deal with the above-mentioned issues. Meanwhile, enjoy coding... cheers!

History

  • 12th September, 2004: Initial version

License

This article has no explicit license attached to it, but may contain usage terms in the article text or the download files themselves. If in doubt, please contact the author via the discussion board below. A list of licenses authors might use can be found here.
