
Play Wave Files with DirectSound and Display its Spectrum in Real Time - Part 2

An article to show how to play a Wave file with DirectSound and display its spectrum in real time.

Introduction

This article is an improved edition of my earlier article: Play Wave file with DirectSound and Display its Spectrum in Real Time. It references Sun’s JDK source code, YoYoPlayer, NewAC, KJ DSP, and so on, and some of the code comes from those sources.

The article includes four parts:

  1. Multi-threading
  2. Fast Fourier Transforms (FFT)
  3. A DirectSound wrapper (taken from the JDK)
  4. Using the Win32 GDI API for spectrum drawing

The latest version in this series can be found here: Play Audio Files with DirectSound and Display its Spectrum in Real Time - Part 3.

Multi-Threading

In this article, I use two threads: one manages sound playback, and the other manages the audio data for sample analysis. The class CThread is the base class from which both thread classes derive. Another important point is that both thread classes are declared as friends of CBasicPlayer; with this declaration, the threads can access the private and protected members of CBasicPlayer (a sketch of that declaration follows the listing below). First, let us look at the playback thread. It calls DAUDIO_GetDirectAudioDeviceCount to enumerate the available output devices and cache them in memory, then calls DAUDIO_Open to open the first (default) output device, and DAUDIO_Start to start output. See the code below:

C++
void CPlayThread::Execute()
{
    if(m_Player == NULL)
        return;

    if(m_Stop == TRUE)
        return;

    const DWORD buffersize = 16000;
    SetFilePointer(m_Player->GetFileHandle(), 44, NULL, FILE_BEGIN); //skip the 44-byte WAV header
    INT32 count = DAUDIO_GetDirectAudioDeviceCount();

    // wait time = 1/4 of buffer time
    DWORD waitTime = (DWORD)((m_Player->m_BufferSize*1000.0F)/
                             (m_Player->m_SampleRate*m_Player->m_FrameSize));
    waitTime = (DWORD)(waitTime / 4);
    if(waitTime<10) waitTime = 1;
    if(waitTime>1000) waitTime = 1000;

    m_Player->m_info = (DS_Info*)DAUDIO_Open(0, 0 , TRUE, DAUDIO_PCM,
                        m_Player->m_SampleRate, m_Player->m_BitPerSample,
                        m_Player->m_FrameSize, m_Player->m_Channels, TRUE,
                        FALSE, m_Player->m_BufferSize);
    m_Player->m_bytePosition = 0;

    if(DAUDIO_Start((void*)m_Player->m_info, TRUE))
    {
        m_Player->m_SpectrumAnalyserThread->Resume();
        printf("start play ...\n");
        char buffer[buffersize];
        while(!m_Stop)
        {
            DWORD dwRead;
            if(ReadFile(m_Player->GetFileHandle(), (void*)buffer,
                        buffersize, &dwRead, NULL) == FALSE)
                break;

            if(dwRead <= 0)
                break;

            DWORD len = dwRead;
            DWORD offset = 0;
            DWORD written = 0;

            /*
            * The whole chunk may not be written to the device in a single
            * call, so keep writing until all of it has been consumed.
            */
            while(TRUE)
            {
                m_cs->Enter();
                int thisWritten = DAUDIO_Write((void*)m_Player->m_info,
                                                buffer+offset, len);
                if(thisWritten < 0)
                {
                    //leave the critical section before bailing out,
                    //otherwise the lock would never be released
                    m_cs->Leave();
                    break;
                }
                m_Player->m_bytePosition += thisWritten;
                m_cs->Leave();

                len -= thisWritten;
                written += thisWritten;
                if(len > 0)
                {
                    offset += thisWritten;
                    m_cs->Enter();
                    Sleep(waitTime);
                    m_cs->Leave();
                }
                else break;
            }

            //copy audio data to audio buffer
            //for audio data synchronize
            DWORD pLength = dwRead;
            jbyte* pAudioDataBuffer = pSpectrum->GetAudioDataBuffer();
            if(pAudioDataBuffer != NULL)
            {
                int wOverrun = 0;
                int iPosition = pSpectrum->GetPosition();
                DWORD dwAudioDataBufferLength = pSpectrum->GetAudioDataBufferLength();
                if (iPosition + pLength > (int)(dwAudioDataBufferLength - 1)) {
                    wOverrun = (iPosition + pLength) - dwAudioDataBufferLength;
                    pLength = dwAudioDataBufferLength - iPosition;
                }

                memcpy(pAudioDataBuffer + iPosition, buffer, pLength);
                if (wOverrun > 0) {
                    memcpy(pAudioDataBuffer, buffer + pLength, wOverrun);
                    pSpectrum->SetPosition(wOverrun);
                } else {
                    pSpectrum->SetPosition(iPosition + pLength);
                }
            }
        }

        m_Player->m_SpectrumAnalyserThread->Stop();
        DAUDIO_Stop((void*)m_Player->m_info, TRUE);
        DAUDIO_Close((void*)m_Player->m_info, TRUE);
        m_Player->m_bytePosition = 0;

        printf("stop play.\n");
    }

    m_Player->m_info = NULL;
}
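
As mentioned above, CPlayThread and the analyser thread are declared as friends of CBasicPlayer so that they can reach its private members. The sketch below shows what that relationship might look like; the member types and the exact member list are assumptions based on how the members are used in CPlayThread::Execute, so check BasicPlayer.h for the real declaration.

C++
// Sketch only - member types are assumed from their usage above.
class CBasicPlayer
{
    // friend declarations let the threads touch private members directly
    friend class CPlayThread;
    friend class CSpectrumAnalyserThread;

private:
    DS_Info* m_info;            // device info returned by DAUDIO_Open
    DWORD    m_bytePosition;    // bytes written so far (for position/sync)
    DWORD    m_SampleRate;      // taken from the WAV header
    DWORD    m_BitPerSample;
    DWORD    m_FrameSize;
    DWORD    m_Channels;
    DWORD    m_BufferSize;
    CSpectrumAnalyserThread* m_SpectrumAnalyserThread;
    // ...
};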

In the playback loop, we first read audio data from the file into a buffer, write it to the output device, and then copy it into a shared buffer so the spectrum analyser stays synchronized with playback. Now, let us look at the sample-analysis thread. Its source code comes from YoYoPlayer, specifically AudioChart.java and KJDigitalSignalProcessingAudioDataConsumer.java; the only work required was translating the Java code to C++. The most important supporting piece is the class CSystem, which returns the system time in nanoseconds for accurate position calculations. Its source code comes from Sun’s JDK, slightly modified to fit this application. The class CSystem is defined below:

C++
typedef __int64             jlong;
typedef unsigned int        juint;
typedef unsigned __int64    julong;
typedef long                jint;

#define CONST64(x)          (x ## LL)
#define NANOS_PER_SEC       CONST64(1000000000)
#define NANOS_PER_MILLISEC  1000000

jlong as_long(LARGE_INTEGER x);
void set_high(jlong* value, jint high);
void set_low(jlong* value, jint low);

class CSystem
{
private:
    static jlong frequency;
    static int ready;

    static void init()
    {
        LARGE_INTEGER liFrequency = {0};
        QueryPerformanceFrequency(&liFrequency);
        frequency = as_long(liFrequency);
        ready = 1;
    }
public:
    static jlong nanoTime()
    {
        if(ready != 1)
            init();

        LARGE_INTEGER liCounter = {0};
        QueryPerformanceCounter(&liCounter);
        double current = as_long(liCounter);
        double freq = frequency;
        return (jlong)((current / freq) * NANOS_PER_SEC);
    }
};

Simply call CSystem::nanoTime() to get the current system time in nanoseconds. Because it is based on QueryPerformanceCounter, it is very accurate.
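
For example, a trivial timing sketch (Sleep stands in for whatever work you want to measure):

C++
// Measure an elapsed interval with CSystem::nanoTime().
jlong start = CSystem::nanoTime();

Sleep(250);    // the work being timed

jlong elapsedNanos  = CSystem::nanoTime() - start;
jlong elapsedMillis = elapsedNanos / NANOS_PER_MILLISEC;
printf("elapsed: %I64d ms\n", elapsedMillis);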

Fast Fourier Transform – FFT

The FFT plays an important role in digital signal processing (DSP). The class CFastFourierTransform implements it; its source code comes from KJFFT.java, and again the only work required was translating the code to C++. It is easy to use. Go here for a detailed treatment of the FFT algorithm.
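
A minimal usage sketch is shown below. The Calculate call matches the one used in CSpectrumAnalyser::Process later in this article; the constructor argument is an assumption, so check the class header for the actual signature.

C++
// Hypothetical usage sketch of CFastFourierTransform.
const int sampleSize = 1024;        // number of mono samples to transform
float samples[sampleSize] = {0};    // normalized samples in the range [-1, 1]

CFastFourierTransform* pFFT = new CFastFourierTransform(sampleSize);
float* pSpectrumData = pFFT->Calculate(samples, sampleSize);

// pSpectrumData[i] holds the magnitude of frequency bin i;
// these values feed the spectrum drawing code below.
delete pFFT;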

DirectSound Wrapper

The wrapper’s entire source code comes from Sun’s JDK. Some time ago, I found an open-source project, YoYoPlayer, which can play sound files such as MP3, WAV, OGG, and so on. Digging into its source code, I found that it calls native (Win32) code to play sound, and that native code lives in the JDK sources. The wrapper consists of a set of functions that drive sound playback through DirectSound, so this application requires DirectX 6 or higher.
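
Stripped of file I/O and synchronization, the wrapper’s call sequence as used in CPlayThread::Execute looks roughly like this (the parameter values are illustrative; the real ones come from the WAV header):

C++
// Condensed sketch of the DAUDIO_* call sequence (values are illustrative).
INT32 deviceCount = DAUDIO_GetDirectAudioDeviceCount();  // enumerate devices

DS_Info* info = (DS_Info*)DAUDIO_Open(0, 0, TRUE, DAUDIO_PCM,
                    44100,     // sample rate
                    16,        // bits per sample
                    4,         // frame size: 2 channels * 2 bytes
                    2,         // channels
                    TRUE, FALSE,
                    16000);    // DirectSound buffer size in bytes

if(DAUDIO_Start((void*)info, TRUE))
{
    char  chunk[16000];
    DWORD bytesRead = 0;       // filled by ReadFile in the real code

    // DAUDIO_Write returns how many bytes the device actually accepted
    int written = DAUDIO_Write((void*)info, chunk, bytesRead);

    DAUDIO_Stop((void*)info, TRUE);
}
DAUDIO_Close((void*)info, TRUE);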

Draw a Spectrum with the Win32 GDI API

The drawing code partly comes from YoYoPlayer. See the source code below.

The first step is to run the sample data through the FFT and then post-process the result:

C++
void CSpectrumAnalyser::Process(float pFrameRateRatioHint)
{
	if(IsIconic(m_Player->m_hWnd) == TRUE)
		return;

	for (int a = 0; a < m_SampleSize; a++) {
		m_Left[a] = (m_Left[a] + m_Right[a]) / 2.0f;
	}

	float c = 0;
	float pFrrh = pFrameRateRatioHint;
	float* wFFT = m_FFT->Calculate(m_Left, m_SampleSize);
	float wSadfrr = m_saDecay * pFrrh;
	float wBw = ((float) m_width / (float) m_saBands);

	RECT rect;
	rect.left = 0;
	rect.top = 0;
	rect.right = rect.left + m_winwidth;
	rect.bottom = rect.top + m_winheight;
	FillRect(m_hdcMem, &rect, m_hbrush);

	for (int a = 0,  bd = 0; bd < m_saBands; a += (INT)m_saMultiplier, bd++) {
		float wFs = 0;
		for (int b = 0; b < (INT)m_saMultiplier; b++) {
			wFs += wFFT[a + b];
		}

		wFs = (wFs * (float) log(bd + 2.0F));

		if(wFs <= 0.01F)
			wFs *= 10.0F;
		else
			wFs *= PI; //scale up by PI; without this the
				   //bars display abnormally - why?

		if (wFs > 1.0f) {
			wFs = 1.0f;
		}

		if (wFs >= (m_oldFFT[a] - wSadfrr)) {
			m_oldFFT[a] = wFs;
		} else {
			m_oldFFT[a] -= wSadfrr;
			if (m_oldFFT[a] < 0) {
				m_oldFFT[a] = 0;
			}
			wFs = m_oldFFT[a];
		}

		drawSpectrumAnalyserBar(&rect, (int) c, m_height, 
				(int) wBw - 1, (int) (wFs * m_height), bd);
		c += wBw;
	}

	BitBlt(m_hdcScreen, 2, 41, m_winwidth, m_winheight, m_hdcMem, 0, 0, SRCCOPY);
}

The next step is to draw each spectrum bar. A gradient is pre-rendered into a memory bitmap when CSpectrumAnalyser is instantiated; here, BitBlt copies the matching slice of that gradient bitmap onto the back-buffer memory bitmap.

C++
void CSpectrumAnalyser::drawSpectrumAnalyserBar(RECT* pRect, int pX,
                   int pY, int pWidth, int pHeight, int band)
{
    /* draw gradient bar */
    BitBlt(m_hdcMem, pX, pY-pHeight, pWidth, pHeight, 
			m_hdcMem1, pX, pY-pHeight, SRCCOPY);

    if (m_peaksEnabled == TRUE) {
        if (pHeight > m_peaks[band]) {
            m_peaks[band] = pHeight;
            m_peaksDelay[band] = m_peakDelay;
        } else {
            m_peaksDelay[band]--;
            if (m_peaksDelay[band] < 0) {
                m_peaks[band]--;
            }
            if (m_peaks[band] < 0) {
                m_peaks[band] = 0;
            }
        }

        RECT rect = {0};
        rect.left = pX;
        rect.right = rect.left + pWidth;
        rect.top = pY - m_peaks[band];
        rect.bottom = rect.top + 1;
        FillRect(m_hdcMem, &rect, m_hbrush1);
    }
}

To speed the spectrum display up or slow it down, change DEFAULT_FPS, defined in BasicPlayer.h. To increase or decrease the number of spectrum bars, change DEFAULT_SPECTRUM_ANALYSER_BAND_COUNT, also defined in BasicPlayer.h. Many other parameters are waiting for you to discover.
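
For example, the two defines might look something like this (the values shown are illustrative only; use the values shipped in BasicPlayer.h as your starting point):

C++
// Illustrative values only - check BasicPlayer.h for the project's defaults.
#define DEFAULT_FPS                            30   // spectrum redraws per second
#define DEFAULT_SPECTRUM_ANALYSER_BAND_COUNT   20   // number of bars drawn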

Existing Problems

An existing problem with the code in this article is that some frequencies are not processed. The article Multimedia PeakMeter control shows how to process only the frequencies you care about by using a set of pre-defined frequencies.

Modifications

  • Added a CSpectrumAnalyser class and moved some code out of CBasicPlayer into it
  • Changed the audio buffer's data type from BYTE (unsigned char) to signed char (jbyte); the spectrum now displays correctly
  • Added a gradient bar to the spectrum display

History

  • 2008-12-01: First version released on The Code Project
  • 2008-12-03: Fixed some bugs

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)