Introduction
This project uses DirectSound to play wave files. With a single static buffer, loading the whole sound into the buffer takes a lot of time and CPU cycles. Streaming buffers avoid this: data is copied into the buffer in blocks while earlier blocks are still playing, and each block is refilled from time to time. This makes streaming a much better way to play large sound files.
Using the code
This project uses DirectSound in a simple way, and most of the wave-file handling is done in the wavefile class. Its most important feature is that it uses streaming buffers. If you want to play a large wave file, loading it completely into the buffer would take a long time to initialize. A better way is to load it step by step into a circular buffer at regular intervals. I have used the mmio functions declared in mmsystem.h. The class parses the wave header in a simple way: it finds the RIFF chunk by setting its type to mmioFOURCC('W','A','V','E') and descends into it to find the fmt chunk. The wave format is read from this fmt chunk and stored in a WAVEFORMATEX structure. After this I descend into the data chunk. The data chunk is not read in one go; this is where all the importance comes in. A ServiceBuffer function keeps the streaming buffer full most of the time. It is called by the timer procedure through the AudioStream member function TimerCallback. The streaming buffer can be of any size, but it must be serviced quite a number of times per full buffer length while playing. The main aim is to keep this buffer as full as possible.
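To make the header-parsing step concrete, here is a minimal, portable sketch of what the mmio calls accomplish: locate the RIFF/WAVE signature, walk the chunk list, and read the format fields out of the fmt chunk. This is an illustration with my own names and plain byte reads, not the project's mmio-based code (which should be preferred on Windows).

```cpp
#include <cstdint>
#include <cstring>

// Fields read from the fmt chunk (mirrors the start of WAVEFORMATEX).
struct WaveFormat {
    uint16_t formatTag;       // 1 = PCM
    uint16_t channels;
    uint32_t samplesPerSec;
    uint32_t avgBytesPerSec;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
};

// RIFF files are little-endian; read multi-byte values portably.
static uint32_t rd32(const uint8_t* p) {
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint16_t rd16(const uint8_t* p) {
    return (uint16_t)(p[0] | (p[1] << 8));
}

// Walks the chunks inside the RIFF body until the fmt chunk is found --
// the same "descend into the fmt chunk" step that mmioDescend performs.
bool ParseWaveHeader(const uint8_t* data, size_t size, WaveFormat* out) {
    if (size < 12 || memcmp(data, "RIFF", 4) != 0 ||
        memcmp(data + 8, "WAVE", 4) != 0)
        return false;
    size_t pos = 12;                       // first chunk after "RIFF....WAVE"
    while (pos + 8 <= size) {
        uint32_t chunkSize = rd32(data + pos + 4);
        if (memcmp(data + pos, "fmt ", 4) == 0 && chunkSize >= 16) {
            const uint8_t* f = data + pos + 8;
            out->formatTag      = rd16(f + 0);
            out->channels       = rd16(f + 2);
            out->samplesPerSec  = rd32(f + 4);
            out->avgBytesPerSec = rd32(f + 8);
            out->blockAlign     = rd16(f + 12);
            out->bitsPerSample  = rd16(f + 14);
            return true;
        }
        pos += 8 + chunkSize + (chunkSize & 1);   // chunks are word-aligned
    }
    return false;
}
```

After this succeeds, the reader is positioned to look for the data chunk in the same loop and stream its contents block by block.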
class Timer
{
public:
    Timer ();
    ~Timer ();
    BOOL Create (UINT nPeriod, UINT nRes,
                 DWORD dwUser, int (*pfnCallback)(DWORD));
    static void CALLBACK TimeProc (UINT uID, UINT uMsg,
                                   DWORD dwUser, DWORD dw1, DWORD dw2);
    void TimerStop (void);

    int (*m_pfnCallback)(DWORD);
    DWORD m_dwUser;
    UINT  m_nPeriod;
    UINT  m_nRes;
    UINT  m_nIDTimer;
};
m_nTimeStarted = timeGetTime ();
m_ptimer = new Timer ();
if (m_ptimer)
{
    m_ptimer->Create (m_nBufService, m_nBufService,
                      DWORD(this), TimerCallback);
}
This TimerCallback function calls ServiceBuffer at the specified interval. DirectSound maintains a play cursor and a write cursor about 15 milliseconds apart, and it does not allow you to write data between them. It is therefore better to maintain your own variable tracking how much of the buffer has been played and how much remains to be played.
BOOL AudioStream::ServiceBuffer ()
{
    BOOL fRtn = TRUE;
    // Enter only if no other thread is already servicing the buffer
    if (InterlockedExchange (&lInService, TRUE) == FALSE)
    {
        . . . . . . . . .
        DWORD dwFreeSpace = GetMaxWriteSize ();
        dwDataRemaining = m_pwavefile->GetNumBytesRemaining ();
        // ... write wave data into the free part of the buffer ...
        . . . . . . . . .
        else
            fRtn = FALSE;
    }
    else
    {
        fRtn = FALSE;
    }
    return (fRtn);
}
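The bookkeeping behind GetMaxWriteSize is plain circular-buffer arithmetic: the writable space is the gap between your own write position and the play cursor, wrapping at the end of the buffer. The sketch below uses my own names, not the project's exact code, and for simplicity treats equal positions as a full buffer (0 free bytes); real code disambiguates that case with a separate byte counter.

```cpp
// Hypothetical sketch of the arithmetic behind GetMaxWriteSize():
// how many bytes may be written between our own write position and
// the current play cursor of a circular buffer of bufSize bytes.
unsigned MaxWriteSize(unsigned playCursor, unsigned writePos, unsigned bufSize)
{
    if (writePos <= playCursor)
        return playCursor - writePos;        // free gap is contiguous
    return bufSize - writePos + playCursor;  // free gap wraps past the end
}
```

Servicing the buffer "quite a number of times per full buffer length" simply keeps this free gap small, so the play cursor never catches up with stale data.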
Problems I faced
The main problem I faced while making this project was reading the wave files. There was always an error while copying the contents of the PCMWAVEFORMAT structure (16 bytes) into the WAVEFORMATEX structure (18 bytes). The worst part was that the same memcpy worked quite well when I compiled that part of the program in another file, but not here. So I had to allocate 18 bytes for the WAVEFORMATEX variable and then fill in the structure field by field.
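The field-by-field copy that worked for me looks roughly like the sketch below. The struct layouts are reproduced here (with trailing underscores) only so the example is self-contained off Windows; in the real project they come from mmsystem.h. The key points are that WAVEFORMATEX carries a 2-byte cbSize field that PCMWAVEFORMAT lacks, and that cbSize must be set to 0 for plain PCM rather than left as whatever memcpy happened not to touch.

```cpp
#include <cstdint>

// Illustration-only copies of the mmsystem.h layouts:
// PCMWAVEFORMAT is 16 bytes; WAVEFORMATEX adds the 2-byte cbSize field.
#pragma pack(push, 1)
struct PCMWAVEFORMAT_ {
    uint16_t wFormatTag;
    uint16_t nChannels;
    uint32_t nSamplesPerSec;
    uint32_t nAvgBytesPerSec;
    uint16_t nBlockAlign;
    uint16_t wBitsPerSample;
};
struct WAVEFORMATEX_ {
    uint16_t wFormatTag;
    uint16_t nChannels;
    uint32_t nSamplesPerSec;
    uint32_t nAvgBytesPerSec;
    uint16_t nBlockAlign;
    uint16_t wBitsPerSample;
    uint16_t cbSize;           // extra format bytes after the struct
};
#pragma pack(pop)

// Copy each field explicitly instead of memcpy'ing 16 bytes over an
// 18-byte struct, and set cbSize to a known value.
void CopyFormat(const PCMWAVEFORMAT_& src, WAVEFORMATEX_& dst)
{
    dst.wFormatTag      = src.wFormatTag;
    dst.nChannels       = src.nChannels;
    dst.nSamplesPerSec  = src.nSamplesPerSec;
    dst.nAvgBytesPerSec = src.nAvgBytesPerSec;
    dst.nBlockAlign     = src.nBlockAlign;
    dst.wBitsPerSample  = src.wBitsPerSample;
    dst.cbSize          = 0;   // no extra format bytes for plain PCM
}
```

This avoids any dependence on how the compiler pads the two structures, which is the usual reason a raw memcpy between them behaves differently from one translation unit to another.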
One more strange effect I noticed was when I put the dsound.h header before mmsystem.h. So if you get syntax errors in a header file, try including mmsystem.h before dsound.h. The original idea came from a tutorial about streaming buffers on the MSDN web site. There I could not use the TIMERCALLBACK, so I used old C-style passing of a function pointer to another function instead.
History