|
There is a function in the fourier source code:
double Index_to_frequency(unsigned int p_nBaseFreq, unsigned int p_nSamples, unsigned int p_nIndex);
I have not used this function, so I am not sure if the following is correct, but I believe you could use it as follows. Say the audio was recorded at 44100 Hz and you are working with 2048 samples. You could do something like:
#define FFT_LEN 2048
#define BASE_FREQUENCY 44100
for (int i = 0; i < FFT_LEN/2; i++)
{
    if (Index_to_frequency(BASE_FREQUENCY, FFT_LEN, i) < 100.0)
    {
        my100HzArray[i].real = fftReal[i];
        my100HzArray[i].imag = fftImag[i];
    }
}
inline double GetFrequencyIntensity(double re, double im)
{
    return sqrt((re*re)+(im*im));
}
#define mag_sqrd(re,im) ((re)*(re)+(im)*(im))
#define Decibels(re,im) (((re) == 0 && (im) == 0) ? (0) : 10.0 * log10(double(mag_sqrd(re,im))))
#define Amplitude(re,im,len) (GetFrequencyIntensity(re,im)/(len))
#define AmplitudeScaled(re,im,len,scale) ((int)Amplitude(re,im,len)%(scale))
Nothing is impossible, It's merely a matter of finding an answer to the question of HOW? ... And answering that question is usually the most difficult part of the job!!!
|
|
|
|
|
Hello again
You speak of discarding frequency bands. How can you do that in your code?
So my question is really: how can I rewrite the code so that only
the frequencies from 0 to 100 Hz are shown in the graph box? If I
connect a 5 Hz sine to my wave-in, I see only a very small
bar in the box; I would like it to be bigger so I can see it more clearly.
And how can I see an enlarged version of a signal that is only 1 mm on the screen?
I need to see it bigger.
Thanks for your help.
Greetings
|
|
|
|
|
pietn321 wrote: you speak of discarding frequency bands
You could do something like:
int nSamplingHertz = 44100;
const int FFT_LEN = 2048;
int nBandwidth = nSamplingHertz/FFT_LEN; // width of one FFT bin in Hz, about 21 Hz here
for (int nBand = 0; nBand * nBandwidth < 100; nBand++) // keep only the bins below 100 Hz
{
    myArray[nBand].real = re[nBand];
    myArray[nBand].imaginary = im[nBand];
}
pietn321 wrote: And how can I see an enlarged version of a signal that is only 1 mm on the screen? I need to see it bigger.
You would just change the drawing code to make the signal display wider. That would of course mean passing the drawing code fewer frequency bands, which you can accomplish by removing the unnecessary frequency bands as shown above.
Nothing is impossible, It's merely a matter of finding an answer to the question of HOW? ... And answering that question is usually the most difficult part of the job!!!
-- modified at 10:03 Thursday 13th April, 2006
|
|
|
|
|
I really like this application, but for some reason I see the bars moving even when no audio is playing. Does anyone know why the application still shows some 'activity' even before/after playing the audio?
thank you.
|
|
|
|
|
You're probably seeing the background noise from your PC. I'm guessing that because the input data is normalized before the FFT, even -85 dB crackling will look huge if there is no other audio at the input.
In other words, from what I've seen it's normal (I checked 4 different machines, with different models and sound cards).
=^ D}
|
|
|
|
|
Hi,
I have a project which I am unable to do myself. It is not a very long project, but because of the time constraint I would like it to be completed by the 7th of November at the latest.
The specifications of the project are as follows:
I am developing a hearing aid, and the problem now is its real-time implementation.
1. I have to capture speech coming in through the "mic in" port of the computer.
2. The captured speech passes through a module (consisting of filters and Fourier transforms), which I have written below.
3. The output of my module has to be played back through the line-out port of the computer.
The code is:
#include <stdio.h>
#include <math.h>
#include <string.h>
double wdata[13];     // required for the FFT
double datafft[8192]; // length required is twice the inverse FFT
double x[295000];     // input data vector, maximum length
int expo = 12;        // required for the FFT
// Forward declarations for the FFT routines implemented elsewhere;
// the fft2c_ signature is inferred from the call below.
void f_init();
int MAIN_();
void fft2c_(double *data, int *expo, double *wdata);
int main ()
{
double pi,ex,temp,theta1,coefa,coefb,G;
double X2,W1,bf,phikb,phia,phib;
double phi[1260],XR[4096],XI[4096],X[4096],XM[4096];
double w[512]; // window values (note: 0.5 - 0.5*cos(...) below is a Hann window, not a Hamming)
double ret[4096]; // vector of the inverse FFT returned data
double x1[512],x2[512],f[512];
double G1;
int a,i,j,k,m,n,FL,NW,L,l,q1,q2,kb,ks,ke;
double sr = 25000.0; // sampling rate
int CF = 15; // compression factor,CF:1
int hfru = 10500; // high frequency range upper limit
int hfrl = 4400; // high frequency range lower limit
int N = 2048; // number of frequency samples,top half
int ND = 512; // length of input data
int M = 4096; // length of inverse FFT
FILE *fp;
fp = fopen("ELPHRG01.WAV","r+b");
i = 0;
while ((fscanf(fp, "%lf\n", &x[i++])) != EOF); // %lf because x is double
fclose(fp);
FL = i;
pi = 4.0 * atan(1.0);
ex = exp(1.0);
NW = 512;
W1 = 0.5;
l = 0;
L = 1;
for (n=0; n < NW; n++) {
theta1 = (2 * pi * n) / (NW - 1);
w[n] = W1 - (W1 * cos(theta1));
}
for (n = 0; n < NW; n++) {
x2[n] = 0.0;
}
bf = sr / (2 * CF); // break frequency
phikb = (pi * bf) / (sr / 2); // angle of break frequency
kb = N / CF; // number of samples from DC to break
phia = (hfrl / (sr /2 )) * pi; // high frequency region starting angle
phib = (hfru / (sr /2 )) *pi; // high frequency region ending angle
coefa = 0.00799;
coefb = -0.3722072;
//1. Obtain a windowed Segment of the input data
for (a = 1; a < 2 * FL / NW + 1; a++) {
for (m = 0, n = 1; m < NW; m++, n++) {
f[m] = w[m] * x[n];
}
}
//2. Obtain nonlinearly compressed and transposed frequency samples
for (k = 0; k < 150; k++) {
temp = coefa * k;
phi[k] = pow(ex, temp) + coefb;
G = pow(ex, temp);
G1 = G * sqrt(G) + 6;
XR[k] = 0.0;
XI[k] = 0.0;
for (n = 0; n < ND; n++) {
XR[k] = XR[k] + f[n] * cos(phi[k] * n);
XI[k] = XI[k] + f[n] * sin(phi[k]*n);
}
XR[k] = G1 * XR[k];
XI[k] = G1 * XI[k];
}
//3. Pad zeros in the center of the Spectrum
for (k = 150; k < M - 150; k++) {
XR[k] = 0.0;
XI[k] = 0.0;
}
//4. Form the Complex conjugate part of the spectrum
for (k = M - 150 + 1; k < M + 1; k++) {
XR[k] = XR[M-k];
XI[k] = -XI[M-k];
}
//5. Convert from Polar to rectangular for the inverse FT
for (i = 0; i < M; i++) {
j = 2 * i;
k = j + 1;
datafft[j] = XR[i];
datafft[k] = XI[i];
}
//6. Call the FFT
f_init(); //I need to implement this FFT function in C++ rather than Foxpro
MAIN_();
//7. Obtain the inverse FFT from the FFT call
ret[0] = datafft[0] / M;
for (i = 2244; i < 2756; i++) {
j = 2 * i;
k = 2 *M - j;
ret[i+1] = datafft[k] / M;
}
for (k = 0; k < 512; k++) {
ret[k] = ret[k+2244];
}
// Multiply by the window and overlap-add
q1 = a / 2;
q2 = q1 * 2;
if (q2 == a) {
for (i = 0; i < 256; i++) {
X2 = ret[i] * w[i] + x1[i + 256];
printf("%f\n",X2);
}
for (i = 256; i < 512; i++) {
x2[i] = ret[i] * w[i];
}
} else {
for (i = 0; i < 256; i++) {
X2 = ret[i] * w[i] + x2[i + 256];
printf("%f\n",X2);
}
for (i = 256; i < 512; i++) {
x1[i] = ret[i] * w[i];
}
}
l = (NW * L / 2);
L = L+1;
return 0;
}
int MAIN_()
{ // This again comes from the FoxPro version, but I am not sure
fft2c_(datafft,&expo,wdata);
return 0;
}
So, in all, I just need code developed in C++ to capture speech and play it back, plus a slight modification of my module to make it compatible.
If somebody can please help me out, I am even willing to pay for the project.
Thanks,
Student
|
|
|
|
|
Can anyone show how to modify the code to perform the FFT on the wave-out audio signal?
|
|
|
|
|
I have been visiting nearly every day to see if anyone could answer the WaveOut question, because I also want to know! Is this CodeProject thread dead?
|
|
|
|
|
The WaveOut process is the same as the WaveIn one; only the source of the audio data is different. This article just shows the FFT of audio signals.
Good luck.
|
|
|
|
|
It's an easy job: just call Process() before waveOutPrepareHeader.
|
|
|
|
|
How do you use the reverse (inverse) FFT?
When you do this:
fft_double(FFT_LEN,0,fin,NULL,fout,foutimg);
you get the real and imaginary parts of the signal in fout[] and foutimg[].
Should the reversal look like this?
fft_double(FFT_LEN/2,1,fout,foutimg,fin,NULL);
Will the reverse FFT return an imaginary part?
Thanks!
Peter
|
|
|
|
|
piotreq wrote: When you do this:
fft_double(FFT_LEN,0,finBefore,NULL,fout,foutimg);
you get the real and imaginary parts of the signal in fout[] and foutimg[].
Should the reversal look like this?
fft_double(FFT_LEN/2,1,fout,foutimg,finAfter,NULL);
Will the reverse FFT return an imaginary part?
It's been a long time since I looked at this code, so forgive me if my memory is a bit rusty. Your code:
fft_double(FFT_LEN,0,finBefore,NULL,fout,foutimg);
will take FFT_LEN samples from finBefore and return exactly that many samples of real data in fout, and the same number of imaginary samples in foutimg. Therefore, if you want to do the inverse FFT (sometimes shortened to IFFT), you need to pass the FFT the real and imaginary parts of the signal, along with the number of samples in each. You will then get the original signal back in the real-part result. The imaginary portion of the output will simply be zero, so you can pass NULL for it. So instead of:
fft_double(FFT_LEN/2,1,fout,foutimg,finAfter,NULL);
It should be:
fft_double(FFT_LEN,1,fout,foutimg,finAfter,NULL);
And if the code is working correctly, finAfter should equal finBefore...
Hope that helps... sorry for the late reply...
Nothing is impossible, It's merely a matter of finding an answer to the question of HOW? ... And answering that question is usually the most difficult part of the job!!!
|
|
|
|
|
I can get sound from the sound card of my PC by using the waveInXXX APIs, but I want to get the sample values of this sound and show them in my listbox.
|
|
|
|
|
Hi,
I've written a renderer filter which produces a spectral analysis. The output is fine when the graph is:
[source filter]->[wave parser]->[my filter]
but in this case there is no sound, which is of little use.
When the graph looks like:
[source]->[wave parser]->[Tee filter]->[my filter]
....................................->[default directsound dev]
this provides both output and the analyser, but the analyser is jerky. Is there any way to make it run a little smoother?
The oscilloscope filter from the SDK also suffers from the same problem.
|
|
|
|
|
Hello. I've got a problem writing up an algorithm for the FFT. I've got a 26-second audio signal, sampled at 22050 Hz, 8 bits, and the frequency increases as time increases. What I am trying to do is extract the frequency at each second, e.g. the frequency at 1 sec, 2 sec, 3 sec, and so forth. Here is my code, but it only displays one spectrum for the whole file, not the frequency at each second.
fs = 22050; % Sampling frequency.
bits = 8; % number of bits per sample, 256 levels.
[x,fs,bits]=wavread('SS1stE01.wav');
%FFT
N = 32768;
yf = fft(x,N);
resolution = fs/N;
f = resolution*(1:N/2);
y=yf(1:N/2);
plot(f,abs(y));
title ('Frequency content of SS1stE01.wav');
xlabel('Frequency(Hz)');
ylabel('Amplitude');
thank you.
|
|
|
|
|
I have been waiting for this, but it doesn't do anything, and the same situation applies here. I have downloaded the player, but where do I "set your recording source to 'stereo mix' or 'mono mix' in recording control"?
Agha Khan
|
|
|
|
|
Sorry for the late reply, but I have been extremely busy with other projects, often working 20 hours a day. To answer your question: yes, you have to go to the recording properties of your sound card and specifically select the mono/stereo mix or wave mix option in the recording section. Also, the wave playback must not be set to mute, or that will override the selection in the recording properties.
Nothing is impossible, It's merely a matter of finding an answer to the question of HOW? ... And answering that question is usually the most difficult part of the job!!!
|
|
|
|
|
Thanks, friend. This is an excellent stepping stone for me. I was trying to learn how to use the FFT to make spectrum analysers.
Amit
|
|
|
|
|
Does anyone know how to solve this problem? I have just downloaded your code and ran it on my platform. My Visual C++ is VC++.NET and it produces the error below. I think it is a very minor error, but I can't solve it easily. Help me. Thanks in advance.
c:\waveInFFT_demo\SpectrumGraph.cpp(169): error C2668: 'log10' : ambiguous call to overloaded function
Have a good day.
|
|
|
|
|
The log10 function expects a float or double value, but since both x and rct.Width() are integers the compiler cannot decide which overload to pick.
Replace
y=log10(x/rct.Width());
with
float fValue = (float)x / rct.Width();
y = log10(fValue);
Note the cast on x: without it, x/rct.Width() is still an integer division and truncates to zero whenever x is smaller than the width.
It will work fine.
Any more question?
Agha Khan
|
|
|
|
|
Has anyone written any filters for DirectShow in Borland C++ Builder 6? Specifically a spectrum analyser, or anything else that uses audio data from a file (not waveIn). I am writing a spectrum analyser filter and would appreciate some pointers!
|
|
|
|
|
This is rather interesting. Would you care to explain how one would use this, for example, to detect DTMF tones? Seems like a nice idea that popped into my head.
|
|
|
|
|
This application is super!!!
|
|
|
|
|
I am having trouble writing an audio spectrum analyzer; the output doesn't come out right. I have the data: 16 bits/sample, sampling rate 44100 Hz, 2 channels.
What I am doing to get the data from the audio buffer into my array (to be passed to the FFT algorithm) is:
DWORD i = 0;
for (int j = 0; j < mBufferSize; i += 2, j++)
{
    short buff = *(short *)&AudioBuffer[i];  // read a full 16-bit sample
    Buffer1[j] = buff;
}
Then I pass the array (type: double) Buffer1 to the FFT algorithm written by Don Cross.
After that I calculate the magnitude of the frequencies that I need by using formula
magnitude = sqrt(real[i]*real[i] + img[i]*img[i]);
The output just isn't coming out right, unlike Winamp's and Windows Media Player's spectrum analyzers. Does anyone know how to do it, or have a link to a URL etc. that describes it?
Thanks a LOT in advance
Atif
|
|
|
|
|
Your code looks fine, but remember that the magnitude of the frequency may need to be scaled before you can use it for rendering, and the scaling factor you use changes the look of the spectrum. In my code I use magnitude /= 256 to scale the output to something I can render.
As far as Winamp and Windows Media Player go, the spectrum analyzer in Windows Media Player does not display the same results as the default one in Winamp. Secondly, I tested two other spectrum analyzers in Winamp and they display different results from each other, both of which are totally different from the default spectrum analyzer. All four spectrum analyzers display different results than my sample application does.
One possible reason for this is that I use a 512-point FFT while Winamp uses a 576-point FFT. I do not know what size Windows Media Player uses, but it is probably something different again. This creates slightly altered views of the frequency spectrum. The second reason is scaling: you could scale using the mod, div, or xor operations, each of which creates a different scaled view of the same spectrum. Which of these Winamp uses, I am not sure. Finally, the drawing code itself is different. For example, my application uses the following to display the spectrum:
Rectangle(dc,0,0,rct.Width(),rct.Height());
for (x = 1; x < rct.Width(); x++)
{
    y = log10((double)x/rct.Width());
    MoveToEx(dc,x,(y*rct.Height()+rct.Height()),NULL);
    LineTo(dc,x,(y*rct.Height() + rct.Height() - ((m_dArray[int((double(m_nLength)/double(rct.Width()))*x)]/(m_nMaxValue-m_nMinValue))*rct.Height())));
}
Though I used to use:
Rectangle(dc,0,0,rct.Width(),rct.Height());
for (x = 1; x < rct.Width(); x++)
{
    y = log10((double)x/rct.Width());
    MoveToEx(dc,x,(y*rct.Height()+rct.Height()),NULL);
    LineTo(dc,x,(y*rct.Height() + rct.Height() - m_dArray[int((double(m_nLength)/double(rct.Width()))*x)]));
}
while winamp's sample visualization uses:
Rectangle(memDC,0,0,576,256);
for (y = 0; y < this_mod->nCh; y++)
{
    for (x = 0; x < 576; x++)
    {
        MoveToEx(memDC,x,(y*256+256)>>(this_mod->nCh-1),NULL);
        LineTo(memDC,x,(y*256 + 256 - this_mod->spectrumData[y][x])>>(this_mod->nCh-1));
    }
}
And the code for my EQ-style spectrum analyzer is:
dRangePerStep /= GetNumberOfSteps();
for (WORD w = 0; w < nBars; w++)
{
    stepRect.top = rct.bottom;
    int tot=0, nLargest=0;
    for (int i=pos; i<pos+nDiv; i++)
    {
        tot = tot + m_dArray[i];
        if (m_dArray[i] > nLargest)
            nLargest = m_dArray[i];
    }
    tot /= (m_nMaxValue-m_nMinValue);
    tot *= 2;
    TRACE("Total=%d Div=%d, Bars=%d dRange=%f Max=%d Min=%d Largest=%d\n",tot,nDiv,nBars,dRangePerStep,m_nMaxValue,m_nMinValue,nLargest);
    {
        stepRect.left = (rct.left+((xRight*(w+1))-xRight));
        stepRect.right = (xRight*(w+1));
        stepRect.bottom = stepRect.top;
        stepRect.top = rct.bottom - int(tot);
        CBrush br1;
        if (tot > m_nHighLevel)
            br1.CreateSolidBrush(GetHighColor());
        else if (tot > m_nMediumLevel)
            br1.CreateSolidBrush(GetMediumColor());
        else
            br1.CreateSolidBrush(GetLowColor());
        dc.FillRect(&stepRect,&br1);
        if (m_bGrid)
            dc.FrameRect(&stepRect,&br);

        stepRect.bottom = stepRect.top;
        stepRect.top = 0;
    }
    pos += nDiv;
}
I do not know what rendering algorithms the other three spectrum analyzers use (two from Winamp, and one from Windows Media Player). Anyway, I hope this helps you understand why you are getting different results from the same audio input, and why even Winamp does not agree with its own spectrum analyzers.
Nothing is impossible, It's merely a matter of finding an answer to the question of HOW? ... And answering that question is usually the most difficult part of the job!!!
|
|
|
|
|