|
Well, you could sort and then compare the first and last items, but that would be slower than doing an iterative comparison.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
|
c#_keithy wrote: Any ideas would be greatly appreciated
Can't you just use PathCommonPrefix() ?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Hi, I am a student. I'm doing research on the development of an adaptive digital notch filter for the removal of 50Hz power line noise in ECG. I have a sample of 50Hz ECG noise, and now I need to load the file into Matlab. How should I do it? I'm very new to Matlab. Please help me.
|
|
|
|
|
Probably the best place to put this question is the MathWorks forum.
However, to load that file you have to write some code; look at functions like fopen, fclose and fread.
Is it a text file or binary? Do you know how it is written? Sometimes the functions are already implemented: if it is a .wav, the functions are ready-made in Matlab.
Russell
|
|
|
|
|
The file is in decimal (negative and positive values). The noise signal was created in Excel. Should I use fopen, fclose and fread for this type of file?
|
|
|
|
|
ashwiny wrote: should i use the fopen,fclose and fread for this type of file?
Of course you can.
With those functions you can open any file type; the only problem is knowing the file format, but in this case I suspect it is an ASCII file, so there shouldn't be any trouble.
You will need functions like fscanf to read numbers from the file.
All of these functions can also be used in a simple C++ program, but in Matlab you can find ready-made filters and display tools.
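A minimal C++ sketch of that fopen/fscanf/fclose pattern (the file name and the whitespace-separated ASCII layout are assumptions); the same function names exist in Matlab almost unchanged:

```cpp
#include <cstdio>
#include <vector>

// Read whitespace-separated decimal samples from an ASCII file.
// Returns an empty vector if the file cannot be opened.
std::vector<double> readSamples(const char* path)
{
    std::vector<double> samples;
    FILE* f = std::fopen(path, "r");
    if (!f)
        return samples;  // caller should check for an empty result
    double value;
    while (std::fscanf(f, "%lf", &value) == 1)  // one number per call
        samples.push_back(value);
    std::fclose(f);
    return samples;
}
```

If the Excel data is saved as plain text (space, tab, or newline separated), this loop picks up every number regardless of line layout.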
Russell
|
|
|
|
|
To get your data from Excel to Matlab you can simply save it as a .csv file and read it into Matlab using the textread [^] function. There are other ways; check the documentation.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I was wondering what folks thought about using fixed point on a modern desktop PC. I've done a few tests, and floating point seems faster. However, the algorithms I use for my DSP processing involve feedback, which leads to denormals. I can avoid them by adding a tiny constant value in the feedback loop, which is OK, but with fixed point this is not a problem at all. So I'm considering rewriting at least some of my algorithms to use fixed point. I know this is something that you really have to play around with to find an optimal solution, but I was curious as to what others thought.
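A minimal C++ sketch of the anti-denormal trick mentioned above, assuming a simple one-pole feedback filter (the coefficient and the constant are illustrative, not the poster's actual values):

```cpp
#include <cmath>

// Without the added constant, a decaying feedback tail eventually drives
// `state` into the subnormal (denormal) range, which is very slow on x86.
const float kAntiDenormal = 1e-20f;  // far below audibility, still a normal float

float process(float input, float& state, float coeff)
{
    state = input + coeff * state + kAntiDenormal;  // tiny offset keeps state normal
    return state;
}
```

With silence at the input, the state settles near kAntiDenormal / (1 - coeff) instead of decaying through the subnormal range.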
|
|
|
|
|
Leslie Sanford wrote: floating point seems faster.
Wow! This is the power of DSPs.
If it is really true, then use them! It only costs more memory; I hope that won't be a problem...
Russell
|
|
|
|
|
It is true - particularly with the SSE type instruction sets. For example, a 3.6GHz P4 (single core) can exceed 8GFLOPS on an FFT http://www.fftw.org/speed/Pentium4-3.60GHz-icc/[^]
scroll down to the single-precision complex 1D transforms
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
|
IMO there is no general answer; it depends on both the job at hand and the specifics of the CPU.
If you need to add code to prevent integers from overflowing, you should consider floats, since unpredictable conditional branches are very bad for performance.
If your inputs and outputs are integers, using floats will cost conversion time; most modern CPUs have different register sets for ints and floats, and can pass data between them only through "memory" (i.e. the data cache), adding to latency, and probably taking away some performance.
If you need transcendental functions, floats are the obvious choice.
If you have streaming data, the CPU might be optimized more for one or the other. The Intel processor line has a long tradition of favoring integers over floats when spending the transistor budget.
If your CPU is general-purpose (as opposed to a DSP) and your algorithm needs a lot of address calculations and indexing, you should consider float data, so the integer ALUs are available for sequencing and addressing, and the float ALU(s) for data operations.
I once found the optimum implementation of an image processing algorithm by doing half of the pixels in integers and the other half in floats at the same time, so all integer and float units of the CPU were kept busy as much as possible.
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
Luc Pattyn wrote: If you need to add code to prevent integers from overflowing, you should consider floats, since unpredictable conditional branches are very bad for performance.
This is a bit off-topic, but I've been curious about how the compiler treats predictable branches. For example, say I have a function that is passed a boolean value. Within this function is a loop in which the boolean is tested:
void Function(bool condition)
{
    for(int i = 0; i < 1000; i++)
    {
        if(condition)
        {
        }
        else
        {
        }
    }
}
The obvious cure for getting the branch out of the loop would be to nest the loop inside the if and else blocks; you'd wind up with two versions of the loop. I've done this in some situations, but in others it can lead to exceedingly verbose code.
So my question is: in the above situation, we have a condition that will not change for the duration of the function call, i.e. a predictable branch. Can the compiler help us out here to keep things efficient?
Luc Pattyn wrote: If your inputs and outputs are integers, using floats will cost conversion time; most modern CPUs have different register sets for ints and floats, and can pass data between them only through "memory" (i.e. the data cache), adding to latency, and probably taking away some performance.
If you need transcendental functions, floats are the obvious choice.
If you have streaming data, the CPU might be optimized more for one or the other. The Intel processor line has a long tradition of favoring integers over floats when spending the transistor budget.
If your CPU is general-purpose (as opposed to a DSP) and your algorithm needs a lot of address calculations and indexing, you should consider float data, so the integer ALUs are available for sequencing and addressing, and the float ALU(s) for data operations.
Based on these criteria, I think I'll stick with floats. I seem to have the denormal problem under control, which was my main motivation for considering the switch. I'm working on a VST synth plugin; the plugin's input and output are in floats, so it probably doesn't make sense to change to integers.
Thanks again for your insightful answers.
|
|
|
|
|
Hi Leslie,
you're welcome.
on the for/if issue:
when the if is the only statement inside the for loop (and the condition is invariant, as you stated), I will turn them inside out, as it makes more sense to me to have the slow-changing (or constant) thing first:
if (condition) {
    for (loop_control) { stuff }
} else {
    for (loop_control) { other_stuff }
}
I can't recollect having seen a compiler do that transformation for me; of course, if the condition is a build-time constant, the idle half of the code is thrown out by most compilers.
Anyway, I am inclined not to rely on a compiler for the big decisions; I'll be happy if it does a good job at peephole optimizations.
If the if is not the only statement in the for loop, my decision will depend on how much code there is to duplicate; obviously I hate to duplicate code for lots of reasons, so I will consider it only if maximum performance is relevant.
Greetings
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
If you are not familiar with it, you should look at the Intel compiler. They also have some high-performance primitives libraries available that may be of use - see the MKL[^] and IPP[^] - and some good books on optimization:
The Software Optimization Cookbook[^]
The Software Vectorization Handbook[^]
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
Leslie Sanford wrote: floating point seems faster
It will be - it's got a native, hardware implementation, whereas fixed point hasn't. In the best case (performance-wise) implementation of fixed point, additive operations should be transformed into single integer operations (unless you're checking for overflow, which'll add a whole lot more overhead, including branching). Multiplicative operations will require re-scaling of the output, which adds extra overhead as well.
I believe that modern CPUs do float operations in times comparable to the integer equivalents, so there's likely little or no performance hit from using floats.
I work with embedded control systems using relatively slow CPUs (ranging from 25MHz 680x0 to 150MHz PowerPC). We transitioned to using floating-point maths from fixed-point when we started using PowerPCs, because it performed as well (or better) than fixed point operations and also had a better range/precision trade-off than fixed-point.
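To make the re-scaling overhead concrete, here is a minimal Q16.16 fixed-point sketch (the format and all names are illustrative assumptions, not anyone's actual code). Addition is a plain integer add, but every multiply needs a wider intermediate and a shift:

```cpp
#include <cstdint>

// Hypothetical Q16.16 format: 16 integer bits, 16 fraction bits.
typedef int32_t q16_16;
const int FRAC_BITS = 16;

q16_16 fromDouble(double x) { return (q16_16)(x * (1 << FRAC_BITS)); }
double toDouble(q16_16 x)   { return x / (double)(1 << FRAC_BITS); }

// Addition: a single integer add, no rescale (overflow checking omitted).
q16_16 qadd(q16_16 a, q16_16 b) { return a + b; }

// Multiplication: 64-bit intermediate, then shift right to rescale -
// this is the extra overhead a hardware FPU does not have.
q16_16 qmul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> FRAC_BITS);
}
```

The shift and the widening multiply are cheap individually, but in a tight DSP loop they add up compared to a single hardware float multiply.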
|
|
|
|
|
Can anyone help me work out a solution to this?
I have 15 different numbers; let's call these num1, num2, num3, num4, etc.
I want to be able to select the 5 largest numbers out of the 15.
There must be some methodical way to search through and compare the values to get the 5 largest.
Any ideas?
Cheers for your help.
|
|
|
|
|
The easy way (not the most efficient one!): sort the entire input collection, then throw away all but the top 5.
The less expensive way: create an output collection, initially empty. Loop over all items in your input collection and compare them to the output collection: if it holds fewer than 5 items, add the new one; if not, compare - if the new item is larger than the smallest one, remove the smallest and add the new one, otherwise skip it. For best performance, keep the output collection sorted all the time.
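The second scheme can be sketched as follows (C++ here for illustration; the names and the unsorted output list are assumptions, as the post is language-agnostic):

```cpp
#include <vector>
#include <algorithm>

// Keep a small output list of the current top-k items; once it is full,
// replace its minimum whenever a larger input item shows up.
std::vector<int> topK(const std::vector<int>& input, size_t k)
{
    std::vector<int> out;
    for (int v : input) {
        if (out.size() < k) {
            out.push_back(v);  // still filling the output collection
        } else {
            auto smallest = std::min_element(out.begin(), out.end());
            if (v > *smallest)
                *smallest = v;  // evict the current minimum
        }
    }
    return out;  // top k, in no particular order
}
```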
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
Are you sure the overhead of continually resorting the output collection is less than just sorting the main list once?
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
Actually you don't need to keep the output collection sorted, I was wrong there; the only thing that matters is that the smallest item is the one that gets replaced, so a single compare pass suffices, which means at most 10*5 compares.
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
If you need to output many items, you should use a heap to efficiently remove the smallest item. That way you get O(n lg m). (n = number of input items, m = number of output items)
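One way to sketch this is with std::priority_queue used as a min-heap of size m, so the root is always the smallest of the current top-m and each input item costs at most O(lg m) (C++ for illustration; the suggestion itself is language-agnostic):

```cpp
#include <queue>
#include <vector>
#include <functional>

// Top-m selection with a bounded min-heap: O(n lg m) overall.
std::vector<int> topM(const std::vector<int>& input, size_t m)
{
    // std::greater makes the priority_queue a min-heap (smallest on top).
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;
    for (int v : input) {
        if (heap.size() < m)
            heap.push(v);
        else if (v > heap.top()) {   // beats the smallest of the top-m
            heap.pop();
            heap.push(v);
        }
    }
    std::vector<int> out;
    while (!heap.empty()) {          // drain in ascending order
        out.push_back(heap.top());
        heap.pop();
    }
    return out;
}
```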
|
|
|
|
|
I agree that for sufficient output size there is a tree-based optimum somewhere between the SortedList and the ArbitraryList; unfortunately, .NET does not provide such a MinHeap.
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
As others replied, you can sort the array and then extract the maximums - two steps. But the fastest way probably depends on the length of your data array and on the number of extractions: in your case these numbers are 15 and 5, and you can sort such a short array quickly, so sorting is probably a good idea. But in a case like 100 and 3, you can probably find a faster way that picks out the greatest 3 numbers directly, without sorting the array.
So, I just want to show you a different way to solve this:
1) find the min value (minV)
2) find the max value and its index (maxV1, idxV1), then save it somewhere
3) replace maxV1 with minV in the original array
4) find the max value again, so: maxV2 and idxV2, then store maxV2 somewhere and replace it with minV in the array
5) do this again and again... 5 times
This way can probably be faster on a longer array (like 100) because you read through the array fewer times than you would to sort it.
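The steps above can be sketched as follows in C++ (the name minV comes from the post; everything else is illustrative):

```cpp
#include <vector>
#include <algorithm>

// Repeatedly find the maximum and overwrite it with the minimum so it
// will not be found again. Takes the array by value, so the caller's
// copy is untouched.
std::vector<int> topByExtraction(std::vector<int> data, size_t count)
{
    int minV = *std::min_element(data.begin(), data.end());  // step 1
    std::vector<int> result;
    for (size_t i = 0; i < count; ++i) {                     // steps 2-5
        auto maxIt = std::max_element(data.begin(), data.end());
        result.push_back(*maxIt);  // save the current maximum
        *maxIt = minV;             // knock it out of the running
    }
    return result;  // largest first
}
```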
Russell
|
|
|
|
|
I need some help programming my own magic square in Matlab without using the magic function. My input should look something like this: [a b]=my_magic(5,10,4,3), and the output a should be a 5x5 matrix with ten starting at row 4, column 3, while b should return the sum.
Thank You
Rami
|
|
|
|
|