|
Sorry, probably I wasn't clear:
In the first step use only parm1, parm2, ... parmN,
where N is the number of parameters you were talking about in the first post.
In step 2 use:
parm1, parm2, ... parmN, and parmN+1 (i.e. the time).
Put everything into:
F_Variation = SumOn(i,j)[parm_i * parm_j * calib(i,j)]
and now you only have to find the right values of the calib matrix (it will be (N+1)x(N+1)).
One possible way to solve it is to minimize the mean squared error (MSE).
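Since F_Variation is linear in the entries of calib, minimizing the MSE is an ordinary least-squares problem. A sketch of how that fit could look (NumPy, with made-up data; N, the sample count, and all variable names are illustrative assumptions, not from the original posts):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 3            # number of parameters; time is appended as parm N+1
samples = 200    # hypothetical number of measurements

# Fake measurements: each row of P holds parm1..parmN plus time,
# and F holds the observed F_Variation for that row.
P = rng.normal(size=(samples, N + 1))
true_calib = rng.normal(size=(N + 1, N + 1))
F = np.einsum('si,sj,ij->s', P, P, true_calib)

# F_Variation = sum_ij parm_i * parm_j * calib(i,j) is linear in calib,
# so flatten each outer product parm_i*parm_j into one row and solve
# least squares for the (N+1)^2 unknown calib entries.
A = np.einsum('si,sj->sij', P, P).reshape(samples, -1)
calib_flat, *_ = np.linalg.lstsq(A, F, rcond=None)
calib = calib_flat.reshape(N + 1, N + 1)
```

Note that since parm_i*parm_j = parm_j*parm_i, only the symmetric part of calib is identifiable; lstsq returns the minimum-norm solution.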
Russell
|
|
|
|
|
Just for info...
After almost three weeks racking my brain over that damned formula, it is no longer needed. The client and my boss simply chose to introduce a fixed priority given in the GUI, and then pick the machines in that order. If more than one machine has the same value, Energy and ReactionTime are looked at to select one of them.
It makes me want to kill them
But as I always say... there is no bad from which some good doesn't come. The programming will be easier
Greetings.
--------
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... what reason do we have to worry at all?
Help me to understand what I'm saying, and I'll explain it better to you
|
|
|
|
|
What can I say? ...
Nelek wrote: The programming will be easier
It's true
Russell
|
|
|
|
|
I need suggestions/advice on video processing for the below requirement:
A person or object will be selected manually by mouse in the first frame of a video sequence. The selected object/person should then be segmented in the rest of the frames by the algorithm/program.
Any source code, algorithms, links, or suggestions are greatly appreciated.
Thanks in advance
->electron
|
|
|
|
|
|
Hi people
I have an idea for an algorithm, but am gonna need some help in actually implementing it.
What I need to do is read file paths from right to left, comparing all of them to find the right-most unvarying element of the path.
An example of how I want this algorithm to work is below:
C:\Root Dir\Const Sub Dir1\Sub Dir1
C:\Root Dir\Const Sub Dir1\Sub Dir2
C:\Root Dir\Const Sub Dir1
Path scanned was C:\Root Dir\Const Sub Dir1
Any ideas would be greatly appreciated
Cheers
Give me strength, give me caffeine
|
|
|
|
|
This might not be the most efficient way to do it, but break each path apart on the backslashes and load all the data into a 2D array: individual paths in the rows, each folder in a separate column. Then just compare one column at a time until you find one that isn't identical the whole way down.
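That column-by-column comparison can be sketched like this (Python for brevity; the same split-into-rows, compare-columns idea carries straight over to a C# jagged array):

```python
def common_prefix_path(paths):
    """Split each path on '\\' and compare column by column
    until a column isn't identical the whole way down."""
    rows = [p.split('\\') for p in paths]
    prefix = []
    for column in zip(*rows):          # zip stops at the shortest path
        if len(set(column)) != 1:      # this column varies - stop here
            break
        prefix.append(column[0])
    return '\\'.join(prefix)

paths = [r'C:\Root Dir\Const Sub Dir1\Sub Dir1',
         r'C:\Root Dir\Const Sub Dir1\Sub Dir2',
         r'C:\Root Dir\Const Sub Dir1']
result = common_prefix_path(paths)     # 'C:\Root Dir\Const Sub Dir1'
```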
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
Thanks a lot, I didn't think of using 2D arrays. Do you know if there is a way to compare a whole column in one go in C#?
Give me strength, give me caffeine
|
|
|
|
|
Well, you could sort and compare the first and last items, but that would be slower than doing an iterative comparison.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
|
c#_keithy wrote: Any ideas would be greatly appreciated
Can't you just use PathCommonPrefix() ?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Hi, I am a student. I'm doing research on the development of an adaptive digital notch filter for the removal of 50 Hz power-line noise in ECG. I have a sample of 50 Hz ECG noise, and now I need to load the file in MATLAB. How do I do it? I'm very new to MATLAB. Please help me.
|
|
|
|
|
Probably the best place to put this question is the MathWorks forum.
However, to load that file you have to write some code; look at functions like fopen, fclose and fread.
Is it a text file or binary? Do you know how it is written? Sometimes the functions already exist: if it is a .wav, ready-made functions are available in MATLAB.
Russell
|
|
|
|
|
The file contains decimal values (negative and positive). The noise signal was created in Excel. Should I use fopen, fclose and fread for this type of file?
|
|
|
|
|
ashwiny wrote: should i use the fopen,fclose and fread for this type of file?
Of course you can.
With those functions you can open any file type; the only problem is knowing the file format, but in this case I think it is an ASCII file, so there shouldn't be any problems.
You will need functions like fscanf to read numbers from the file.
All these functions can also be used in a simple C++ program, but in MATLAB you can find ready-made filters and display tools.
Russell
|
|
|
|
|
To get your data from Excel into Matlab you can simply save it as a .csv file and read it into Matlab using the textread[^] function. There are other ways; check the documentation.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I was wondering what folks thought about using fixed point on a modern desktop PC. I've done a few tests, and floating point seems faster. However, the algorithms I use for my DSP processing involve feedback, which leads to denormals. I can avoid them by adding a tiny constant value in the feedback loop, which is OK, but with fixed point this is not a problem. So I'm considering rewriting at least some of my algorithms to use fixed point. I know this is something you really have to play around with to find an optimal solution, but I was curious what others thought.
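The tiny-constant trick mentioned above can be demonstrated on a toy feedback loop (an illustrative Python sketch with an arbitrary coefficient, not the poster's actual DSP code):

```python
import sys

TINY = 1e-20  # offset added in the feedback path, far above the subnormal range

def feedback_decay(steps, coeff=0.5, add_offset=False):
    """Run y[n] = coeff * y[n-1] from an impulse; report whether y ever
    entered the denormal (subnormal) range, where many CPUs slow down."""
    y = 1.0
    hit_denormal = False
    for _ in range(steps):
        y = coeff * y
        if add_offset:
            y += TINY              # keeps the feedback value away from denormals
        if 0.0 < abs(y) < sys.float_info.min:
            hit_denormal = True
    return y, hit_denormal
```

Without the offset, the decaying value passes through the subnormal range on its way to zero; with it, y settles near TINY / (1 - coeff) and never gets there.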
|
|
|
|
|
Leslie Sanford wrote: floating point seems faster.
Wow! That's the power of DSPs.
If it is really true, then use them! It only needs more memory; I hope that will not be a problem...
Russell
|
|
|
|
|
It is true - particularly with the SSE-type instruction sets. For example, a 3.6GHz P4 (single core) can exceed 8 GFLOPS on an FFT: http://www.fftw.org/speed/Pentium4-3.60GHz-icc/[^] (scroll down to the single-precision complex 1D transforms).
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
|
IMO there is no general answer; it depends on both the job at hand and the specifics of the CPU.
If you need to add code to prevent integers from overflowing, you should consider floats, since unpredictable conditional branches are very bad for performance.
If your inputs and outputs are integers, using floats will cost conversion time; most modern CPUs have different register sets for ints and floats, and can pass data between them only through "memory" (i.e. the data cache), adding to latency and probably taking away some performance.
If you need transcendental functions, floats are the obvious choice.
If you have streaming data, the CPU might be optimized more for one or the other. The Intel processor line has a long tradition of favoring integers over floats when spending the transistor budget.
If your CPU is general-purpose (as opposed to a DSP) and your algorithm needs a lot of address calculations and indexing, you should consider float data, so the integer ALUs are available for sequencing and addressing and the float ALU(s) for data operations.
I once found the optimum implementation of an image processing algorithm by doing half of the pixels in integers and the other half in floats at the same time, so all integer and float units of the CPU were kept busy as much as possible.
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual Studio display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
Luc Pattyn wrote: If you need to add code to prevent integers from overflowing, you should consider floats,
since unpredictable conditional branches are very bad for performance.
This is a bit off-topic, but I've been curious about how the compiler treats predictable branches. For example, say I have a function that is passed a boolean value. Within this method is a loop in which the boolean is tested:
void Function(bool condition)
{
    for(int i = 0; i < 1000; i++)
    {
        if(condition)
        {
            // one variant of the work
        }
        else
        {
            // the other variant
        }
    }
}
The obvious cure for getting the branch out of the loop would be to nest the loop inside the if and else blocks; you'd wind up with two versions of the loop. I've done this in some situations, but in others it can lead to exceedingly verbose code.
So my question is: in the above situation we have a condition that will not change for the duration of the function call, i.e. a predictable branch. Can the compiler help us out here to keep things efficient?
Luc Pattyn wrote: If your inputs and outputs are integers, using floats will cost conversion time; most modern CPUs have different register sets for ints and floats, and can pass data between them only through "memory" (i.e. the data cache), adding to latency and probably taking away some performance. If you need transcendental functions, floats are the obvious choice. If you have streaming data, the CPU might be optimized more for one or the other. The Intel processor line has a long tradition of favoring integers over floats when spending the transistor budget. If your CPU is general-purpose (as opposed to a DSP) and your algorithm needs a lot of address calculations and indexing, you should consider float data, so the integer ALUs are available for sequencing and addressing and the float ALU(s) for data operations.
Based on these criteria, I think I'll stick with floats. I seem to have the denormal problem under control, which was my main motivation for switching. I'm working on a VST synth plugin; the plugin's input and output are in floats, so it probably doesn't make sense to change to integers.
Thanks again for your insightful answers.
|
|
|
|
|
Hi Leslie,
you're welcome.
On the for/if issue: when the if is the only statement inside the for loop (and the condition is invariant, as you stated), I will turn them inside out, as it makes more sense to me to have the slow-changing (or constant) thing first:
if (condition) {
    for(loop_control) { stuff }
} else {
    for(loop_control) { other_stuff }
}
I can't recollect having seen a compiler do that transformation for me; of course, if the condition is a build-time constant, the idle half of the code is thrown out by most compilers. Anyway, I am inclined not to rely on a compiler for the big decisions; I'll be happy if it does a good job at peephole optimizations.
If the if is not the only statement in the for loop, my decision will depend on how much code there is to duplicate; obviously I hate to duplicate code for lots of reasons, so I will consider that only if maximum performance is relevant.
Greetings
Luc Pattyn [Forum Guidelines] [My Articles]
this week's tips:
- make Visual Studio display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
If you are not familiar with it, you should look at the Intel compiler. They also have some high-performance primitives libraries available that may be of use - see the MKL[^] and IPP[^] - and some good books on optimization:
The Software Optimization Cookbook[^]
The Software Vectorization Handbook[^]
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
Leslie Sanford wrote: floating point seems faster
It will be - it's got a native, hardware implementation, whereas fixed point hasn't. In the best case (performance-wise) implementation of fixed point, additive operations should be transformed into single integer operations (unless you're checking for overflow, which'll add a whole lot more overhead, including branching). Multiplicative operations will require re-scaling of the output, which adds extra overhead as well.
I believe that modern CPUs do float operations in times comparable to the integer equivalents, so there's likely little or no performance hit from using floats.
I work with embedded control systems using relatively slow CPUs (ranging from 25MHz 680x0 to 150MHz PowerPC). We transitioned to using floating-point maths from fixed-point when we started using PowerPCs, because it performed as well (or better) than fixed point operations and also had a better range/precision trade-off than fixed-point.
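The rescaling overhead on fixed-point multiplies mentioned above can be seen in a minimal sketch (Python, using the common Q15 format purely as an example; overflow checks are deliberately omitted):

```python
Q = 15  # Q15: value = raw / 2**15, i.e. 15 fractional bits

def to_q15(x: float) -> int:
    return int(round(x * (1 << Q)))

def from_q15(raw: int) -> float:
    return raw / (1 << Q)

def q15_mul(a: int, b: int) -> int:
    # the raw product carries 2*Q fractional bits, so it must be
    # shifted back down to Q - the extra step a float multiply avoids
    return (a * b) >> Q

product = q15_mul(to_q15(0.75), to_q15(0.5))  # raw Q15 value for 0.375
```

Addition in this format is a plain integer add; only multiplication (and division) pays for the rescale.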
|
|
|
|