Have you tried debugging it? Single-step through the code and check that each line produces what you expect.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
Oh hey, thanks, that helped. I ended up finding out that I wrote:
int b = (int) y/19
when it was supposed to be "y/100".
I'm looking for some algorithm to help me plot a large amount of data.
The situation is like this: I have some experimental data points that I want to show in a window, and I want to be able to zoom anywhere in the graph. The data have been recorded for 1 to 2 hours at 10 kHz, so each graph would have 30-70 million points in it.
Of course plotting ALL of the points is way too slow (and quite pointless), so my first, a bit naive, approach to the problem was to plot a point every x, where x depends on the zoom level, eventually plotting every point at very high zoom levels. This works fine but it has a major problem: these plots, in fact, are not uniform; they have "spikes" that last a few milliseconds, and I'm interested in seeing these events at all zoom levels (so that I can select that area and zoom into it).
Any idea on how to get around this?
Thanks in advance
nico
You could try an algorithm that HP used to use in their Spectrum Analyzers. Take whatever range of data you want to plot, say 1 million points, and map this to, say, 1000 plot points. They would divide the x-axis up into 500 bins of 2000 points each, and in each bin they would determine the max and min value of the 2000 samples. They would then plot the 1000 points min1, max1, min2, max2, min3, max3, ... where {min1, max1} are the min and max values in the first bin, etc. If the number of bins you use is greater than the number of pixels across your display then you won't be able to see any difference between the sampled graph and the original one. So visually this looks identical to plotting all the data, but it will plot/update much, much faster. Of course if you zoom in significantly you have to recalculate the bins and the max/mins, otherwise you will see the sampling artifacts.
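Here's a minimal sketch of that min/max decimation in C (the function name and signature are my own invention, not HP's code):

#include <stddef.h>

/* Min/max decimation: map n samples onto 2*nbins plot points.
   Each bin contributes its min then its max, so even a spike only a
   few samples wide survives. Assumes nbins > 0 and n >= nbins; any
   leftover samples at the end are ignored in this sketch. */
void decimate_minmax(const double *data, size_t n,
                     size_t nbins, double *out /* 2*nbins values */)
{
    size_t per_bin = n / nbins;              /* samples per bin */
    for (size_t b = 0; b < nbins; b++) {
        const double *bin = data + b * per_bin;
        double lo = bin[0], hi = bin[0];
        for (size_t i = 1; i < per_bin; i++) {
            if (bin[i] < lo) lo = bin[i];
            if (bin[i] > hi) hi = bin[i];
        }
        out[2*b]     = lo;                   /* ... min1, max1, min2, max2 ... */
        out[2*b + 1] = hi;
    }
}

Because every bin reports both extremes, the millisecond spikes nico cares about stay visible at every zoom level; you just re-run the decimation over the zoomed range when the view changes.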
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
That sounds good! I'll try it, thanks!
nico
I implemented the solution you suggested and it seems to work really well!
I just have to fine-tune it a little bit for my particular application but it's a very good start!
Thanks a lot!
nico
You're welcome
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
I like that approach!
ROFLOLMFAO
I saw a PBS show about this very thing over a year ago. I've been wondering how much more advanced we would be right now if that knowledge had been passed down through the ages.
Unfortunately, we'll never know. But in this case, it wasn't religion's fault.
Is there an efficient algorithm to fill memory with an arbitrary pattern?
void memoryfill( void* target, char* pattern, size_t patlen, size_t numRepeats )
{
    if( 1 == patlen ) memset(target, *pattern, numRepeats);
    else ???
}
I came up with something that makes 2*log(numRepeats) calls to memcpy(), but figured this must be a standard thing to do, and there is probably a good algorithm (better than mine?) already out there. I couldn't find anything on CodeProject, or on Google.
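A doubling scheme along those lines might look like this (just a sketch, not necessarily the exact code; memoryfill_doubling is a made-up name):

#include <string.h>

/* Doubling fill: seed one copy of the pattern, then repeatedly memcpy
   the already-filled prefix onto the unfilled tail, doubling the filled
   region each pass (roughly log2(numRepeats) non-overlapping copies). */
void memoryfill_doubling(void *target, const char *pattern,
                         size_t patlen, size_t numRepeats)
{
    char *dst = (char *)target;
    size_t total = patlen * numRepeats;
    if (total == 0) return;
    memcpy(dst, pattern, patlen);              /* seed copy */
    size_t filled = patlen;
    while (filled < total) {
        size_t chunk = filled <= total - filled ? filled : total - filled;
        memcpy(dst + filled, dst, chunk);      /* src and dst never overlap */
        filled += chunk;
    }
}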
David
You can use destructive overlap to write your fill pattern as fast as your CPU can go. Here's the algorithm:
1. Copy one instance of the pattern to target.
2. Copy memory from target to (target + patlen) with length (numRepeats - 1) * patlen. (This is one function call.)
I believe this is the most efficient algorithm possible. The implementation of memcpy will probably be more efficient than anything you could write in a higher-level language. Also you're copying from addresses you just wrote to, so they'll already be in the cache. This lets the CPU avoid getting these bytes from (relatively) slow external memory chips. This algorithm should fly like the wind!
IF (a few critical things...) then I think you're right. I think your algorithm will work really, really well. I'm working on other things and don't really want to write a timing test, but I believe you're probably right about it being the fastest possible algorithm.
Now for the critical things:
IF memcpy copies from the end of the array instead of from the base, this algorithm won't work at all. Depending on how it computes the target (if copying from the end), it'll either copy garbage until it finally hits the first pattern, and then make one good copy of it, or it'll make one good copy first, then cause a GPF by attempting to overwrite memory to the left of the src. Here's a quote from "STANDARD C" by P.J. Plauger and Jim Brodie regarding memcpy: "The elements of the array can be accessed and stored in any order." It could even start in the middle of the array if that favors the target environment.
I suppose one could bracket the code with #if's, assuming one could absolutely determine how memcpy worked, and guarantee that it never changes its mind depending on any number of things.
The other alternative is to simply write the algorithm myself in C. This way I could guarantee copying from the base of the array instead of from the end. But, as you point out, memcpy is highly tuned for the machine architecture, and my "roll your own" isn't going to work as well.
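For what it's worth, such a roll-your-own forward fill might look like this (a sketch; the point is only that it copies strictly from low to high addresses, so the overlapping read is well defined, unlike with memcpy):

#include <stddef.h>

/* Forward fill: write one instance of the pattern, then let every
   subsequent byte copy the byte patlen positions earlier, which has
   already been written. Copy order is guaranteed low-to-high. */
void memoryfill_forward(void *target, const char *pattern,
                        size_t patlen, size_t numRepeats)
{
    char *dst = (char *)target;
    size_t total = patlen * numRepeats;
    for (size_t i = 0; i < patlen && i < total; i++)
        dst[i] = pattern[i];                   /* seed copy */
    for (size_t i = patlen; i < total; i++)
        dst[i] = dst[i - patlen];              /* overlap, but well defined */
}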
Thanks a lot anyway. I really appreciate the thoughts. Maybe your idea will come in handy in some of my future projects. It was a really good idea.
David
After thinking a bit more about it, I realized the stuff about the GPF is probably not correct. It would just copy garbage.
As I think thru things, I try to document them so I (or someone else) won't come back later on and try to improve on the code without thoroughly thinking things thru. Based on your suggestion, I've added the following comments to my memoryFill() function.
void memoryFill( void* target, char* pattern, size_t ptrnlen, size_t numRepeats )
{
    char* trgt = (char*)target;
    ...
    ...
}
David
Hello,
is there any known algorithm to check how different two files are?
I mean, to measure how big the difference between two files is?
regards
termal
Google for diff algorithms; there are a couple of articles on CP on this - search for "diff".
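To give a flavour of what those algorithms measure, here is a minimal sketch (my own, not from any of the CP articles) of the classic Levenshtein edit distance over raw bytes. Real diff tools use smarter algorithms (e.g. Myers'), but they compute a similar notion of difference:

#include <stdlib.h>

/* Levenshtein distance between two byte buffers: the minimum number of
   single-byte insertions, deletions, and substitutions to turn a into b.
   O(m*n) time with a single rolling row, so only practical for small
   files, but it gives a concrete "how different" number. */
size_t edit_distance(const unsigned char *a, size_t m,
                     const unsigned char *b, size_t n)
{
    size_t *row = malloc((n + 1) * sizeof *row);
    if (!row) return (size_t)-1;          /* allocation failure */
    for (size_t j = 0; j <= n; j++) row[j] = j;
    for (size_t i = 1; i <= m; i++) {
        size_t diag = row[0];             /* D[i-1][j-1] */
        row[0] = i;
        for (size_t j = 1; j <= n; j++) {
            size_t up = row[j], left = row[j - 1];
            size_t cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            size_t best = diag + cost;
            if (up + 1 < best)   best = up + 1;
            if (left + 1 < best) best = left + 1;
            diag = row[j];
            row[j] = best;
        }
    }
    size_t d = row[n];
    free(row);
    return d;
}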
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
Hello,
thanks very much for the answer!
regards
termal
You don't need an algorithm to check the difference in size between two files. You can just read each file one line at a time and count the bytes of each, or, if the files are small, get a byte array of the files and then compare the sizes of the arrays.
There are 10 types of people in the world, those who understand binary and those who don't.
smyers wrote: You don't need an algorithm to check the difference in size between two files. You can just read each file one line at a time and count the bytes of each, or, if the files are small, get a byte array of the files and then compare the sizes of the arrays.
Why would I need to do that? Any language should include calls to return the size of a file...
I googled it; you can use the FileInfo.Length property... sorry
There are 10 types of people in the world, those who understand binary and those who don't.
In C you can do it like this, to find the size of the file:
#include <direct.h>   /* _chdir */
#include <io.h>       /* _findfirst, _findnext, _findclose */
#include <string.h>   /* strcmp */

_chdir(path);                                      /* path of your required file's folder */
struct _finddata_t fileinfo;
intptr_t k = _findfirst("*.*", &fileinfo);
if (k != -1) {
    do {
        if (strcmp(fileinfo.name, filename) == 0)  /* filename of your file */
        {
            /* here, fileinfo.size holds the file size in bytes */
        }
    } while (_findnext(k, &fileinfo) == 0);        /* otherwise, try the next entry */
    _findclose(k);
}
Suggestion to the members:
prefix your main thread subject with [SOLVED] if it is solved.
chandu.
Check out www.groganenterpriseservices.com/textcompare. I use this facility to compare two MSINFO32 export files. Size is limited to 5k lines for web access. It calculates diffs using CRCs and runs like a bat. If you need the source, or have larger files and would like to use it, let me know. marty redondowa com
Been there, done that, forgot again.
Hello, I have considered doing the following project:
Imagine that you have simple graphics (like a schematic, a logo, etc.). I'd like to investigate some approaches to saving the image other than traditional bitmap/vector files.
It would use some kind of advanced algorithm to generate a set of graphical operations (draw line/shape/curve, fill shape, draw text somewhere) that would be drawn to some bitmap. The program would then evaluate how much the generated image looks like the original (like a fitness function in a GA); I mean, evaluate many solutions, find the best ones, and combine/mutate them in some way to produce better results in the next generation.
It would not be saving the image in the true sense of the word; it would be more like finding an algorithm for how to recreate it with the best precision using elementary graphical operations. Maybe it would be useless, maybe not.
My question is, *how* should I start? I read something on the topic of genetic algorithms, but most of the articles dealt with using binary strings (genes) for representing the operations (but wouldn't it be too long to encode many (tens, hundreds of) graphical operations with parameters?). I mean, if we had about a hundred operations (8-bit identifier) with an average of four 16-bit parameters, it would be about 100*8 + 100*4*16 = 7200 bits of information to combine, mutate and, eventually, evaluate.
I know it will be painfully slow to have some kind of fitness evaluation function that would need to draw an image from the "genes", but this would be a research project. I am just wondering if something like this is possible (I think it could be), and if so, could someone please point me in the right direction to start?
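To make the encoding concrete, here is a minimal sketch in C of how the genome and the fitness function might look (the op set, struct layout, and all names are hypothetical illustrations, matching the 100*(8 + 4*16) = 7200-bit figure above):

#include <stdint.h>
#include <stdlib.h>

enum { OP_LINE, OP_RECT, OP_ELLIPSE, OP_FILL, OP_TEXT };  /* assumed op set */

typedef struct {
    uint8_t  op;        /* 8-bit operation identifier */
    uint16_t param[4];  /* four 16-bit parameters, e.g. x0, y0, x1, y1 */
} Gene;                 /* 8 + 4*16 = 72 bits per gene */

#define GENOME_LEN 100  /* 100 genes = 7200 bits */

/* Point mutation: flip one random bit in one random parameter. */
void mutate(Gene genome[GENOME_LEN])
{
    Gene *g = &genome[rand() % GENOME_LEN];
    g->param[rand() % 4] ^= (uint16_t)(1u << (rand() % 16));
}

/* Fitness: sum of squared per-pixel differences between the image
   rendered from the genome and the original (lower is better).
   Rendering the genome to a bitmap is left out here; that is the
   expensive step the post worries about. */
uint64_t fitness(const uint8_t *rendered, const uint8_t *original, size_t npixels)
{
    uint64_t err = 0;
    for (size_t i = 0; i < npixels; i++) {
        int d = (int)rendered[i] - (int)original[i];
        err += (uint64_t)(d * d);
    }
    return err;
}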
thanks,
Juraj