|
damn... got me... okay... I stand... err... sit corrected.
This reminds me of an old joke about mapping my college prof told: what is the size limit of values you can store in one bit? No limit on size, but you can only store two of them.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
A byte can store 256 of the million values and a short can store 65,536 of them directly, but if you need to store the entire range you're going to need a minimum of 20 bits, and unless your storage requirements are very tight relative to your hardware performance, bit packing isn't going to be worth the CPU overhead versus storing in a native 32-bit value.
If you're archiving the data or transmitting it in packets instead of streaming it in realtime, you can use data compression and potentially get below 16 bits per value.
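For concreteness, bit packing along those lines might look like the minimal sketch below (the class name and layout are my own, not from any library): fixed-width 20-bit values stored back to back in a uint array, so each value costs 20 bits instead of 32 at the price of shift/mask work on every access.

```csharp
using System;

// Sketch: 20-bit values packed back to back in a uint array.
public class Packed20
{
    const int Bits = 20;
    const uint Mask = 0xFFFFF;              // low 20 bits
    readonly uint[] _words;

    public Packed20(int count)
    {
        _words = new uint[((long)count * Bits + 31) / 32];
    }

    public void Set(int i, uint value)      // value must be < 2^20
    {
        long bit = (long)i * Bits;
        int w = (int)(bit / 32), off = (int)(bit % 32);
        _words[w] = (_words[w] & ~(Mask << off)) | (value << off);
        if (off > 32 - Bits)                // value spills into the next word
            _words[w + 1] = (_words[w + 1] & ~(Mask >> (32 - off)))
                          | (value >> (32 - off));
    }

    public uint Get(int i)
    {
        long bit = (long)i * Bits;
        int w = (int)(bit / 32), off = (int)(bit % 32);
        uint v = _words[w] >> off;
        if (off > 32 - Bits)                // reassemble the spilled high bits
            v |= _words[w + 1] << (32 - off);
        return v & Mask;
    }
}
```

For a million values in the 0..999999 range this is 2.5 MB instead of 4 MB; whether that beats a plain uint[] depends entirely on how hot the access path is.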
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
Thanks all for correcting my question and giving suggestions.
I have a 2D array of size, say, 1000*1000.
Each element of this 2D array has a value between 0 and 999999,
so I decided to declare the array as int Aint[1000][1000].
There are only 300 unique elements in this array; all other elements are repeats,
i.e. this array has repeated values with only 300 unique values.
This array takes 1000*1000*4 = 4000000 bytes.
So I am trying to construct an array
int u[300] which will contain the unique values from Aint[1000][1000],
and instead of int Aint[1000][1000] I decided to make BYTE Abyte[1000][1000] my 2D array.
Each element of this Abyte 2D array should then contain a value from 0 to 299.
Previously:
int value = Aint[750][750] = 999999;
Now I will fetch the value in this manner:
Abyte[750][750] = 290;
u[290] = 999999;
int value = u[ Abyte[750][750] ] = 999999;
But the problem is that Abyte's elements can only hold values between 0 and 255,
and I need to store 290 in a single BYTE.
This approach reduces the memory requirement from 4000000 bytes ( sizeof(Aint) )
to
1001200 bytes ( sizeof(Abyte) + sizeof(u) ).
With this approach, page faults can also be reduced.
Thanks & Regards.
|
|
|
|
|
You cannot store more than 256 unique values in the individual cells of a byte array. You could reduce your memory footprint in two ways.
The first would be to compress the array in memory. The cost of that would be the loss of easy random access: instead of O(1) access to an arbitrary value you'd have O(n), like a linked list. IIRC, with most modern compression algorithms, modifying a value requires recompressing the array from the insertion point forward, making an insert even more expensive.
The second would be to use bit packing to simulate a 9-bit int. Access would remain O(1), but the constant factor would be much higher.
You'd probably need to implement both approaches, try multiple compression algorithms for the first (compressed size vs. speed to partially decompress to reach an arbitrary value), and do performance testing to see whether either is actually faster than just spending an extra megabyte of memory (using a 16-bit int).
Unless you're working on an embedded system with very tight memory constraints, however, I'm doubtful that either will actually be faster if you're doing any nontrivial examination/processing of the data.
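For comparison, the 16-bit route from that last paragraph is almost trivial. A minimal sketch (names are hypothetical), trading roughly 1 MB of extra memory for zero bit-twiddling:

```csharp
using System;
using System.Collections.Generic;

// Sketch: map each distinct value to a 16-bit index.
// Memory: 2,000,000 bytes for the indices + 1,200 for the unique values.
public class IndexedArray
{
    readonly ushort[,] _idx = new ushort[1000, 1000];
    readonly int[] _u = new int[300];                     // unique values
    readonly Dictionary<int, ushort> _map = new Dictionary<int, ushort>();

    public void Set(int x, int y, int value)
    {
        ushort id;
        if (!_map.TryGetValue(value, out id))             // assign ids on first sight
        {
            if (_map.Count >= 300)
                throw new InvalidOperationException("more than 300 unique values");
            id = (ushort)_map.Count;
            _u[id] = value;
            _map[value] = id;
        }
        _idx[x, y] = id;
    }

    public int Get(int x, int y)
    {
        return _u[_idx[x, y]];                            // O(1) lookup
    }
}
```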
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
Thanks for the pointers...
|
|
|
|
|
Hello,
Your programming problem is interesting. As someone already pointed out, what you actually need is 9 bits of precision to store values ranging from 0 to 299.
A simple approach is packing these 9th bits into another byte array; this requires an additional 1000-by-125 array (125 because we pack eight of these bits into a single byte: 1000 / 8 = 125).
Here is a simple class I wrote that does precisely that. Since your primary concern is reducing memory consumption, instead of the jagged arrays you suggest ([][]) I'm proposing multidimensional arrays [,]; multidimensional arrays consume less memory than jagged arrays.
using System;
using System.Diagnostics;

namespace stuff {
    public class MappedArray {
        private byte[,] _aByte = new byte[1000, 1000];  // low 8 bits of each index
        private byte[,] _msBit = new byte[1000, 125];   // 9th bits, packed 8 per byte
        private int[] _U = new int[300];                // unique values
        private int _uniqueCounter = 0;

        public MappedArray() {
        }

        protected int GetMappedIdx(int aValue) {
            for( int i = 0; i < _uniqueCounter; i++ )
                if( _U[i] == aValue )
                    return i;
            if( _uniqueCounter >= 300 )
                throw new Exception("more than 300 unique elements");
            int idx = _uniqueCounter++;
            _U[idx] = aValue;
            return idx;
        }

        public void Set(int x, int y, int aValue) {
            Debug.Assert(x < 1000 && y < 1000);
            int mappedID = GetMappedIdx(aValue);
            _aByte[x, y] = (byte)(mappedID & 0xff);
            int packedIdx = y / 8;
            byte msBits = _msBit[x, packedIdx];
            int bitpos = y % 8;
            if( mappedID >= 256 ) {
                msBits |= (byte)(1 << bitpos);
            } else {
                msBits &= (byte)~(1 << bitpos);
            }
            _msBit[x, packedIdx] = msBits;
        }

        public int Get(int x, int y) {
            Debug.Assert(x < 1000 && y < 1000);
            byte lsBits = _aByte[x, y];
            int packedIdx = y / 8;
            byte msBits = _msBit[x, packedIdx];
            int bitpos = y % 8;
            byte bitValue = (byte)(msBits & (1 << bitpos));
            int idx = lsBits;
            if( bitValue != 0 )
                idx |= 0x100;
            return _U[idx];
        }
    }
}
This is the sample code I used to test that the class works correctly. I simply fill the array (Set method) with values drawn from 300 random values ranging from 0 to 999999 and test that each retrieved value (Get method) is the same.
private void button1_Click(object sender, System.EventArgs e) {
    MappedArray ma = new MappedArray();
    int[] rndArr = new int[300];
    Random rnd = new Random();
    for( int i = 0; i < 300; i++ )
        rndArr[i] = rnd.Next(999999);
    for( int i = 0; i < 1000; i++ )
        for( int j = 0; j < 1000; j++ ) {
            int val = rndArr[rnd.Next(300)];
            ma.Set(i, j, val);
            if( ma.Get(i, j) != val )
                throw new Exception("oops, something's wrong with the mapping logic");
        }
}
Hope this helps!
Gerardo
|
|
|
|
|
I'm working on a project which uses a fair number of trigonometry functions, and I was wondering how slow these functions are compared to more normal operations like addition, subtraction, multiplication and division (obviously they're slower, but I was wondering by how much, and which is the slowest). In particular I'm interested in square roots, sin, arcsin, cos and arccos.
Also, does anyone know of a link that explains how the computer/programming language calculates these functions?
thanks,
Mike
|
|
|
|
|
MikeMarq wrote: I'm working on a project which needs to use a fair number of trigonometry functions and I was wondering how slow these functions are
The first order response is: they are all equally slow, anywhere between 10 and 100 times
slower than multiplication.
If it is really important to you, perform some simple experiments to see how they behave
on YOUR system with YOUR software environment.
some Google result[^].
Luc Pattyn [Forum Guidelines] [My Articles]
this month's tips:
- use PRE tags to preserve formatting when showing multi-line code snippets
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
Not sure from the .NET standpoint, but the trig instructions on the CPU can consume quite a few clock cycles ...
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Although the CRT is free to wrap, or even ignore, the CPU float functions when calculating sqrt, sin, cos, ... the following should give a rough idea of relative costs.
http://www.singlix.com/trdos/pentium.txt[^]
So, on a pentium, the following asm instructions take the following # of clock cycles:
FADD, FSUB, FMUL : 3
FDIV : 39
FSQRT : 70
FSIN, FCOS, FSINCOS, FPTAN : ~17-140
So, from the above we can make the following general recommendations:
- If you need to divide several values by the same amount, multiply them by the reciprocal instead.
- sqrt is bad; try to avoid it whenever possible.
- If you need both sin and cos, use sincos: you get one for free.
- Based on the low end of the range for the trig functions it seems _likely_ (not certain) that they implement a lookup table for some values, so adding your own on top will likely give little, if any, improvement.
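The first recommendation in practice, sketched below (the helper name is mine): one expensive divide up front, then a cheap multiply per element. Note the products can differ from true division in the last bit or so of precision.

```csharp
// One FDIV up front, then only an FMUL per element.
static void DivideAll(double[] values, double divisor)
{
    double inv = 1.0 / divisor;              // single expensive divide
    for (int i = 0; i < values.Length; i++)
        values[i] *= inv;                     // cheap multiply per element
}
```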
...cmk
The idea that I can be presented with a problem, set out to logically solve it with the tools at hand, and wind up with a program that could not be legally used because someone else followed the same logical steps some years ago and filed for a patent on it is horrifying.
- John Carmack
|
|
|
|
|
cmk wrote: FSIN, FCOS, FSINCOS, FPTAN : ~17-140
At least those weren't the specs I saw when I first got my 80386 about 16 years ago (it was really hideous) :->
|
|
|
|
|
Remember the 386 also had to do each flop in software, because it didn't have an FPU, so you were taking a double hit there.
|
|
|
|
|
My 386 had a math co-processor with it. Made a big difference.
|
|
|
|
|
You only mentioned a 386, not a 387.
|
|
|
|
|
My bad, I had a 386DX-25MHz to be exact, with 4 megs of RAM and an 80 meg hard drive. Thought I was pretty slick back then :->
|
|
|
|
|
I wish I could find it right now...
For each computer I got (from 8088 up until a Pentium-Pro) I used to update a table where I showed the speed of all the trig functions along with mul and div. It was quite amusing to see how they sped up through the years. If I find it in the next couple of days, I'll post it up here....
'droid
|
|
|
|
|
Thanks everybody for the help. Wow, it's surprising that division is that much slower than multiplication and that sqrt is only about double the cost of division. Interesting.
|
|
|
|
|
Here are some interesting benchmarks of the Intel vector maths library - they show what can be achieved if the code can be vectorized: VML Performance[^]
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
Hi Mike,
the other replies gave an impression of what you might be able to get, not of what you
will actually get. The only way to know is to measure it! And what you get may well vary by
CPU type, and application characteristics (float/double, integer/float balance, ...).
Luc Pattyn [Forum Guidelines] [My Articles]
|
|
|
|
|
I'm surprised no one mentioned this, but you can precompute the trig functions and build a lookup table. The lookup is faster than the computation.
There's a tradeoff between the size of your table and precision.
For values that fall between two lookup-table entries you can use a weighted sum (linear interpolation) to improve precision.
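A sketch of that idea for sin (the table size and names are my choices, and the precision/size tradeoff is tunable via N): precompute one period, then each call is an index plus a weighted sum of the two neighbouring entries.

```csharp
using System;

// Sketch: sine lookup table with linear interpolation between entries.
public class SinTable
{
    const int N = 4096;                              // bigger table = more precision
    static readonly double[] _tab = new double[N + 1];

    static SinTable()
    {
        for (int i = 0; i <= N; i++)                 // one full period + a guard entry
            _tab[i] = Math.Sin(i * 2.0 * Math.PI / N);
    }

    public static double Sin(double x)
    {
        double t = x * (N / (2.0 * Math.PI));        // radians -> table units
        t -= Math.Floor(t / N) * N;                  // wrap into [0, N)
        if (t >= N) t -= N;                          // guard against rounding up to N
        int i = (int)t;
        double frac = t - i;                         // the weighted sum
        return _tab[i] + (_tab[i + 1] - _tab[i]) * frac;
    }
}
```

With N = 4096 the interpolation error is on the order of 1e-7, which is plenty for graphics but not for double-precision numerics.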
|
|
|
|
|
You can use an approximate calculation instead, e.g. the series expansion sin(x) = x - x^3/3! + x^5/5! - ...
This works for x near 0 !!!
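A sketch of that series approach (the helper name is hypothetical), computing each term from the previous one so no explicit factorials are needed:

```csharp
using System;

// sin(x) = x - x^3/3! + x^5/5! - ... ; each term is derived from the last.
static double SinSeries(double x, int terms)
{
    double term = x, sum = x;
    for (int n = 1; n < terms; n++)
    {
        term *= -(x * x) / ((2 * n) * (2 * n + 1)); // next odd-power term
        sum += term;
    }
    return sum;
}
```

As noted above, convergence is fast only near 0; for larger arguments you'd range-reduce x into [-pi, pi] first.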
|
|
|
|
|
Hey, can anyone give me an algorithm for comparing two pictures?
|
|
|
|
|
using System.Drawing;

public static bool ImagesEqual(Bitmap image1, Bitmap image2) {
    if (image1.Width != image2.Width) return false;
    if (image1.Height != image2.Height) return false;
    // if you keep precomputed checksums (CRCs), compare them here for a cheap early out
    for (int y = 0; y < image1.Height; y++)
        for (int x = 0; x < image1.Width; x++)
            if (image1.GetPixel(x, y) != image2.GetPixel(x, y)) return false;
    return true;
}
Luc Pattyn [Forum Guidelines] [My Articles]
|
|
|
|
|
I would use XOR (blend with XOR; I think it's possible with GDI images ... I used it some time ago to overlay two bitmaps to see where there is a difference).
|
|
|
|
|
Hi,
it depends on what you'd like to match. If the pictures need to be exactly binary identical, you could just use an MD5 checksum or something like that.
If you want to search for "similar" pictures, e.g. showing the same thing but scaled (an image resized for email, etc.), then you could use a co-occurrence matrix over the colors in the image.
If you also want to detect a change in contrast, then you need an even more elaborate algorithm, maybe computing a color histogram and then doing a cross-correlation. But these are only ideas...
Regards,
Tobias
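For the exact-match case, a sketch using the framework's MD5 (the helper name is mine). Hashing pays off mainly when you store the digests and compare one image against many; for two in-memory buffers a direct byte compare is simpler and faster.

```csharp
using System;
using System.Security.Cryptography;

// Exact (bit-identical) comparison via MD5 digests of the raw bytes.
static bool FilesIdentical(byte[] a, byte[] b)
{
    using (MD5 md5 = MD5.Create())
    {
        byte[] ha = md5.ComputeHash(a);
        byte[] hb = md5.ComputeHash(b);
        for (int i = 0; i < ha.Length; i++)   // digests are fixed 16 bytes
            if (ha[i] != hb[i]) return false;
        return true;
    }
}
```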
|
|
|
|