|
Maybe the Equation Editor in MS Word, or as a graphics image if you use LaTeX or TeX...
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Paul Conrad wrote: Maybe the Equation Editor in MS Word, or as a graphics image if you use LaTeX or TeX...
The equation editor emits images that would need to be inserted into the article.
Is there any way of embedding LaTeX into the HTML code and having it automatically generate the images?
|
|
|
|
|
Hi, I guess there are lots of equation editors (MS Word has one) that can emit HTML code;
such code could be pasted into an article.
Luc Pattyn [Forum Guidelines] [My Articles]
this month's tips:
- use PRE tags to preserve formatting when showing multi-line code snippets
- before you ask a question here, search CodeProject, then Google
|
|
|
|
|
Luc Pattyn wrote: Hi, I guess there are lots of equation editors (MS Word has one) that can emit HTML code;
such code could be pasted into an article.
Thanks for the suggestion. Just tried it and it generates images that would need to be inserted into the article.
I was hoping to find a way to embed something like LaTeX into the article HTML and automatically generate the images.
|
|
|
|
|
|
Check the DoxyGenerator, which allows you to document your code with LaTeX entries.
It automatically generates the HTML files, with PNGs representing the equations.
JO
|
|
|
|
|
One BYTE variable can store 256 distinct values.
One short variable can store 65536 distinct values.
Can one BYTE or short datatype variable store a value between 0 and 999999?
Any intermediate variables or arrays may be used to help pack a single BYTE or short variable with values from 0 to 999999.
Thanks & Regards.
|
|
|
|
|
ccpptrain wrote: Can one BYTE or short datatype variable store a value between 0 and 999999 ?
Oh yes (just a matter of mapping), but NOT ANY VALUE between 0 and 999999.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
|
|
|
|
|
CPallini wrote: but NOT ANY VALUE between 0 and 999999.
I believe the proper term would be ALL values between 0 and 999999. Using a replacement map, I can map byte==0 to a value of any number of bits, but I can store at most 256 values in my map.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
El Corazon wrote: I believe the proper term would be ALL values between 0 and 999999
I believe the proper term is EVERY value between 0 and 999999.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
|
|
|
|
|
damn... got me... okay... I stand... err... sit corrected.
This reminds me of an old joke about mapping my college prof told: what is the size limit of the values you can store in one bit? No limit on the size, but you can only store two of them.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
A byte can store 256 of the million values and a short can store 65536 of the million directly, but if you need to store the entire range you're going to need a minimum of 20 bits. Unless your storage requirements are very tight relative to your hardware performance, bit packing isn't going to be worth the CPU overhead versus storing each value in a native 32-bit int.
If you're archiving the data or transmitting it in packets instead of streaming it in real time, you can use data compression and potentially get below 16 bits per value.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
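To make the bit-packing option concrete, here is a minimal C++ sketch of a 20-bit packed array (illustrative names; the bit-by-bit loops are for clarity, and real code would move whole words at a time):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Packs 20-bit values into a contiguous byte buffer (illustrative sketch).
class Packed20 {
public:
    explicit Packed20(std::size_t count) : bytes_((count * 20 + 7) / 8, 0) {}

    void set(std::size_t i, std::uint32_t v) {
        assert(v < (1u << 20));                  // value must fit in 20 bits
        std::size_t bit = i * 20;
        for (int b = 0; b < 20; ++b, ++bit) {
            std::size_t byte = bit / 8, off = bit % 8;
            if (v & (1u << b))
                bytes_[byte] |= static_cast<std::uint8_t>(1u << off);
            else
                bytes_[byte] &= static_cast<std::uint8_t>(~(1u << off));
        }
    }

    std::uint32_t get(std::size_t i) const {
        std::uint32_t v = 0;
        std::size_t bit = i * 20;
        for (int b = 0; b < 20; ++b, ++bit)
            if (bytes_[bit / 8] & (1u << (bit % 8)))
                v |= (1u << b);
        return v;
    }

private:
    std::vector<std::uint8_t> bytes_;
};
```

Note the per-element cost: every access touches up to four bytes and loops over 20 bits, which is exactly the CPU overhead being weighed against a plain 32-bit array above.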
|
|
|
|
|
Thanks all for correcting my question and for the suggestions.
I have a 2D array of size 1000*1000.
Each element of this 2D array holds a value between 0 and 999999, so I declared it as int Aint[1000][1000].
There are only 300 unique values in this array; all other elements are repeats.
This array takes 1000*1000*4 = 4000000 bytes.
So I am trying to construct an array int u[300] which will contain the unique values from Aint[1000][1000], and instead of int Aint[1000][1000] I will declare BYTE Abyte[1000][1000] as my 2D array.
Each element of Abyte should then hold an index from 0 to 299.
Previously:
int value = Aint[750][750]; // 999999
Now I will fetch the value in this manner:
Abyte[750][750] = 290;
u[290] = 999999;
int value = u[ Abyte[750][750] ]; // 999999
But the problem is that a BYTE element can only hold values between 0 and 255, and I need to store 290 in a single BYTE.
This approach reduces the memory requirement from 4000000 bytes ( sizeof(Aint) ) to 1001200 bytes ( sizeof(Abyte) + sizeof(u) ).
With this approach page faults can also be reduced.
Thanks & Regards.
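For comparison, here is a minimal C++ sketch of this lookup-table idea (illustrative names, not the poster's actual code) using a 16-bit index, which sidesteps the 256-value limit of a single BYTE at the cost of doubling the index array:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: a 1000x1000 grid stored as 16-bit indices into a table of
// unique values. Cells must be set() before they are get().
struct IndexedGrid {
    std::vector<std::uint16_t> idx;   // 1000*1000 indices, 2 bytes each
    std::vector<int> unique;          // up to 300 distinct values

    IndexedGrid() : idx(1000 * 1000, 0) {}

    // Returns the index of v in `unique`, adding it if unseen.
    std::uint16_t intern(int v) {
        for (std::size_t i = 0; i < unique.size(); ++i)
            if (unique[i] == v) return static_cast<std::uint16_t>(i);
        unique.push_back(v);
        return static_cast<std::uint16_t>(unique.size() - 1);
    }

    void set(int x, int y, int v) { idx[x * 1000 + y] = intern(v); }
    int  get(int x, int y) const  { return unique[idx[x * 1000 + y]]; }
};
```

With a 16-bit index the footprint is 1000*1000*2 + 300*4 = 2001200 bytes, half of the original 4000000; the linear search in intern is acceptable for only 300 unique values.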
|
|
|
|
|
You cannot store more than 256 unique values in the individual cells of a Byte array. You could reduce your memory footprint in two ways.
The first would be to compress the array in memory. The cost of that would be the loss of cheap random access: instead of O(1) access to an arbitrary value you'd have O(N), like a linked list. IIRC, with most modern compression algorithms, modifying a value requires recompressing the array from the insertion point forward, making an insert even more expensive.
The second would be to use bit packing to simulate a 9-bit int. This remains O(1), but the constant factor is much higher.
You'd probably need to implement both approaches, try multiple compression algorithms for the first (compressed size vs. speed to partially decompress to get an arbitrary value), and do performance testing to see whether either is actually faster than spending an extra megabyte of memory on a 16-bit int.
Unless you're working on an embedded system with very tight memory constraints, however, I'm doubtful that either will actually be faster if you're doing any nontrivial examination/processing of the data.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
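As a sketch of the compression option, run-length encoding shows the trade-off plainly: storage shrinks when values repeat in runs, but reading an arbitrary index degrades to a linear scan over the runs (illustrative names, not a production compressor):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// In-memory run-length encoding sketch. Appending is cheap; random
// access is O(number of runs), illustrating the lost O(1) access.
struct RleArray {
    struct Run { std::uint32_t count; int value; };
    std::vector<Run> runs;

    void append(int value) {
        if (!runs.empty() && runs.back().value == value)
            ++runs.back().count;         // extend the current run
        else
            runs.push_back(Run{1, value});
    }

    int at(std::size_t index) const {    // linear scan over the runs
        for (const Run& r : runs) {
            if (index < r.count) return r.value;
            index -= r.count;
        }
        assert(false && "index out of range");
        return 0;
    }
};
```

Modifying a value mid-array would mean splitting a run and shifting everything after it, which is the "recompress from the insertion point" cost mentioned above.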
|
|
|
|
|
Thanks for the pointers...
|
|
|
|
|
Hello,
Your programming problem is interesting. As someone already pointed out, what you actually need is 9 bits of precision to store index values ranging from 0 to 299.
A simple approach is packing these ninth bits into another byte array; this requires an additional 1000-by-125 array (125 because we pack eight bits per byte: 1000 / 8 = 125).
Here is a simple class I wrote that does precisely that. Since your primary concern is reducing memory consumption, instead of jagged arrays ([][]) as you suggest, I'm proposing multidimensional arrays [,], which consume less memory.
using System;
using System.Diagnostics;

namespace stuff {
    public class MappedArray {
        private byte[,] _aByte = new byte[1000, 1000]; // low 8 bits of each index
        private byte[,] _msBit = new byte[1000, 125];  // ninth bits, packed 8 per byte
        private int [] _U = new int[300];              // table of unique values
        private int _uniqueCounter = 0;

        public MappedArray() {
        }

        // Returns the index of aValue in _U, adding it if not seen before.
        protected int GetMappedIdx(int aValue) {
            for( int i = 0; i < _uniqueCounter; i++ )
                if( _U[i] == aValue )
                    return i;
            if( _uniqueCounter >= 300 )
                throw new Exception("more than 300 unique elements");
            int idx = _uniqueCounter++;
            _U[idx] = aValue;
            return idx;
        }

        public void Set(int x, int y, int aValue) {
            Debug.Assert(x < 1000 && y < 1000);
            int mappedID = GetMappedIdx(aValue);
            _aByte[x,y] = (byte)(mappedID & 0xff);     // store the low 8 bits
            int packedIdx = y / 8;
            byte msBits = _msBit[x, packedIdx];
            int bitpos = y % 8;
            if( mappedID >= 256 ) {                    // set or clear the ninth bit
                msBits |= (byte)( 1 << bitpos);
            } else {
                msBits &= (byte)~( 1 << bitpos);
            }
            _msBit[x, packedIdx] = msBits;
        }

        public int Get(int x, int y) {
            Debug.Assert(x < 1000 && y < 1000);
            byte lsBits = _aByte[x,y];
            int packedIdx = y / 8;
            byte msBits = _msBit[x, packedIdx];
            int bitpos = y % 8;
            byte bitValue = (byte)(msBits & (1 << bitpos));
            int idx = lsBits;
            if( bitValue != 0 )
                idx |= 0x100;                          // restore the ninth bit
            return _U[idx];
        }
    }
}
This is the sample code I used to test that the class works correctly. I simply fill the array (Set method) with values drawn from a table of 300 random numbers between 0 and 999999, and verify that the retrieved values (Get method) are the same.
private void button1_Click(object sender, System.EventArgs e) {
    MappedArray ma = new MappedArray();
    int[] rndArr = new int[300];
    Random rnd = new Random();
    for( int i = 0; i < 300; i++ )
        rndArr[i] = rnd.Next(999999);
    for( int i = 0; i < 1000; i++ )
        for( int j = 0; j < 1000; j++ ) {
            int val = rndArr[rnd.Next(300)];
            ma.Set(i, j, val);
            if( ma.Get(i,j) != val )
                throw new Exception("oops, something's wrong with the mapping logic");
        }
}
Hope this helps!
Gerardo
|
|
|
|
|
I'm working on a project that uses a fair number of trigonometry functions, and I was wondering how slow these functions are compared to more ordinary operations like addition, subtraction, multiplication, and division. (Obviously they're slower; I was wondering by how much, and which is the slowest.) In particular I'm interested in square roots, sin, arcsin, cos, and arccos.
Also, does anyone know of a link that explains how the computer/programming language calculates these functions?
thanks,
Mike
|
|
|
|
|
MikeMarq wrote: I'm working on a project which needs to use a fair number of trigonometry functions and I was wondering how slow these functions are
The first-order answer is: they are all equally slow, anywhere between 10 and 100 times slower than multiplication.
If it is really important to you, perform some simple experiments to see how they behave on YOUR system with YOUR software environment.
Some Google results[^].
Luc Pattyn [Forum Guidelines] [My Articles]
this month's tips:
- use PRE tags to preserve formatting when showing multi-line code snippets
- before you ask a question here, search CodeProject, then Google
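Such an experiment can be as simple as timing tight loops, e.g. in C++ (a rough sketch: the loop count and operations are arbitrary, and the results depend on your CPU, compiler, and optimization flags):

```cpp
#include <chrono>
#include <cmath>

// Times n calls of f in a tight loop and returns elapsed seconds.
// The volatile sink keeps the compiler from optimizing the work away.
template <typename F>
double time_loop(F f, int n = 1000000) {
    auto t0 = std::chrono::steady_clock::now();
    volatile double sink = 0.0;
    for (int i = 0; i < n; ++i)
        sink = sink + f(i * 0.001);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Example comparisons: multiplication vs. sin vs. sqrt.
inline double bench_mul()  { return time_loop([](double x) { return x * 1.0001; }); }
inline double bench_sin()  { return time_loop([](double x) { return std::sin(x); }); }
inline double bench_sqrt() { return time_loop([](double x) { return std::sqrt(x + 1.0); }); }
```

Dividing the sin or sqrt time by the multiplication time gives a rough slowdown factor on your own machine; just be aware that a microbenchmark like this also measures loop overhead.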
|
|
|
|
|
Not sure from the .NET standpoint, but the trig instructions on the CPU can consume quite a few clock cycles ...
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Although the CRT is free to wrap, or even ignore, the CPU's floating-point instructions when calculating sqrt, sin, cos, etc., the following should give a rough idea of the relative costs.
http://www.singlix.com/trdos/pentium.txt[^]
So, on a pentium, the following asm instructions take the following # of clock cycles:
FADD, FSUB, FMUL : 3
FDIV : 39
FSQRT : 70
FSIN, FCOS, FSINCOS, FPTAN : ~17-140
So, from the above we can make the following general recommendations:
- If you need to divide several values by the same amount, multiply them by the reciprocal instead.
- sqrt is expensive; avoid it whenever possible.
- If you need both sin and cos, use sincos: you get one for free.
- Based on the low end of the range for the trig functions, it seems _likely_ (not certain) that they implement a lookup table for some values, so adding your own on top will likely give little, if any, improvement.
...cmk
The idea that I can be presented with a problem, set out to logically solve it with the tools at hand, and wind up with a program that could not be legally used because someone else followed the same logical steps some years ago and filed for a patent on it is horrifying.
- John Carmack
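The first recommendation looks like this in practice (a small illustrative helper, not from any particular library):

```cpp
#include <cstddef>

// Divides every element by the same divisor using one FDIV and n FMULs,
// instead of n FDIVs: compute the reciprocal once, then multiply.
void scale_all(double* values, std::size_t n, double divisor) {
    double inv = 1.0 / divisor;      // the single division
    for (std::size_t i = 0; i < n; ++i)
        values[i] *= inv;            // cheap multiplications from here on
}
```

One caveat: because 1/d is rounded before the multiplications, results can differ from true division in the last bit unless the divisor is a power of two.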
|
|
|
|
|
cmk wrote: FSIN, FCOS, FSINCOS, FPTAN : ~17-140
At least those weren't the specs I saw when I first got my 80386 about 16 years ago (it was really hideous) :->
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Remember, the 386 had to do each flop in software as well because it didn't have an FPU, so you were taking a double hit there.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
My 386 had a math co-processor with it. Made a big difference.
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
You only mentioned a 386, not a 387.
--
If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
|
|
|
|
|
My bad, I had a 386DX-25MHz to be exact, with 4 megs of RAM and an 80MB hard drive. Thought I was pretty slick back then :->
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|