The Grand Negus wrote: Besides, formulaic languages reach their "limits" very quickly (no pun intended). Consider, for example, this natural language description of an object:
a two-inch aluminum cube with a quarter-inch spherical void at its center
What's the formula for that? And for the other zillion things that can be easily described in half a sentence but that nevertheless defy mathematical description?
BS! Do you want to retract this example before or after I direct your attention to NURBS? A two-inch cube with a cavity in the middle CAN be represented mathematically. Or did you forget about the existence of Maya, AutoCAD, Rhino3D, [many more]??
Dave Kreskowiak
Microsoft MVP - Visual Basic
|
xcom2001 wrote: as a teaching aide to help me understand how this works.
I will add some background, since ... other ... discussions led this one astray. Pseudo code is deliberately not meant to compile. Think of it as programmer shorthand.
What I posted was close to Pascal in structure, but it need not be; you will also find C-like examples of pseudo code.
program Celsius To Fahrenheit (input: double celsius)
begin
// Convert Celsius to Fahrenheit.
double fahrenheit = (celsius * 9 / 5) + 32
return fahrenheit;
end
or purely algorithmic
Celsius to Fahrenheit
fahrenheit = (celsius*9/5)+32
return fahrenheit
Pseudo code originated as an alternative to flowcharts, which for Fortran programming could become very long and tedious drawings. The goal was to design your code, not write it "yet", so it should A) be shorter than your actual code and B) be easy to write, without any formal rules, though consistent in whatever rules it uses. Back in the Fortran era, language and column rules (yes, good old punch-card limits) made full program writing difficult, so pseudo code was a way of exchanging and perfecting ideas.
There is a REAL reason behind this. As you are designing an algorithm you want to be able to shape it and change it rapidly, yet also be able to understand it. If it follows any persistent "syntax" it gets bogged down in rules that make it wordy and/or impossible to maintain, again like flowcharts. It should be consistent in what few rules it does use, for ease of peer exchange.
The goal is not to finish your code, period. Not with pseudo code, at least. Pseudo code, UML, and flowcharts are what you bring to design reviews. Obviously, if you arrive with a finished product and you made poor assumptions in your design, you've just wasted everyone's time by trying to short-cut the process. So pseudo code is fast, friendly, and "similar" to whatever language you are using, with absolutely the most minimal rules.
In this way a design review can examine the pseudo code: you can show what you intend to do without wasting much time doing so, and you can change that code rapidly from input during the review, or shortly thereafter.
For my design reviews I use a combination of pseudo code, UML flow diagrams, and mind maps. Given the original purpose, namely the speed with which to A) design code, B) present your design, and C) exchange ideas with peers in a standardized way, this combination works well. In some cases the best solution really IS a diagram, especially for data flow. Mind maps allow rapid changes in hierarchical patterns, especially live during a design review; I have remapped live during several, to the pleasant surprise of all. And pseudo code does well for structured organization of the algorithms themselves.
The primary reason pseudo code takes on elements of a compiled language is habit. If you program in Fortran, your pseudo code will resemble Fortran in at least some ways. If you are writing C, it will resemble C, etc.
HOWEVER, back to the original purpose of pseudo code: if you have to think about the syntax, no matter what language, then you have too many rules. If the pseudo code is longer than the actual resultant program, then either your code is too concise or your pseudo code is too wordy.
It must be A) fast and B) easy. It is still supposed to be algorithmic coding "shorthand".
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
Okay! I give up! I can't figure this one out for the life of me. Please help.
I have a program that needs to convert two numbers to a single number and somehow still be able to figure out what the original two numbers were that the new number came from. The catch is the single number has to be the same byte size as the original two numbers. 127 and 65 becoming ??, for example. I thought of just changing it to 127065, but that doesn't work. I also thought of using a seeded rand call somehow, but I'm not sure how.
If anyone has any ideas, or some sort of explanation as to why it actually is impossible, please let me know. I've been going mad over this for weeks.
The closest I've come is parsing the two numbers into a string ("127065"), converting it back to a long, and dividing it by an arbitrary number that brings it down to byte size. Trouble with that is, it returns a decimal value which is stored in 4 bytes, not one.
So yeah. I'm going mental. Please help, if you can.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
You are about to invent the ultimate compression algorithm:
if you can combine two numbers into one in a reversible way, then
nothing prevents you from doing that again with a third number and your first result,
and you can keep adding numbers to it.
So you can put all the information in the world into a single number. Wow. We don't
need those huge RAM and disk memories any more.
Luc Pattyn
|
So for the sake of all mankind, and for my project, can it be done?
I'm thinking if it was possible, someone smarter than me would have thought of this already.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Kevnar wrote: The catch is the single number has to be the same byte size as the original two numbers.
A given number of bits can only store so much information. Eight bits, for example, can be arranged in only 256 different configurations. So if your original numbers are eight bit numbers - with a range of 256 different values each - it is impossible to store two of them in the same eight bits.
|
I'm not talking electronically; I'm talking programmatically, algorithmically. Is there a way to convert two values into one, so that the original two can be extracted again, perhaps with some sort of encryption key?
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Kevnar wrote: I'm not talking electronically; I'm talking programmatically, algorithmically. Is there a way to convert two values into one, so that the original two can be extracted again, perhaps with some sort of encryption key?
I'm not talking electronically, necessarily, either. When I say "bit" I mean "piece of information" - that which distinguishes one thing from another. So if one thing requires, say, eight bits to "be itself", and another thing also requires eight bits to "be itself", the minimum number of bits that can describe both things simultaneously is sixteen.
The reason compression routines can apparently store more bits of information than they use is that they take advantage of the fact that in most pictures, files, etc, not all of the available combinations are used. For example, if you restricted your first number to 16 different values, and your second to 16 values as well, you could "compress" them into one eight-bit byte - but only by limiting the total number of combinations to 256 (16 times 16).
|
If you have two sets of numbers of at most n digits each, then you have 10^(2n) combinations. That cannot map uniquely onto 10^n numbers, i.e., one n-digit number. Only if you can eliminate all but the square root of those combinations will you have a unique map.
"Marge, don't discourage the boy! Weasling out of things is important to learn. It's what separates us from the animals! Except the weasel." - Homer Simpson
|
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Assume you have two three-digit numbers: [0-9][0-9][0-9] each. Then there are 10^3 combinations for each, including 000. Thus, there are 10^6 combinations for both, which you're trying to map back into one unique 3-digit number which can only have 10^3 combinations. In mathematical terms, you're trying to map n^2 numbers to n numbers uniquely, which is impossible.
There are II kinds of people in the world, those who understand binary and those who understand Roman numerals.
|
Math guys will cite the pigeonhole principle here. Since I'm a math guy, I'll do it.
It says that if you have n pigeons and m pigeonholes, with n > m, and you put each pigeon into a hole, you must end up with at least one pigeonhole that has more than one pigeon.
Let's apply this to your problem. You want to add two numbers in the range [0,255], getting a result in the range [0,255], and have the operation be reversible. Each "pigeon" will be a pair of numbers. Each "pigeonhole" will be a number in [0,255] representing their sum. Your goal is achievable only if we can put all possible pairs into the pigeonholes and end up with only one pair per pigeonhole.
Let's assume it is possible and work towards a contradiction.
To restrict the sums to [0,255] let's consider just the inputs in [0,127]. The number of possible pairs is C(128,2)+128 = 8256. C(128,2) is the number of ways to choose two different numbers from [0,127], and 128 is the number of ways you can choose the same number twice from [0,127].
Since 8256 > 256, we know from the pigeonhole principle that there is at least one number in [0,255] that can be the result of more than one sum of numbers in [0,127]. This contradicts our assumption, so your goal is not achievable with the input set of [0,127].
Since the full input set of [0,255] yields more pigeons, we can conclude your goal is not possible over that set either.
|
This is all true in a sense, but there's always a way to work around it, if we're creative enough. Look at homonyms in the English language: one word can have any number of meanings, and our brain just sorts them by context. All we need is an agile enough algorithm to do the same.
Say we come to the number 63, which on one iteration expanded to the numbers x and y. Then, by altering the context on the next iteration, it could produce the numbers b and r, even though it's the same number. This is why I'm thinking a seeded rand operation could make it work... somehow. I just don't know how.
We just have to think outside the pigeonhole.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Are you disputing my proof or just ignoring logic altogether?
|
Both maybe. I don't know to be honest. Mostly I'm just thinking out loud.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Do you have limits on what the input values are? Your example mentions "127" and "65". If the inputs are bytes and the input values are restricted to 15 or less (four bits each), then new value = (value1 << 4) | value2
But I'm guessing it's not that easy
cheers,
Chris Maunder
CodeProject.com : C++ MVP
|
The values are all bytes... 0 - 255. Both in and out. I believe the 127 thing was just an example someone used.
The general consensus so far is that it can't be done. I'm of the opinion that you can do anything if you're creative enough. I'm open to being wrong of course.
The main argument against it so far, if I'm understanding it, is that you can't map 256 x 256 combinations onto 256 destination points. I'm thinking it can be done with a seeded rand function somewhere in the mix. Sure, a + b will always equal c, but if you throw in x somewhere in a predictable and repeatable way, you can get any result you want for c, and you can work backwards again to figure out what a and b were.
I just don't know how to do it, and that's the problem.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Kevnar wrote: The values are all bytes... 0 - 255.
Then you're screwed. You can't fit two eggs in the same cup at the same time.
Sorry mate
cheers,
Chris Maunder
CodeProject.com : C++ MVP
|
This is probably why I haven't figured it out on my own yet. I'm usually pretty agile at making complicated things work. This one just handed my ass to me though.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
Not doable, but it reminds me of "Contact" by Carl Sagan
If I follow the theory properly, the aliens do something like that, but it's only prime numbers, I think added together, and eleven of them at a time.
So you take a value and find the eleven primes that add up to it... but you have no idea what the original order was. At any rate, it's also an unworkable process.
|
Simple answer is no. Long answer is maybe.
For example, if your number A is guaranteed to never reach 19 and the other number B is guaranteed to never reach 13 then you can add them with the following formula:
C=A*13+B
The value C always fits in 1 byte. The individual values can be recovered with the formula:
A=C/13
B=C%13
This is not a simple bit combination, since A needs 5 bits, B needs 4 bits, and C needs 8. The real constraint is that the largest possible C (here 18*13 + 12 = 246) cannot exceed the 8-bit capacity of C. This can, of course, be generalized to other bit sizes.
So, no magical compression here, and no violation of the principle of the conservation of energy in the universe.
All forms of turbo-coding (ones that use state machines behind the scenes to help determine what the numbers could be when splitting them back again), like the ones I've seen posted here (mentioning rand), do not guarantee you can recover the original exactly. So, to compress executables I wouldn't use turbo-coding, but for audio files then maybe... if the state machines are clever enough.
Rogério Rilhas
|
So it works, but only with numbers under 19. Hmmm. Well, unfortunately it's gotta work with numbers between 0 and 255 somehow.
I've been thinking about this for the past month now, and the naysayers seem to be correct. It can't be done. But the more I think about how impossible it is, the more I feel like there's some sort of work-around, some back door solution to it that we're all just not thinking of.
So I'll keep thinking about it. Thanks for your help. Let me know if you think of anything else.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|
He he he... well.. to be honest... I'm with the naysayers. It is a powerful argument that it cannot be done, and I am one to think that the argument cannot be defeated.
I thought about the more general formula, and for you to be able to accomplish fitting 2 numbers A and B into an 8-bit value it is necessary that max(A)*(max(B)+1)<=255 (the most accurate version needs more elements). You can achieve that with any pair of integer ranges for A and B that satisfy the condition, not necessarily below 19. You could have A in the range [0;127] and B in the range [0;1], for example. That would work, because max(A)=127 and max(B)=1, so max(A)*(max(B)+1)=127*(1+1)=254. Or A in [0;50] and B in [0;4].
The closest thing to a work-around I can think of is this trick. But it is a trick, because I'm adding the additional information (assumption) that A and B are limited. An 8-bit value can store a given amount of information (let's call that amount 256). Whatever you do with the numbers cannot involve a total amount of information greater than 256, otherwise it will not be reversible. The trick I presented works because the 2 numbers A and B, together, do not possess more information than 256, so they can fit in an 8-bit value reversibly. The external information required is the limits of the 2 numbers, which, of course, will be visible in the code (the operands for the / and % operations).
The fact that the sum of the bits necessary to represent A and B exceeds 8 is not relevant. For example, I need 5 bits to represent the number 18, but a lot of combinations of those 5 bits are not being used. So, I can use that "extra storage space" to accommodate part of another number. But the numbers should not "overlap", and that is why I should use only the combinations of 5 bits that I didn't use before (like 19, 20, ..., and 31) for the other number. This is why I have added the condition that keeps their multiplication below 256.
Other work-arounds will also need external knowledge. For example, if you know that the 2 numbers A and B are always double the previous 2 numbers a and b, then, of course, you need 2 bytes for a and b and zero bytes for A and B. So, you successfully compressed 4 bytes to 2 bytes reversibly, since when you receive the 2 bytes a and b you automatically decode A and B without any more bytes.
This external knowledge is the basis of turbo coding, and can in fact be used to add extra information to the transmitted bits. This information is not much, though, so turbo coding is mainly used for error correction (where you lose only a few bits, and so the little external knowledge that can be provided can help recover the values that those lost bits should have).
Other forms of external knowledge exist. For example, Windows executables always start with the characters M and Z. These 2 characters can be easily compressed if your application is compressing Windows executables. And if the executable was built for INTEL processors, then you can compress many of the bytes by knowing which instructions are valid on INTEL processors (and assuming the executable will not contain invalid instructions, which is normal). If built for other processors, the same reasoning applies. So, as you can see, many forms of external knowledge can be used, but if the external knowledge must be transmitted (telling the decoder that it is dealing with INTEL Windows executables, or MOTOROLA Macintosh executables) then you usually lose more than you gain.
So, in general, given 2 numbers A and B, each with any random value between 0 and 255, you cannot join them with whatever operation into a single 8-bit value and later recover them back individually. This is a pretty well-known fact in information theory (there are mathematical tools and techniques to analyze how much information something really has).
You can mess with the "random" (thus introducing some external knowledge) to try and make things possible (like I did, limiting the range for A and B), but there is nothing you can do to the "operation" if you do not change the "random". And there is not a lot of room for creativity. You will have to alter the "random" factor to achieve anything new, and at this level (of external information) you are completely free to try out new stuff.
Anyway, I now see this message is getting quite long. I suggest you take a look at http://en.wikipedia.org/wiki/Information_theory for a simple introduction to information theory.
Rogério Rilhas
|
Thanks Rogério.
The thing that convinced me that it can't be done is thinking of it in terms of a 256 x 256 grid, with the numbers 0 to 255 down the x and y axes. Each combination of two numbers needs to map to a unique value in order to be retrieved. But that's 256 x 256 = 65,536 necessary combinations, with only 256 possible results. Therefore impossible, right? That's what someone pointed out in the discussion above. I finally understood what they were referring to while thinking about it on the bus.
However, all that means is that it's impossible while trying to organize the information on a grid. If there's one thing the history of scientific discovery has taught me, it's that there's always another way if you search long enough, think deep enough, and get creative enough; there's always a work-around somewhere. I may never discover it myself, but someone will.
All I know is that it's definitely gonna require some external information.
"Go to, I’ll no more on’t; it hath made me mad." - Hamlet
|