|
Thanks for your help, and just to let you know, I have tried to do it myself. The idea of this site is to ask for help; we are not all as clever as others when it comes to C++.
thanks for your advice
klck2000
|
|
|
|
|
klck2000 wrote: we are not all as clever as others when it comes to c++.
Nobody is gonna write a whole program for you. Most people would rather you put some effort into solving the problem yourself than just post your homework question so other people can do your dirty work while you don't learn the material.
None of the uber programmers started off that way, you know. It takes effort.
BTW, if you don't understand something, why can't you ask your teacher for clarity? I mean, that's what they're there for, right?
Jeremy Falcon
|
|
|
|
|
Calculating the mean and median is a fairly easy task. Standard deviation is a little harder.
What do you need help with? Exactly which part don't you understand how to do?
Before you answer that, write out some pseudo code and go through the logic.
Post a specific question that proves you have attempted this on your own, otherwise I don’t think you will get much help.
If you need links to information on standard deviation, you can find more than enough using Google. However, I will start you off.
http://www.med.umkc.edu/tlwbiostats/variability.html#deviation
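To start you off a little further, the usual formulas are below (a sketch written out by me; x1..xN are your data values):
mean   = (x1 + x2 + ... + xN) / N
stddev = sqrt( ((x1 - mean)^2 + (x2 - mean)^2 + ... + (xN - mean)^2) / (N - 1) )
(Use N instead of N - 1 if you want the population standard deviation rather than the sample one.)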
Good luck and enjoy the challenge.
will
I hate users. Not all of them, just the ones who talk. CP member: Al Einstien
|
|
|
|
|
Given two arrays containing an equal number of elements, but not in the same order of appearance, what is the fastest method of checking whether the elements in the arrays match?
Note - This should be accomplished without sorting the arrays.
Cutebug
|
|
|
|
|
There are only two efficient ways to compare such data structures: building more efficient ones (like trees or a hash table) or sorting them. Why not sort? It would be the fastest. Is this for school? I can't imagine another problem with such "curious" restrictions.
Perhaps you should go into a little more detail; then perhaps we can find another solution.
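For illustration only, here is a minimal sketch of the "build an efficient structure" idea, using std::map to count occurrences (the function name and the assumption that the elements are plain ints are mine):
#include <cstddef>
#include <map>
#include <vector>

// Returns true if both arrays hold the same elements with the same
// multiplicities, regardless of order. O(n log n) with std::map; a hash
// table would bring it down to O(n) on average.
bool sameElements(const std::vector<int>& a, const std::vector<int>& b)
{
    if (a.size() != b.size())
        return false;

    std::map<int, int> counts;
    for (std::size_t i = 0; i < a.size(); ++i)
        ++counts[a[i]];                  // count occurrences in the first array
    for (std::size_t i = 0; i < b.size(); ++i)
    {
        std::map<int, int>::iterator it = counts.find(b[i]);
        if (it == counts.end() || it->second == 0)
            return false;                // element of b not (or no longer) present in a
        --it->second;                    // consume one occurrence
    }
    return true;
}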
Regards,
Ingo
------------------------------
PROST Roleplaying Game
War doesn't determine who's right. War determines who's left.
|
|
|
|
|
I'd do the following:
Create another array (array3) of the same size as the first two, used to store flags (say, all values initialized to 'o', to be set to 'x' once an element has been matched).
if array1.size != array2.size
    abort with message "not identical"
arraysize = array1.size
for i = 0 to arraysize - 1            (iterate over array1 with i)
    found = false
    for j = 0 to arraysize - 1        (iterate over array2 with j)
        if array1[i] == array2[j] && array3[j] != 'x'
            array3[j] = 'x'           (mark this element of array2 as used)
            found = true
            break                     (stop searching for array1[i])
    if found == false
        abort with message "not identical"
message "identical"
(I wrote this down from memory; if it doesn't work, my memory failed me.)
Basically, you need to examine each element of the first array and see if there is a match in the second array. If there is a match, you need to exclude that element from further comparison (so that if you have multiple identical objects in one array, you don't match one object against two others). The arrays are declared not identical once you reach the end of the second array without finding a match for an element of the first array.
Since the arrays are unsorted, you need additional space for each element of array2 to remember if it was already used for a match. Also, in the worst case, you have to make (n² + n) comparisons and n assignments to compare both arrays.
If you had two containers with sorted contents, you would not have to make any assignments and would need only n comparisons in the worst case.
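A minimal C++ translation of that flag-array idea (my own sketch; it assumes the elements are ints and uses a vector<bool> in place of the 'o'/'x' array):
#include <cstddef>
#include <vector>

// O(n^2) comparison without sorting: for each element of array1, look for
// an unused matching element of array2 and mark it as used.
bool unsortedEqual(const std::vector<int>& array1, const std::vector<int>& array2)
{
    if (array1.size() != array2.size())
        return false;

    std::vector<bool> used(array2.size(), false);   // plays the role of array3
    for (std::size_t i = 0; i < array1.size(); ++i)
    {
        bool found = false;
        for (std::size_t j = 0; j < array2.size(); ++j)
        {
            if (!used[j] && array1[i] == array2[j])
            {
                used[j] = true;     // exclude this element from further matches
                found = true;
                break;
            }
        }
        if (!found)
            return false;           // no unused match for array1[i]
    }
    return true;
}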
Cheers,
Sebastian
--
Contra vim mortem non est medicamen in hortem.
|
|
|
|
|
Hash each element and count the hashes; if the count is 2 for every hashed entry, you have identical arrays. This trades memory for time. You said fastest but made no requirements for memory, so use a good, fast hash like SuperFastHash. You can always terminate the hash count early if you ever find a count of 1, unless you need to know how many differing items occur. Allocate your hash table at 2.5 times n (for length-n arrays), which covers the worst case where there are no matches at all between the two arrays.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
The only problem with this approach is that if two elements from array 1 have the same hash, and two elements from array 2 have the same hash (but the elements from array 1 do not match those of array 2 -- that is, the two arrays hash to different values), you will end up with two hash buckets that each hold two elements, yet those elements have no match in the corresponding array.
That problem goes away if the elements in each array are guaranteed to be unique.
If you decide to become a software engineer, you are signing up to have a 1/2" piece of silicon tell you exactly how stupid you really are for 8 hours a day, 5 days a week
Zac
|
|
|
|
|
The problem also goes away if you count in separate bytes -- a high byte and a low byte -- which differentiates the two arrays and still limits the search. However, uniqueness is still a matter of the length of the items, the hashing algorithm, and the room allowed for the hash; generally you expand your hash table to provide for uniqueness.
He was asking for the fastest method; brute-force searches guarantee correct results but are always the slowest operation. Hashing provides roughly 3n accesses (hash + hash + test), whereas a brute-force search on unsorted data grows as a power of n. Sorted arrays generate sorting overhead plus comparison overhead, reducing the speed; a hash is a way to get sorted-like speed without a sort.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
I think you misunderstood what I was getting at (or I misunderstand your thoughts ... either way). What I was getting at is that if you just hash every element in both lists and then compare each hash element to see if it has 2 results, you run into a problem when the following scenario is played out:
Contents of list 1: X, X
Contents of list 2: Y, Y
After hashing the lists, your hash table will look something like:
Hash1: X, X
Hash2: Y, Y
You now have 2 elements in each hash, but the lists are obviously not the same. It is possible to get around this problem, but then you decrease the efficiency to the point where another algorithm would be a better option anyway.
If you decide to become a software engineer, you are signing up to have a 1/2" piece of silicon tell you exactly how stupid you really are for 8 hours a day, 5 days a week
Zac
|
|
|
|
|
Zac Howland wrote: You now have 2 elements in each hash, but the lists are obviously not the same. It is possible to get around this problem, but then you decrease the efficiency to the point where another algorithm would be a better option anyway.
That is why I said you can use a different counting method. Counting in a high-byte/low-byte format:
Hash1: X, X will produce a count of 2 (00000000 00000010)
Hash2: Y, Y will produce a count of 512 (00000010 00000000)
A proper match, one element from each array, would produce a count of 257 (00000001 00000001).
What I was saying is that you can adapt the count to produce a good result; bit math is still faster than sorting. Providing a method for handling an exception is sometimes faster than dropping a method because it has an exception. In this case, because the count is simply an adapted bit operation, it is very efficient. A proper hash algorithm produces a unique result about 98% of the time, so the 2% exception is handled either by a larger hash-storage destination or, as here, by adapting the counting operation the hash uses, since the count can be adapted so easily.
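A rough sketch of that packed-count idea (my own illustration, not the poster's code; it assumes int elements, a collision-free mapping from element to bucket, and fewer than 256 duplicates per value):
#include <cstddef>
#include <map>
#include <vector>

// Count the first array in the low byte and the second array in the high
// byte of the same counter. If the arrays match, every bucket ends up with
// equal low and high bytes (e.g. 257 == 00000001 00000001 for one element
// from each array).
bool packedCountEqual(const std::vector<int>& a, const std::vector<int>& b)
{
    if (a.size() != b.size())
        return false;

    std::map<int, unsigned int> counts;   // stand-in for a real hash table
    for (std::size_t i = 0; i < a.size(); ++i)
        counts[a[i]] += 1;                // low byte: occurrences in a
    for (std::size_t i = 0; i < b.size(); ++i)
        counts[b[i]] += 256;              // high byte: occurrences in b

    std::map<int, unsigned int>::const_iterator it;
    for (it = counts.begin(); it != counts.end(); ++it)
    {
        if ((it->second & 0xFF) != ((it->second >> 8) & 0xFF))
            return false;                 // the two arrays disagree on this value
    }
    return true;
}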
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
Use two int arrays with their size equal to the size of the character set, set their initial values to zero, then run through the two input arrays and count the appearances of each character into these two int arrays. Then just do a simple compare of the two count arrays to know whether the two input arrays match.
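A minimal sketch of that counting approach (my own code, assuming the elements are unsigned chars so the character set has 256 values):
#include <cstddef>
#include <cstring>

// Build a histogram of character values for each array, then compare the
// histograms. O(n) time, fixed extra memory (two 256-entry tables).
bool charArraysMatch(const unsigned char* a, const unsigned char* b, std::size_t n)
{
    int countA[256] = {0};
    int countB[256] = {0};
    for (std::size_t i = 0; i < n; ++i)
    {
        ++countA[a[i]];
        ++countB[b[i]];
    }
    return std::memcmp(countA, countB, sizeof(countA)) == 0;
}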
Magic.D
|
|
|
|
|
The fastest way to do this would be to sort the arrays. Since you are excluding that ability, here is the next fastest and least memory-intensive algorithm I can come up with:
linked list of pointers to elements in array1 (list1)
linked list of pointers to elements in array2 (list2)
boolean listsAreSame = true
while list1 has size > 0 and listsAreSame is true
    iterator for list1 (list1Iter) = first element in list1
    iterator for list2 (list2Iter) = first element in list2
    boolean tester = false
    while list2Iter != endOfList
        if *list1Iter equals *list2Iter
            remove both elements from their lists
            tester = true
            break out of the inner loop
        endif
        advance list2Iter
    endwhile
    if tester is false
        listsAreSame = false
    endif
endwhile
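A rough C++ translation of that pseudocode (my own sketch; it copies the arrays into std::lists of ints rather than lists of pointers, which keeps the example short):
#include <list>
#include <vector>

// Repeatedly take the front of list1 and look for a matching element in
// list2; remove both on a match. If any element of list1 has no match,
// the arrays differ. Lists are used so removal is cheap.
bool listCompare(const std::vector<int>& array1, const std::vector<int>& array2)
{
    if (array1.size() != array2.size())
        return false;

    std::list<int> list1(array1.begin(), array1.end());
    std::list<int> list2(array2.begin(), array2.end());

    while (!list1.empty())
    {
        bool tester = false;
        for (std::list<int>::iterator it = list2.begin(); it != list2.end(); ++it)
        {
            if (list1.front() == *it)
            {
                list2.erase(it);    // exclude this element from later matches
                tester = true;
                break;
            }
        }
        if (!tester)
            return false;           // no match anywhere in list2
        list1.pop_front();
    }
    return true;
}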
If you decide to become a software engineer, you are signing up to have a 1/2" piece of silicon tell you exactly how stupid you really are for 8 hours a day, 5 days a week
Zac
|
|
|
|
|
The following is O(n²) and assumes the elements are unique.
int count = 0;
for (int i = 0; i < a.Length; i++)
{
    for (int j = 0; j < b.Length; j++)
    {
        if (a[i] == b[j])
        {
            count++;
            break;
        }
    }
}
if (count == a.Length)
{
    // every element of a was found in b; with unique elements and equal
    // lengths, the arrays contain the same values
}
A man said to the universe:
"Sir I exist!"
"However," replied the Universe, "The fact has not created in me A sense of obligation."
-- Stephen Crane
|
|
|
|
|
I posted this in the wrong section, but have deleted that message and moved it here.
Here is a link to the definition of the Snowball effect:
Here
--
Good morning, afternoon or night depending on what part of the world you are from.
I have been spending a lot of time thinking about the snowball effect. Now, it would seem relatively easy to simulate the snowball effect using some C# Kung Foo, but... alas, it is not.
My Question:
1) I know the Size(s) of the object at its smallest point.
2) I know the distance(d) the object will travel
3) Force of gravity on the initial object
What I dont know:
1) Increase of both Mass(m) & Size(s) as the object Travels for a specific distance(d)
2) Increase of Speed(s) as the Object gains in both (m) & (s)
3) Distance(d) and Speed(s) required for the object to gain a specific Mass(m) and Size(s)
I have tried to make the rate of increase(i) in Mass(m) & Size(s) linear.
So the object will increase by 2x the initial Size(s) for each Foot(d) traveled.
But I am unable to calculate the increase in Speed(s) as the object moves because of a missing component: Time(t).
Can anyone think of a good way to simulate the Snowball effect?
Thanks,
Will
I hate users. Not all of them, just the ones who talk. CP member: Al Einstien
|
|
|
|
|
Are you simply trying to visualize "a snowball effect", or are you really trying to figure out the problem? You simply need to choose a friction value for your snowball, the same way you chose a rate of increase. Rate has a component of time, so you know time as well by taking your rates and incrementally changing them over minute time segments. This is fairly common for calculating acceleration/jerk/etc.
Instead of treating gravity as an acceleration, use gravity as a force. Force, mass, incline and friction are used to calculate momentum. Loop over minute time segments, recalculating your distance travelled along with all your other changes, until the distance is covered; then you have your final values.
You can stop your iterative procedure at any time to answer mass, size, distance or speed when trying to reach a specific value.
But you will have to make up a number for rate of increase in size, friction and slope of incline.
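For illustration, here is a minimal time-stepping sketch of that approach (written in C++, though the same structure works in C#; every constant and the linear growth rule below are made-up placeholders of mine, not values from the original question):
#include <cmath>
#include <cstdio>

int main()
{
    // Made-up starting values and rates -- replace with your own.
    const double mass0   = 1.0;                      // kg, initial mass
    const double radius0 = 0.05;                     // m, initial radius
    const double target  = 10.0;                     // m, distance to cover
    const double slope   = 30.0 * 3.14159 / 180.0;   // incline angle, radians
    const double mu      = 0.1;                      // friction coefficient
    const double g       = 9.81;                     // m/s^2
    const double gain    = 0.5;                      // kg of snow picked up per metre (assumed linear)
    const double dt      = 0.001;                    // s, small time step

    double mass = mass0, speed = 0.0, distance = 0.0;

    while (distance < target)
    {
        // Treat gravity as a force along the slope, minus friction.
        double force = mass * g * (std::sin(slope) - mu * std::cos(slope));
        double accel = force / mass;

        speed    += accel * dt;          // integrate acceleration over this step
        double ds = speed * dt;          // distance covered in this step
        distance += ds;
        mass     += gain * ds;           // snowball grows with distance travelled
    }

    // Size follows from mass if density stays roughly constant.
    double radius = radius0 * std::pow(mass / mass0, 1.0 / 3.0);
    std::printf("After %.1f m: speed %.2f m/s, mass %.2f kg, radius %.3f m\n",
                distance, speed, mass, radius);
    return 0;
}
Stopping the loop when mass, size or speed reaches a target value instead of distance answers the other questions in the same way.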
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
Yes I would like to solve the problem I have presented.
Thank you for the advice.
I am taking notes and will try again.
I will also look into formulas for acceleration, as it never occurred to me to do so.
Will
I hate users. Not all of them, just the ones who talk. CP member: Al Einstien
|
|
|
|
|
I got my math degree *mumble* years ago, but I'd be hard-pressed to do a simple integral these days. The last time I can recall using any serious math knowledge was a couple of years ago, when I noted during a code review that an algorithm was doing Σ i for i = 1 to n steps (that is, n(n+1)/2) and was therefore O(n²).
--Mike--
Visual C++ MVP
LINKS~! Ericahist | PimpFish | CP SearchBar v3.0 | C++ Forum FAQ
|
|
|
|
|
Michael Dunn wrote: I got my math degree *mumble* years ago, but I'd be hard-pressed to do a simple integral these days.
I've forgotten some things, too, but I know where to look when I have to. You can't remember everything, so it's OK, I think.
Michael Dunn wrote: The last time I can recall using any serious math knowledge was a couple of years ago, when I noted during a code review that an algorithm was doing Σ i for i = 1 to n steps, and was therefore O(n²)
Was it Ω(n²)? Well, then it would have been OK, I think. It could have been worse -- it could have been NP.
Regards,
Ingo
------------------------------
PROST Roleplaying Game
War doesn't determine who's right. War determines who's left.
|
|
|
|
|
Well, I never really learned hardcore math growing up. For me, all I can say is, at least it's better late than never.
Jeremy Falcon
|
|
|
|
|
For me it was 10 years ago, and I still go back and keep up with math problems just because it is my true love. About 3 years back I was thinking about going for a PhD in physics and got as far as taking the physics GRE. I managed to get a 710 out of 1000 on it, but was shooting for an 800; still not bad for someone dusting off the cobwebs.
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
|
|
|
|
|
Amen. I did my BS and MS in Mathematics. I was never a typical A student, since I never cared about being one; I was mostly B's with a few A's. However, some of my friends (who mostly were A students) were thinking about getting their PhDs in math (back in 1995) and took the Math GRE. I took it with them since I figured, what the hell. Aside from a Russian student who nailed it, I was able to get a 910/990, which was the second-highest score of the students that I knew (about 5 out of 10 who took the test). The next highest score was a 780. When I realized that I had the potential, since I was always good at homework and research, I decided it wasn't something I wanted after all. The only reason I didn't nail it was because I hadn't taken abstract algebra, which was a big part of the test and the toughest math class anyone could take, and I hate not guessing, even though incorrect answers count against you whereas skipped ones count as zero. If I ever win the lottery, I would probably quit work and go back to school for various degrees, a PhD in Math being one of them.
Mathworld is a great resource for keeping up to date on Math. I also recommend subscribing to the Mathematical Association of America (MAA). 90% of their articles are way above my head, but the other 10% makes it worthwhile. Also, take a look at my Math sig link to see the books I've read. I recommend these as a great way to keep up-to-date on your Math skills.
"I know which side I want to win regardless of how many wrongs they have to commit to achieve it." - Stan Shannon
Web - Blog - RSS - Math - LinkedIn
|
|
|
|
|
Man, 910 is awesome. I took that right out of school and got somewhere in the 720s. I had forgotten most of my integral formulas and stuff like that since I really hadn't had those since high school. I had a hard time keeping focused on class when there was such a great library so close by. I'd have 20 math and physics books checked out all at the same time and spent most of my free time reading up on whatever interested me at the time.
I'm currently going back over 2D hyperbolic geometry as it relates to relativistic velocity space: going through and building up rotations and translations out of reflections, represented in RP2 with a (1,2) metric.
One of the things I puzzled over the first time I saw it in physics was that adding two non-collinear relativistic velocities resulted in an object which appeared rotated w.r.t. a stationary observer. Once I looked at it in terms of hyperbolic space, I realized it was just another way of stating that hyperbolic triangles have a total angle < 180 degrees because of the curvature of the space.
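For anyone following along, a compressed summary of the standard result being described here (my own paraphrase, not the poster's derivation):
rapidity: phi = arctanh(v/c); for collinear boosts, rapidities simply add.
For non-collinear velocities, the composition of two pure boosts is a boost combined with a rotation (the Thomas-Wigner rotation), and that rotation angle corresponds to the angular defect (180 degrees minus the angle sum) of the corresponding triangle in hyperbolic velocity space.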
If I won the lottery, I'd do exactly the same thing as you.
I can imagine the sinking feeling one would have after ordering my book,
only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon
|
|
|
|
|
I tend to forget anything I don't use frequently. A year ago I was setting up a simple CAD-style view in our app and needed to work out some transformation formulas -- all stuff I'd learned years ago, reading CG P & P and making sense of it while raking hay. It took me a solid week to get it back.
|
|
|
|
|
Michael Dunn wrote: I got my math degree *mumble* years ago, but I'd be hard-pressed to do a simple integral these days.
I took a D in math, partially because of some college politics at the time... but now I use math even more. I never really took math seriously; I honestly believed at the time that I would never use my math again, so I resented having to take it at all. So part of it was my attitude too. I was a B student at any other college, but because of my attitude I just couldn't manage to get the 92% required to pass Calc I with a C during that particular year. So I left the college (and the college got in trouble the following year for its practices at the time).
Now my curse is that I get to use everything I always said I would never use, daily: physics, integration, differential equations, quadratics, etc. I brought quaternions to where I work and got them accepted as a standard, and now I am reviewing other applications for quaternion, octonion and other hypernumber/Cayley math operations.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|