|
Okay -- you have a PvP game in which you can gather several statistics.
First, you may want to incorporate some additional stats under the covers that cannot be seen. For example -- time between deaths, kills between deaths, etc.
Part of your weighting will be the number of players participating at any given time within the game.
Finally, you have your standard stats which everyone can see (total kills, hit accuracy, etc.).
Your weighting system follows a bell curve. With too few or too many players the stats are badly skewed, but in the sweet spot you would gain the highest scoring.
At the end of a match (assuming these are timed matches) you would first compute the weighting factor: the number of players evaluated against the bell curve * (time between deaths / factor) * (kills between deaths / factor) -- or some such formula.
Then take your weighting factor and apply it to the sum of (each key stat * that stat's weight factor).
This way, when the match hits the sweet spot, a player who lives a long time, kills many people during that time, and has a high chance of being killed (plenty of players around, not just a ripe field of fodder) will get a higher ranking than a player who finds someplace to hide and only nails a few players.
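As a rough sketch of the above in code -- all the constants, factor values, and stat weights below are made-up placeholders, not values from any real game:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Gaussian "bell curve" weight that peaks at the sweet-spot player count.
// sweetSpot and spread are made-up tuning constants.
double playerCountWeight(int players, double sweetSpot = 16.0, double spread = 6.0)
{
    double d = (players - sweetSpot) / spread;
    return std::exp(-0.5 * d * d);  // 1.0 at the sweet spot, falling off either side
}

// Combine the hidden stats into one weighting factor, as described above.
double weightingFactor(int players, double timeBetweenDeaths, double killsBetweenDeaths,
                       double timeFactor = 60.0, double killFactor = 5.0)
{
    return playerCountWeight(players)
         * (timeBetweenDeaths / timeFactor)
         * (killsBetweenDeaths / killFactor);
}

// Apply the weighting factor to the weighted sum of the visible stats.
// Each entry pairs a stat value with that stat's weight factor.
double matchScore(double weighting, const std::vector<std::pair<double, double>> &statAndWeight)
{
    double sum = 0.0;
    for (const auto &sw : statAndWeight)
        sum += sw.first * sw.second;
    return weighting * sum;
}
```

With these placeholder constants, a 16-player match scores a weight of 1.0, while a near-empty or overcrowded server scales everything down.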
Just my opinion.
|
|
|
|
|
I need to find the closest point to each of a series of points with 2-dimensional coordinates. To give a real-world example, say you took a picture of the stars at night and you wanted to find which star appears closest to each star when you look at them (this is not the real application, just an example).
Obviously the easy way to do this would be to find the distance from every point to every other point and then sort the distances, but this could be very slow if there are a lot of points, since the number of comparisons grows quadratically. I have a few ideas on quicker ways to do this, but I wanted to make sure I'm not overlooking some existing algorithm that would be good for this problem. Actually, I'm not even sure what the name of this problem is (if it has one). If anyone knows the name or knows of some existing algorithms, it would be a big help.
Thanks,
Mike
|
|
|
|
|
Someone may know about established algorithms, but one way to go would be to organise the 'star' co-ordinates into boxes on a grid. That way, for each point you would only have to search a small number of boxes, depending on the point's co-ordinates and the nearest point found in each box. If you have variable densities of 'stars' then you could nest your boxes in some areas.
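Here's a minimal sketch of the grid idea in C++ (the cell size is a tuning parameter you'd pick from the point density; all the names are made up). It buckets the points into cells, searches the query point's own cell first, and then widens the search ring by ring until no closer point can exist:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <limits>
#include <unordered_map>
#include <vector>

struct Pt { double x, y; };

// Pack a 2-D cell index into a single hashable key.
static long long cellKey(int cx, int cy)
{
    return (static_cast<long long>(cx) << 32) ^ (cy & 0xffffffffLL);
}

// Nearest neighbour of pts[i] via a uniform grid of the given cell size.
std::size_t nearestViaGrid(const std::vector<Pt> &pts, std::size_t i, double cell)
{
    if (pts.size() < 2) return i;

    // Bucket every point into its grid cell.
    std::unordered_map<long long, std::vector<std::size_t>> grid;
    for (std::size_t k = 0; k < pts.size(); ++k)
        grid[cellKey(int(std::floor(pts[k].x / cell)),
                     int(std::floor(pts[k].y / cell)))].push_back(k);

    const int cx = int(std::floor(pts[i].x / cell));
    const int cy = int(std::floor(pts[i].y / cell));
    double best = std::numeric_limits<double>::max();
    std::size_t bestIdx = i;

    for (int r = 0; ; ++r)
    {
        // Cells in ring r are at least (r - 1) * cell away from the query
        // point, so once the best distance found is below that, we can stop.
        if (r > 0 && best <= double(r - 1) * cell)
            break;
        for (int dx = -r; dx <= r; ++dx)
            for (int dy = -r; dy <= r; ++dy)
            {
                if (std::max(std::abs(dx), std::abs(dy)) != r)
                    continue;  // visit only the outer ring; inner cells are done
                auto it = grid.find(cellKey(cx + dx, cy + dy));
                if (it == grid.end())
                    continue;
                for (std::size_t k : it->second)
                {
                    if (k == i) continue;
                    double d = std::hypot(pts[k].x - pts[i].x, pts[k].y - pts[i].y);
                    if (d < best) { best = d; bestIdx = k; }
                }
            }
    }
    return bestIdx;
}
```

Running this once per point gives the "closest point for each point" answer; with a sensible cell size each query touches only a handful of cells.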
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
Unfortunately, Peter, this is not sufficient. Even comparing the smallest distance between any of the stars in all of the grid boxes may miss the case where a star in one grid box is immediately adjacent to another star in another grid box. You also need to make a quick comparison at the end inside a narrow band that spans the grid lines, the width of the band being the smallest distance between two stars found thus far.
Many, many optimizations also need to be done in the algorithm to avoid expensive calculations. With 9100 stars of magnitude 6.5 or brighter, about 4500 should be visible from any point on earth. The brute-force n²/2 comparison count for 4500 points is about 10,125,000, and that is a lot of calculations.
What concerned me is not that these were stars (~4500), but that (as stated) this is only an analogy. Maybe the real problem is one in physics where the points are atoms in some condensate, numbered in the billions, and the distances are used to determine attractions.
Dave Augustine.
|
|
|
|
|
Member 4194593 wrote: Even comparing the smallest distance between any of the stars in all of the grid boxes may miss the case where a star in one grid box is immediately adjacent to another star in another grid box
That's why I said in my post that you would only have to search a small number of grid boxes - not just one.
I haven't thought about it much, but one way to start would be to search the point's own box first; if a star is found that is closer than the distance to every boundary of the box then you are finished, otherwise you have to search any box that intersects the circle around the point out to the nearest star you have found so far.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I think that the point having such property is the average one, i.e.
{X<small>avg</small>, Y<small>avg</small>}
where X<small>avg</small> is the mean value of the X coord of the points (the same consideration applies to Y ).
Of course that point has zero probability of coinciding with one of your set. But if you choose the point in the set that is nearest to that average point, then you have (IMHO) a very good candidate for fulfilling your requirements.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
[my articles]
|
|
|
|
|
Look at using a multidimensional binary tree (also called a kd-tree). It allows search by any number of criteria (in your case, two; the two dimensions), as opposed to a normal binary tree, which only uses one.
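For illustration, here is a bare-bones 2-d tree sketch (two dimensions only, recursive search; a production version would want iterative search and better handling of duplicate coordinates):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

struct Pt2 { double x, y; };

// A minimal 2-d tree (kd-tree with k = 2) stored as an index array.
// Each level of the tree splits on x and y alternately.
class KdTree
{
public:
    explicit KdTree(std::vector<Pt2> pts) : pts_(std::move(pts))
    {
        idx_.resize(pts_.size());
        for (std::size_t i = 0; i < idx_.size(); ++i) idx_[i] = i;
        build(0, idx_.size(), 0);
    }

    // Nearest stored point to q, ignoring the point with index `skip`
    // (so a stored point can query its own nearest neighbour).
    std::size_t nearest(const Pt2 &q, std::size_t skip) const
    {
        std::size_t best = skip;
        double bestD = std::numeric_limits<double>::max();
        search(q, skip, 0, idx_.size(), 0, best, bestD);
        return best;
    }

private:
    std::vector<Pt2> pts_;
    std::vector<std::size_t> idx_;

    double coord(std::size_t i, int axis) const { return axis == 0 ? pts_[i].x : pts_[i].y; }

    // Place the median (by the current axis) at the middle of [lo, hi).
    void build(std::size_t lo, std::size_t hi, int axis)
    {
        if (hi - lo < 2) return;
        std::size_t mid = (lo + hi) / 2;
        std::nth_element(idx_.begin() + lo, idx_.begin() + mid, idx_.begin() + hi,
                         [&](std::size_t a, std::size_t b) { return coord(a, axis) < coord(b, axis); });
        build(lo, mid, 1 - axis);
        build(mid + 1, hi, 1 - axis);
    }

    void search(const Pt2 &q, std::size_t skip, std::size_t lo, std::size_t hi, int axis,
                std::size_t &best, double &bestD) const
    {
        if (lo >= hi) return;
        std::size_t mid = (lo + hi) / 2, i = idx_[mid];
        double d = std::hypot(pts_[i].x - q.x, pts_[i].y - q.y);
        if (i != skip && d < bestD) { bestD = d; best = i; }
        double qc = (axis == 0) ? q.x : q.y;
        double split = coord(i, axis);
        // Descend into the side of the split that contains q first...
        bool leftFirst = qc < split;
        if (leftFirst) search(q, skip, lo, mid, 1 - axis, best, bestD);
        else           search(q, skip, mid + 1, hi, 1 - axis, best, bestD);
        // ...then the other side, but only if the splitting line is closer
        // than the best distance found so far.
        if (std::abs(qc - split) < bestD)
        {
            if (leftFirst) search(q, skip, mid + 1, hi, 1 - axis, best, bestD);
            else           search(q, skip, lo, mid, 1 - axis, best, bestD);
        }
    }
};
```

Build once, then run one `nearest` query per star; each query prunes whole subtrees whose splitting line is farther than the best candidate found so far.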
|
|
|
|
|
Simply use a point where x is the average of all the x coordinates and y is the average of all the y coordinates. For example, in standard C++:
#include "stdafx.h"
#include <iostream>
#include <ostream>
#include <iterator>
#include <algorithm>
#include <numeric>
using namespace std;
struct point
{
point()
: x(0), y(0) {}
point(double x, double y)
: x(x), y(y) {}
double x;
double y;
};
point operator+(const point &lhs, const point &rhs)
{
return point(lhs.x+rhs.x, lhs.y+rhs.y);
}
point operator/(const point &lhs, double rhs)
{
return point(lhs.x/rhs, lhs.y/rhs);
}
ostream& operator<<(ostream &os, const point &pt)
{
os << "(" << pt.x << ", " << pt.y << ")";
return os;
}
int main(int argc, char* argv[])
{
const point pts[] =
{
point(1.0, 2.0),
point(2.0, 1.0),
point(3.0, 3.0),
point(4.0, 4.0),
point(-1.0, 3.0)
};
const point *pEnd = pts + sizeof(pts)/sizeof(pts[0]);
ostream_iterator<point> osi(cout, "\n");
copy(pts, pEnd, osi);
cout << endl;
point pt;
pt = accumulate(pts, pEnd, pt);
pt = pt/(sizeof(pts)/sizeof(pts[0]));
cout << "Closest point is " << pt << endl;
return 0;
}
Steve
modified on Monday, January 14, 2008 6:25:06 PM
|
|
|
|
|
I don't think that meets the problem requirements; the result must be one of the existing points of the input.
"...say you took a picture of the stars at night and you wanted to find what star appears to be closest to each star when you look at the..."
|
|
|
|
|
Anyway it gives a good hint for starting the search.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
[my articles]
|
|
|
|
|
Thank you everybody for your suggestions. cp9876, that was the same idea I had in mind; I guess great minds think alike.
On the other ideas that suggested averaging or something similar, I'm not sure I understand your suggestions, and I'm not sure if there was some confusion about the question I asked.
Say there were 100 'stars'. Were you suggesting a way to find the one 'star' that has the shortest total distance to all the other stars, or were you suggesting it as a way to find 100 answers, one nearest 'star' for each of the 100 'stars'? My problem is the latter.
In either case I appreciate everybody taking the time to help me, and I apologize for any ambiguity in my original question.
thanks,
Mike
|
|
|
|
|
Most of the others were solving the problem of which point is 'nearest' in some overall sense to the given points. If you aim to minimise the sum of the squares of the distances to the given points then you get the average solution.
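Peter's remark is easy to check numerically: the centroid (average point) gives a smaller sum of squared distances than any perturbed point near it. A small sketch, using the same five points as the code posted earlier in the thread:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P { double x, y; };

// Sum of squared distances from c to every point in pts.
double sumSq(const std::vector<P> &pts, P c)
{
    double s = 0.0;
    for (const P &p : pts)
        s += (p.x - c.x) * (p.x - c.x) + (p.y - c.y) * (p.y - c.y);
    return s;
}

// Centroid (average) of the points -- the minimiser of sumSq.
P centroid(const std::vector<P> &pts)
{
    P c{0.0, 0.0};
    for (const P &p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size();
    c.y /= pts.size();
    return c;
}
```

Nudging the centroid in any direction strictly increases the sum of squared distances, which is why the least-squares criterion yields the average.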
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
Hey, you should use a Delaunay triangulation. By one of its properties, the nearest neighbour of a given star, call it MY_STAR, is one of the stars connected to MY_STAR by an edge of the triangulation. So your algorithm would be:
1. triangulate: O(n log n)
2. for each star, look at all the stars connected to it by an edge and take the minimum distance
There are O(n) triangles, and thus O(n) edges (another property of the Delaunay triangulation), so this step is O(n).
This should give you a loglinear solution overall, which seems optimal.
|
|
|
|
|
Wouldn't it be an O(n) operation to simply compute the distance of every single star, then select the minimum?
--- Modified realization
Oh, sorry. I didn't notice that he wants to find the closest star to EACH of the other stars. I thought he was just looking for the closest star to a particular point.
Sounds like somebody's got a case of the Mondays
-Jeff
modified on Monday, January 21, 2008 3:24:22 PM
|
|
|
|
|
Hello,
I have a function that takes three variables and gives back two variables, something like (a,b) = f(x,y,z). I'd like f to work like a look-up table; that is, I do not have a mathematical representation of f. So the question is: how do I represent such a look-up table efficiently, given that I would like to have interpolation between existing values?
In the 2D case, when I have a = f(x), I can represent f as a Bezier curve (which I can create and edit manually). When I load it up I create a map where m[x] gives me a, and I use interpolation if x is not in the map.
My solution so far is based on the 2D case, which means that I have a map like m[x][y][z] (basically a map of maps of maps of a pair (a,b)).
Is there a C++ library that deals with such issues? Something that already provides similar functionality efficiently? I'd like to be able to create f(x,y,z), save it, load it and use it.
Thanks a lot.
|
|
|
|
|
It depends whether your f needs to have specific properties.
In the simplest case you can use linear interpolation over both components (a and b), just like you interpolate 3D surfaces with triangulation/polygons.
If f should be C^1, C^2, ... things get nasty and you might have to use some powerful numerical interpolation methods.
Even in the most basic case you would have to find the 4 closest fixed points (see the other thread on this forum) and linearly interpolate over those.
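For the simple case, here is a sketch of trilinear interpolation for the (a,b) = f(x,y,z) table, assuming samples on a regular integer-spaced grid (the eight surrounding corner values are blended along x, then y, then z; the names and the callback-based storage are illustrative assumptions, not any particular library's API):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// One value of the table: the (a, b) pair returned by f.
struct AB { double a, b; };

// Linear blend of two table values at parameter t in [0, 1].
AB lerp(const AB &u, const AB &v, double t)
{
    return { u.a + (v.a - u.a) * t, u.b + (v.b - u.b) * t };
}

// Trilinear interpolation of a table sampled on an integer grid.
// `sample(i, j, k)` returns the stored pair at grid node (i, j, k);
// it is passed in as a callback so any storage (map of maps of maps,
// flat array, ...) can sit behind it.
AB interpolate(const std::function<AB(int, int, int)> &sample, double x, double y, double z)
{
    int i = int(std::floor(x)), j = int(std::floor(y)), k = int(std::floor(z));
    double tx = x - i, ty = y - j, tz = z - k;
    // Blend along x on the four grid edges, then along y, then along z.
    AB c00 = lerp(sample(i, j, k),         sample(i + 1, j, k),         tx);
    AB c10 = lerp(sample(i, j + 1, k),     sample(i + 1, j + 1, k),     tx);
    AB c01 = lerp(sample(i, j, k + 1),     sample(i + 1, j, k + 1),     tx);
    AB c11 = lerp(sample(i, j + 1, k + 1), sample(i + 1, j + 1, k + 1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}
```

For non-unit grid spacing you would first map (x, y, z) into grid units; the blending itself is unchanged.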
|
|
|
|
|
Thanks CKing! In my particular case linear interpolation is sufficient. I'd like to see if there is a library that deals with similar issues so I can reuse it.
Thanks.
Regards,
Alexander.
|
|
|
|
|
hello friends,
I read about the algorithm for the Longest Monotone Subsequence, which finds the longest monotone increasing subsequence in a list of numbers.
Will you please explain the concept behind it?
|
|
|
|
|
|
hi there,
I am building an application to demonstrate that secure communication is possible without conventional mathematical encryption. I referred to many articles, and it is said this is possible by means of QKD. Can anyone suggest a good algorithm for that? It also has to be easy to implement, because I have to complete it before the second week of February.
Help of any form is welcome...
Thank u,
Freak8802
|
|
|
|
|
I'm not sure what you're trying to ask. Almost all communication is done unencrypted. If by QKD you're referring to using Quantum Key Distribution to have unbreakable encryption, I hope you have a 6 figure budget because the hardware needed to do it is anything but cheap. I also suspect there's a rather long lead time on the manufacturing of the QKD boxes. Alternatively with a research grade photonics lab you could try and make your own, but I don't think that'd be viable in only 2.5 months.
--
Join the Campaign to Help Stamp Out and Abolish Redundancy
The preceding is courtesy of the Bureau of Unnecessarily Redundant Repetition Department.
|
|
|
|
|
I should have made myself clearer. I have no experience in this area; I just know it has to be done with quantum encryption, not any mathematical encryption. Can you guide me to any source on this?
Help of any kind is welcome..
Thank u,
Freak8802
|
|
|
|
|
QKD isn't encryption in and of itself. It is a way to create and exchange one-time-pad keys (random numbers, each used to encrypt a single byte and never used again). Due to the low bandwidth of current systems, the OTPs are not used directly to encrypt the data; instead they are used to exchange long keys for symmetric ciphers, which then run over normal high-speed communication channels. The QKD hardware provides new keys frequently enough that no symmetric key is used long enough for pattern analysis to break it.
The QKD is done using expensive hardware ($100k and up last time I looked), and IS NOT something that can be implemented in software alone.
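As a toy illustration of the one-time-pad step only (the secure distribution of the pad is exactly the part that needs the QKD hardware and cannot be reproduced in software):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// XOR each byte of the message with one pad byte; the same function
// both encrypts and decrypts. For real OTP security the pad must be
// truly random, at least as long as the message, and never reused --
// distributing such pads securely is what QKD hardware exists to do.
std::string xorPad(const std::string &msg, const std::vector<std::uint8_t> &pad)
{
    std::string out = msg;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = char(std::uint8_t(out[i]) ^ pad[i]);
    return out;
}
```

Applying `xorPad` twice with the same pad recovers the original message; the hard (and expensive) part is getting the pad to both ends without anyone observing it.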
--
Join the Campaign to Help Stamp Out and Abolish Redundancy
The preceding is courtesy of the Bureau of Unnecessarily Redundant Repetition Department.
|
|
|
|
|
dan neely wrote: The QKD is done using expensive hardware ($100k and up last time I looked)
I think that is on the low side, too. Just for starters.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
|
|
|
|
|
Do you have access to lasers and the like? What about photon polarization and the Heisenberg uncertainty principle? What about Bob and Alice?
From what I understand, it's a lot more than just source code; and probably what is available right now must be under lock and key in the hands of a few companies and research departments around the world.
|
|
|
|