Right now I know how to get the mouse location, show the mouse, and handle left click, right click and such. I am wondering if anyone has a formula for an AI path, e.g.:
o = your sprite
| = wall (can't pass through)
' = floor (can pass through)
* = pathway for the sprite
C = the clicked spot for the path to go to
Here's my example:
o'''''
'*|'''
'****C
I hope you understand. Basically, I want it to go to a certain location when clicked, and if there is an object in its way it will use a formula to get around it.
All this is in 2D, by the way.
And if you have the formula, can you please make it as simple as possible?
|
Ty. A* seems the simplest, but it's still a lot and will take a while to understand.
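For what it's worth, plain breadth-first search is an even simpler place to start on a grid like the one in the question; A* is essentially BFS with a distance heuristic added to the queue ordering. A minimal Python sketch (the grid format, the `find_path` name, and using `|` as the wall character are my own assumptions, not from any library):

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D grid of strings.
    '|' cells are walls; everything else is walkable.
    Returns the shortest list of (row, col) cells from start
    to goal (inclusive), or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []                  # walk back to rebuild the path
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '|'
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None
```

With the map from the question (keeping only the wall), `find_path(["o'''''", "''|'''", "'''''C"], (0, 0), (2, 5))` returns a shortest route of eight cells that skirts the wall. Swapping the plain queue for a priority queue ordered by path length plus straight-line distance to the goal turns this sketch into A*.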
|
So I'm working on a side project at the moment dealing with computer vision, and I find myself needing to identify circles of an unknown size in an image. I've found a lot of information online about using the Hough Transform for circles, and MANY variations of that transform. Is there anything else out there that can be used for this purpose? I'm looking for something else that is quicker than the Hough Transform, and I am willing to sacrifice some accuracy to achieve this.
Please note that I am not looking for a library or tool to do this for me (like OpenCV), I've found plenty of them, and they all use the Hough Transform. I'm looking for an actual algorithm or related research.
Be The Noise
|
AFAIK Hough is the best available. When the circles are prominent, i.e. have some thickness, you could reduce the resolution of your image so the thickness of the circle(s) becomes, say, 2 pixels; that should give a considerable performance improvement.
And of course image processing is a field where you can efficiently apply multi-threading, as well as gain performance by putting locality of reference first (i.e. deal with bands or small areas, not entire images at once).
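To illustrate the resolution-reduction idea, here is a toy sketch in pure Python (the `downsample` name and the list-of-lists grayscale format are mine; a real app would use the platform's imaging APIs, which do this far faster):

```python
def downsample(image, factor):
    """Shrink a grayscale image (list of rows of ints) by averaging
    factor x factor blocks. Thick circle strokes survive the shrink,
    and detection then runs on factor^2 fewer pixels. Any ragged
    right/bottom edge that doesn't fill a whole block is dropped."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out
```

Averaging (rather than just picking every factor-th pixel) doubles as a cheap blur, which also helps suppress single-pixel noise before a Hough or erosion pass.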
|
Thanks for the suggestions! I'll definitely try reducing the resolution and try to utilize more multi-threading (though this is for a mobile app, so the benefits of multi-threading aren't THAT great). I've been playing with blur and color changes as well to speed things up.
Be The Noise
|
You should also think about using the GPU for such tasks, which can improve performance a lot.
|
The erosion operator (http://en.wikipedia.org/wiki/Erosion_%28morphology%29[^]) can detect circles faster than the Hough Transform.
You have to know the size in advance, though, although you can do N searches for N different diameters. (You'd have to do N searches for different sized circles using the Hough Transform also.)
Are the circles drawn as just the circumferences, or are they filled in?
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
Very nice, thanks! I'll check it out and let you know if it works out. While the sizes of the circles change a bit, it's not too bad to just go through a few diameters.
The circles are filled, though I could do edge detection to get rid of that if needed.
Be The Noise
|
The Wikipedia article makes it look harder than it is. Erosion (binary) can be easily implemented as only shifts and ANDs.
To recognize a circle:
1. Take an arc that's half the circle's circumference, and divide it into N segments. Each segment is a short vector.
2. For each vector, shift the image by that vector and AND it with the original image.
3. When you're done, pixels will remain only at the regions that were at the center of (at least) a circle of the original size.
4. Starting at the larger diameters will enable you to remove them first, so you can recognize the smaller diameters later.
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
haha, you must've been reading my mind
This makes it much easier to implement. Thanks!
Be The Noise
|
Looking at this again, I realized Step 2 could be misinterpreted:
"2. For each vector, shift the image by that vector and AND it with the original image."
By "original image", I mean the image before the shift.
So,
foreach (vector in Vectors)
{
    previousImage = image;
    image.shiftBy(vector);
    image.andWith(previousImage);
}
And all remaining pixels in 'image' are contained within (at least) a circle of the given radius.
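A runnable sketch of the same idea in plain Python (the `erode_by_circle` name and the list-of-lists image format are my own; for clarity it tests each circle offset per pixel, whereas a fast version would do whole-image shifts and bitwise ANDs as the pseudocode above does):

```python
import math

def erode_by_circle(binary, radius, n_points=16):
    """Keep only pixels that sit at the centre of (at least) a filled
    circle of the given radius. 'binary' is a list of rows of 0/1.
    A pixel survives only if every sampled point on the circle
    around it is also set."""
    h, w = len(binary), len(binary[0])
    # Sample n_points positions around the circle; each one plays the
    # role of a shift vector from the pseudocode above.
    offsets = [(round(radius * math.sin(2 * math.pi * k / n_points)),
                round(radius * math.cos(2 * math.pi * k / n_points)))
               for k in range(n_points)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] and all(
                    0 <= y + dy < h and 0 <= x + dx < w
                    and binary[y + dy][x + dx]
                    for dy, dx in offsets):
                out[y][x] = 1
    return out
```

Run on a filled disk, only a small cluster around the disk's centre survives; everything thinner than the probe circle is wiped out, which is what makes this usable as a detector.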
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
Hello,
When it comes to image processing tasks, I would say it is much easier to discuss when there are a few sample pictures available (if there are no confidentiality restrictions, of course). Talking about circles... in some cases you can simplify things a lot by finding stand-alone blobs/objects in a picture and then doing further shape analysis on those...
|
Hi Andrew,
There is no confidentiality, and I have many samples of the images, but it would probably be easier to get some samples yourself. I'm working on a mobile app to identify traffic lights and tell me what color it is as I drive. I've found a lot of research on the topic, but most of the research methods use extra computers in the trunk of the car, so it doesn't work too well on a consumer smart phone.
I've actually been using some of the algorithms in the AForge library to identify the circles (great work, by the way). Reducing the resolution before I use the camera, and some blurring, have helped a lot. I also use some color filtering to make sure I'm only looking for the colored lights within a certain threshold (red, amber, green). I've also been toying with the accelerometers to do some coarse localization so I don't have to scan the entire image. Altogether, I'm getting some decent results, but I still need to put in a lot more time on the project. This is just something I'm doing for fun, not anything work related.
Right now I'm really dealing with false positives due to street lamps and other cars' brake lights, which is another reason I've been trying to localize the scanning. I'm also working through some instances where the traffic light is backlit by a street lamp at night, or the sun during the day, which makes it very hard to spot; but I'm thinking some white balance can help with that.
Thanks for chiming in! If you have any ideas that you think may help with this, please feel free to pass it along!
Be The Noise
|
You know, the green in traffic lights actually has a lot of blue in it as well, for the colour blind.
|
Does anybody know an algorithm to recognize trends in 2D line charts? Something that, for example in this chart, returns an array with coordinate pairs A/B and B/C?
|
If you know what form of equation the data should follow, least squares (Google has some good references) will fit an equation to a set of data. It can need some matrix juggling, but that's what computers are for...
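As a sketch of the least-squares idea (the `fit_line` name is mine): for the simplest model, a straight line y = a*x + b, the matrix juggling collapses into four running sums via the normal equations:

```python
def fit_line(points):
    """Ordinary least squares fit of y = a*x + b to (x, y) pairs.
    For a straight line the normal equations reduce to closed-form
    sums, so no general matrix solver is needed."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Fitting one line per suspected segment (A to B, B to C) then gives you a slope for each leg of the trend; higher-order models are where the real matrix work starts.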
|
I'm going to guess this is to do with financial markets. I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
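A minimal sketch of that approach in Python (the function names are mine, not from any library): smooth with a moving average, then flag the indices where the smoothed series changes direction.

```python
def moving_average(values, window):
    """Simple moving average; the first window - 1 points are dropped."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def turning_points(values):
    """Indices where a series changes direction: candidate peaks and
    troughs, like B and C in the chart. Run this on the smoothed
    series, not the raw one."""
    return [i for i in range(1, len(values) - 1)
            if (values[i] - values[i - 1])
               * (values[i + 1] - values[i]) < 0]
```

The window size is exactly the preconception mentioned above: a bigger window hides more of the little bumps between B and C, and moves the detected turning points accordingly. (Exactly flat stretches give a zero product and are not flagged.)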
|
BobJanova wrote: I'm going to guess this is to do with financial markets.
You guessed right.
BobJanova wrote: I've been there myself, and I'll warn you, those trends are not nearly so real as the eye makes them look!
Can you tell me more about that? Why do you think so? Do you know any good knowledge sources on that topic?
BobJanova wrote: The simplest approach is to smooth out the 'noise' (all the little bumps between B and C) by applying a moving average, gaussian smooth or similar to the data, and then look for peaks and troughs in the smoothed signal. Alternatively you can differentiate the smoothed version which will give you a trend measurement and then look for where that is positive or negative (essentially the same thing from a different angle). But that means you are applying a preconception as to what is 'noise' and what is 'real data' which obviously affects the answer you get.
I'll think about that. What would be a more difficult approach?
Thanks for the answer, really helpful!
|
It might be worth pointing out that nobody in the >200-year history of markets has ever been able to perform technical/trend analysis and reliably beat the market.
There is a strong argument for why this is the case, which you should understand in detail first.
http://en.wikipedia.org/wiki/Efficient-market_hypothesis[^]
|
Can you tell me more about it? why you think that?
Experience. I (and some partners) started down this road for a while, and we found it extremely difficult to produce an objective measurement that matched what the eye can see. Trends in markets appear to be fractal and (obviously, otherwise there'd be easy free money) non-deterministic. Furthermore, as the other post explains, there are good reasons to believe that markets are self-correcting so that any simple* analysis is by definition useless for predictive purposes.
A more complex approach is to look at features of secondary indicators, for example volatility or trade volume, in addition to the price. Some economists will tell you that while price is not predictable, volatility sometimes is – though whether that helps you predict price (essentially what you're trying to do in order to make money) is less clear!
*: The feedback mechanism that causes markets to be 'efficient' is that, if a clear inefficiency is seen, traders will take advantage of it until it is no longer profitable. Thus a simple analysis in this context is one that a sufficiently large proportion of market participants have access to. Realistically, if an investment bank knows how to do what you are trying, it won't make money.
|
I'm not trying to predict future prices based only on past chart patterns (I never thought technical analysis would work). What I'm trying to do is find the causes of those patterns. If a message is released exactly at the turning point of a chart, and if similar chart/message patterns occur often (I know, this is the second big difficulty: how do you define text similarity?), I'd guess the probability of a similar message causing a similar chart pattern is high.