|
Oops, did I say C1 continuity? No, it's only C0. (C1 is achievable by using a more elaborate interpolation scheme inside the stitches, giving you smooth curves.)
|
|
|
|
|
Very good post. I had the same thought (that it was a few points scattered in space). The array certainly appears to be close enough to a grid to use a slightly modified version of normal grid-based contour finding.
|
|
|
|
|
I'm trying to construct a database of all the data types defined by the Win32 API.
I'm using the DIA SDK to read pdb files in order to extract the type information.
My question is the following:
The recursive nature of data type declarations makes it very difficult to flatten them into a traditional row and column structure.
Am I doing it the wrong way? Is there an easier structure I could use for my database?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Hi Richard,
I'd suggest you google "tree in database" and read some of the articles, maybe this one: Implementing a Tree Structure with Database[^]
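For what it's worth, here is a minimal sketch of the adjacency-list ("parent pointer") approach those articles describe, in Python with SQLite. The table and column names are made up for illustration, not taken from the DIA SDK:

```python
import sqlite3

# Adjacency-list schema: each type row points to its parent, so a
# recursive type declaration becomes a chain of parent links.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE types (
    id INTEGER PRIMARY KEY,
    name TEXT,
    kind TEXT,            -- 'struct', 'member', 'pointer', ...
    parent_id INTEGER REFERENCES types(id))""")
con.executemany("INSERT INTO types VALUES (?, ?, ?, ?)", [
    (1, "POINT", "struct", None),
    (2, "x", "member", 1),
    (3, "y", "member", 1),
])

# A recursive CTE walks a member back up to its containing type.
rows = con.execute("""
    WITH RECURSIVE chain(id, name, parent_id) AS (
        SELECT id, name, parent_id FROM types WHERE name = 'x'
        UNION ALL
        SELECT t.id, t.name, t.parent_id
        FROM types t JOIN chain c ON t.id = c.parent_id)
    SELECT name FROM chain""").fetchall()
print([r[0] for r in rows])  # ['x', 'POINT']
```

The point is that the rows stay flat; the recursion lives in the `parent_id` links and is only unrolled at query time.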
Luc Pattyn [My Articles] Nil Volentibus Arduum
The quality and detail of your question reflects on the effectiveness of the help you are likely to get. Please use <PRE> tags for code snippets, they improve readability. CP Vanity has been updated to V2.3
|
|
|
|
|
Hello, I have a big maze (800x800) and I need to find the shortest path through it, from one point to another.
I tried a wave algorithm but it is very, very slow; I have to wait for several minutes.
Are there pathfinding algorithms that can accomplish such a task relatively quickly?
|
|
|
|
|
Depending on the type of maze, you can just stick to the right wall the entire way and you will eventually get out of the maze, though it won't be the shortest path. That wouldn't work if the destination is on an island (e.g., where the ghosts go in Pac-Man). However, a properly implemented flood fill (maybe this is the same as your "wave algorithm") should find the exit fairly fast. It certainly shouldn't take minutes; it should run in a few milliseconds on a modern computer. Can you post your implementation of the algorithm that is too slow?
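For reference, a flood fill over a grid is just a breadth-first search that visits each cell at most once, which is why it should be fast even at 800x800. A minimal sketch (the 0/1 grid encoding is an assumption):

```python
from collections import deque

def flood_fill_distance(grid, start, goal):
    """BFS over a grid of 0 (open) / 1 (wall) cells.
    Each cell is enqueued at most once, so the whole maze
    is scanned in linear time."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # goal unreachable (e.g. walled off)

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(flood_fill_distance(maze, (0, 0), (0, 2)))  # 6
```

If a "wave algorithm" takes minutes, the usual culprit is re-visiting cells; the `dist` dictionary above is what prevents that.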
|
|
|
|
|
Finding a path is easy and can be done quickly; finding the (or a) shortest path is quite difficult, as you have to consider a lot of possibilities, assuming a non-trivial maze. You have to choose your algorithm carefully, and then implement it even more carefully. And what information do you have: the full map? Or are you supposed to discover the maze while travelling?
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
I have lines on the image control that act like "walls" and I need to bypass them. I also have two points that I need to connect.
|
|
|
|
|
If the map is known beforehand, I think there are basically three ways:
1. start travelling in one direction, and implement backtracking (e.g. the right-hand-on-wall approach). This is simple but may be slow.
2. perform what looks like a flood fill, i.e. start in all directions and visit everything for sure (look at the A* algorithm).
3. turn the map into a graph and analyze the graph (sorry, no reference handy).
I'm not sure what is best when the goal is "find a shortest path with minimal effort"; I'm not sure any of the above is always the best pick anyway.
Luc Pattyn [My Articles] Nil Volentibus Arduum
modified on Friday, June 10, 2011 8:50 PM
|
|
|
|
|
Is this how your "wave algorithm" works?
|
|
|
|
|
Probably the best way is to convert the maze to a connectivity graph; this will get the data size down. Intersections are nodes and corridor lengths give the weighting.
For this you need some kind of linked structure, each node having a pointer to all connected nodes with a weighting for each link.
With that you can use either brute force, with a recursive algorithm that takes every path (but beware of loops), if it is small, or apply something like Dijkstra's algorithm (see Wikipedia) to it.
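A minimal Dijkstra sketch over such a weighted connectivity graph (the dictionary-of-lists representation and the node names are purely illustrative):

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, weight), ...]} -- e.g. maze
    intersections as nodes, corridor lengths as weights.
    Returns the length of a shortest path, or None."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None

# Tiny example: the direct corridor A->B (4) loses to A->C->B (3).
corridors = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2)],
    "B": [("D", 5)],
}
print(dijkstra(corridors, "A", "D"))  # 8
```

The priority queue is what avoids the loop problem mentioned above: a node is only expanded via its cheapest known distance.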
DC
Always remember that you are unique- just like everybody else.
|
|
|
|
|
Sorry for the late response, I was going back over some of the old entries in the forum that I had not reviewed yet.
I have a question (or observation as the case may be) about the "wave algorithm" as it applies to a maze.
If you start at either point and flood fill, you may go down many unproductive paths (in parallel; the wave front expands equally from all points). It takes time to fill all the dead ends. Eventually the other point will be found.
What about starting the flood fill at both points? You may fill some dead ends from both points, but when you finally collide with the other wave front, it is "Game over!", stop filling the dead ends. OBTW, the flooding must be with different colors from each point to be able to correctly identify looping paths from either end - you need to find the opposite wave color to claim a complete path.
To identify the actual path from the first point to the second point, save (in a solution array that has the same size as the array) the x-y coordinates (or point ID of some kind) of the originating point that was being processed when this new point was flooded, into the array position for the new point. Do this for points starting from the first point (the first color). For the points starting from the second point (the second color), save the new point ID in the current point array position so the path is consistent. To walk the path, start at the second point in the solution array and backtrack to the first point. If you are processing blocks of points in a maze (i.e., 8x8), then you only need a solution array big enough for the number of blocks, not big enough for the number of pixels.
Dave.
|
|
|
|
|
Sorry for the double post.
What was I thinking!
This will not work:
To identify the actual path from the first point to the second point, save (in a solution array that has the same size as the array) the x-y coordinates (or point ID of some kind) of the originating point that was being processed when this new point was flooded, into the array position for the new point. Do this for points starting from the first point (the first color). For the points starting from the second point (the second color), save the new point ID in the current point array position so the path is consistent. To walk the path, start at the second point in the solution array and backtrack to the first point. If you are processing blocks of points in a maze (i.e., 8x8), then you only need a solution array big enough for the number of blocks, not big enough for the number of pixels.
What it should read is:
To identify the actual path from the first point to the second point, save (in a solution array the same size as the maze array) the x-y coordinates (or a point ID of some kind) of the current point that was being processed when this new point was flooded, into the array position for the new point. Do this for all flooded points, starting from either direction. When the collision occurs between the two wave fronts, you have two adjacent positions marked with different flood fill colors. For each of these positions, backtrack to the originating point and mark the positions with a shortest-path color, then scan the entire array and mark all positions that are in either flood fill color with the normal color of the hallway. You will be left with the walls, the halls, the start and end points, and the shortest path between the points. If you are processing blocks of points in the maze (e.g., 8x8 blocks in an 800x800 maze), then you only need a solution array big enough for the number of blocks, not big enough for the number of pixels.
Dave.
|
|
|
|
|
The default answer to a question like this is A*. However, David is right that you probably don't genuinely have an 800x800 problem space: you should condense your image to a connectivity graph first and use Dijkstra, or at least reduce the resolution of the computational grid (for example, if the maze is made of 8x8-pixel squares, you can reduce the path space to 100x100).
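For the record, A* is essentially Dijkstra plus an admissible heuristic that steers the search toward the goal. A minimal grid sketch using the Manhattan distance (the 0/1 grid encoding is an assumption):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 0/1 grid, 4-way movement. The Manhattan-distance
    heuristic never overestimates here, so the result is optimal."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}                    # best known cost from start
    heap = [(h(start), start)]        # ordered by g + h
    while heap:
        _, cell = heapq.heappop(heap)
        if cell == goal:
            return g[cell]
        r, c = cell
        for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cell] + 1
                if ng < g.get(nbr, float("inf")):
                    g[nbr] = ng
                    heapq.heappush(heap, (ng + h(nbr), nbr))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (2, 0), (0, 0)))  # 6
```

With `h` returning 0 this degenerates to Dijkstra; the heuristic only prunes the order in which cells are explored.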
|
|
|
|
|
I tried a wave algorithm but it is very, very slow; I have to wait for several minutes.
Given the speed of today's computers and a relatively small problem space, that could mean only one thing: your implementation is not memoizing[^]. With only 640K points to visit, a program that remembers intermediate solutions as it discovers them should finish in under a few milliseconds. This is because a properly coded (i.e. with memoization) wave algorithm visits each point exactly once.
P.S. It would be a lot easier to pinpoint the problem if you post your code.
|
|
|
|
|
Store your maze as a 2D image (I bet it already is). Paint the walls in black, the starting pixel in red, the ending pixel in green and all others in white.
Keep an "active list" with the (coordinates of) pixels that can be reached from the ending point in at most N steps.
---- Initialize the active list with the ending pixel (N=0).
---- For increasing N, scan the active list:
-------- For every pixel in the list:
------------ Consider its white neighbors, if any
------------ Paint them in navy, ecru, wheat or salmon, depending on the neighbor-to-pixel direction
------------ Replace the pixel by its neighbors in the list
---- Stop as soon as you meet the red pixel.
The pixel colors will allow you to trace the path.
Properly implemented with arrays in a compiled language, this will run in about 100 ms or less for an 800x800 image.
|
|
|
|
|
I have a stream of data coming from a respiration belt and I need a good, robust algorithm to help me identify the peaks and troughs as the data comes in (i.e. live data, not later analysis). The data is roughly sinusoidal with a period of maybe 2-4 seconds, polled about once every 100 ms. The data is noisy, but not too bad: we sometimes see sudden spikes if the person wearing the belt fidgets around, but if they are sitting still, it's fairly clean.
Obviously you can't identify a peak or a trough until after it's happened, but I'm trying to find the best and fastest way to identify them so we can use them as a trigger for something else. I understand that there are trade-offs involved.
I tried calculating an instantaneous derivative by comparing the change in signal between the current point and the second-to-last point and then looking for when that value falls within some threshold. That kinda works, but it's a bit touchy. I tried to improve it by checking that when a point falls within the threshold it also has the opposite sign to the last identified peak/trough, to avoid having multiple points right around the peak identified as the peak. That helps a little. I also tried introducing, basically, a filter such that when a peak (or trough) is identified, I stop looking for another one for some period of time (say 1 second).
Does anybody have any better ideas?
|
|
|
|
|
Wjousts wrote: I stop looking for another one for some period of time (say 1 second).
Rather than that, you could wait for the slope (a.k.a. the instantaneous derivative) to reach a high value before allowing another low value to be detected. You can call this wait mode.
Wjousts wrote: I tried calculating an instantaneous derivative by comparing the change in signal between the current point and the second to last point and then looking for when that value falls within some threshold.
I'm not sure what you mean by "second to last point", but I'd say rather than looking at such an instantaneous derivative, look at the average slope of a few points. For example, take 4 data points to calculate a slope. Take a few of those in a row to ensure the correct pattern is being followed: if the current slope is less than the previous slope, and the previous slope is less than the one before that, then you can be reasonably certain the slope is actually decreasing. Basically, you want to be sure the data is trending as you expect it to. When it falls below a threshold, your minimum has been reached. Also, make sure to check that the slope has passed the minimum (e.g., a negative slope followed by a positive one). Once that happens, you can enter wait mode before detecting a low slope again.
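A rough sketch of that averaged-slope scheme in Python; all window sizes and thresholds below are made-up values that would need tuning against real belt data:

```python
import math

def detect_troughs(samples, dt=0.1, slope_window=4,
                   descent_threshold=0.5, refractory=10):
    """Average the slope over `slope_window` points. Arm the detector
    once a genuine descent is seen (slope below -descent_threshold),
    then flag a trough at the negative-to-positive sign change.
    `refractory` samples are skipped afterwards (the 'wait mode').
    All parameter values are illustrative, not tuned."""
    troughs = []
    prev_slope = None
    descending = False
    wait = 0
    for i in range(slope_window, len(samples)):
        slope = (samples[i] - samples[i - slope_window]) / (slope_window * dt)
        if wait > 0:
            wait -= 1
        elif slope < -descent_threshold:
            descending = True            # a real descent is under way
        elif descending and prev_slope is not None \
                and prev_slope <= 0 < slope:
            troughs.append(i - slope_window // 2)  # approximate trough index
            descending = False
            wait = refractory
        prev_slope = slope
    return troughs

# Synthetic "breathing": a clean sine, troughs near samples 24, 55, 87.
data = [math.sin(t * 0.2) for t in range(100)]
print(detect_troughs(data))
```

Requiring the descent first is what suppresses the multiple-detections-around-the-peak problem mentioned in the question; the refractory period is a belt-and-braces second line of defense.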
Wjousts wrote: Does anybody have any better ideas?
One more idea. Log some real data and save that so you can perform multiple tests on it. Perhaps start by graphing it out. Look for deviations from the expected pattern so you can account for them (e.g., the spikes you mentioned). Keep running your algorithm through that sample data until you have it working.
|
|
|
|
|
AspDotNetDev wrote: I'm not sure what you mean by "second to last point"
What I'm trying to do is basically this[^].
So, given a new point, the nth point, I can calculate the gradient at the previous point, n-1, by dividing the difference between the values of n-2 (the second-to-last point) and n by the difference in time between n-2 and n. I.e. I get an approximation of dV/dt. According to Wikipedia, the three-point approximation is somehow better than the two-point version?
Actually, looking at this again made me realize that I've been attributing the gradient to point n, when in fact it belongs to n-1.
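For reference, the two approximations side by side. The three-point (central) difference is indeed the better one: its error shrinks with the square of the step size rather than linearly, and as noted it estimates the derivative at n-1, not at n:

```python
import math

def backward_diff(v, t, n):
    """Two-point: slope between samples n-1 and n, attributed to n."""
    return (v[n] - v[n - 1]) / (t[n] - t[n - 1])

def central_diff(v, t, n):
    """Three-point: slope between samples n-2 and n. This estimates
    the derivative at the middle sample, n-1."""
    return (v[n] - v[n - 2]) / (t[n] - t[n - 2])

# On a parabola v = t^2 the central difference is exact at n-1:
t = [0.0, 0.1, 0.2, 0.3]
v = [x * x for x in t]
print(central_diff(v, t, 2))  # ~0.2, the true derivative at t=0.1
```

The backward difference on the same data gives 0.1 at n=1, which is the true derivative at t=0.05, halfway between the samples; that half-sample smearing is the "somehow better" the Wikipedia article is getting at.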
|
|
|
|
|
As it is more or less periodic, consider Fourier analysis (google FFT). Keep the most recent 2^n samples, watch the fundamental frequency and especially its phase.
Warning: if the harmonics are sufficiently large, the peaks and troughs will not coincide with the fundamental's phase; but then, which is it you actually want?
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
I've seen FFT before, but I didn't think it was useful for what I'm trying to do. I always thought it was more useful for post hoc analysis of data. I need to decide, as close as possible to when the event happens, when I hit a peak (and/or trough). Is there a way to use FFT for that? If I took a window of a few points (maybe 5 - 10), can I do something with FFT to tell me if there is a peak or trough in there?
|
|
|
|
|
If your signal is rather sinusoidal, I would definitely go for Fourier analysis. Take a sliding window, say 64 samples (that is 6.4 seconds, anywhere between 1 and 3 periods I guess); I would not go for a window shorter than 1 period. Calculate the FFT, note the base frequency and phase (and maybe check that the harmonics are indeed small). After 100 msec, drop the oldest sample, add the new one, and repeat.
If your signal is not sinusoidal at all, and you need the real peak/trough, then the above would not be good enough; I'm not sure anything else would under those conditions though.
Do you have some graphs to show us? And how accurate are the time points? Is the sample period close to 100 msec, or is there a lot of jitter (say 100 msec plus/minus 20 msec)? More jitter would cause much reduced accuracy. If there is significant jitter and the actual time is known, there may be ways to take advantage of that too.
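A sketch of one step of that sliding-window analysis, using NumPy's FFT. The test signal is deliberately placed on an exact frequency bin so the example comes out clean; real breathing data won't be, and the peak bin will only approximate the true fundamental:

```python
import numpy as np

def fundamental(window, dt=0.1):
    """Return (frequency in Hz, phase in radians) of the strongest
    non-DC component of one window of samples."""
    spectrum = np.fft.rfft(window * np.hanning(len(window)))
    k = 1 + np.argmax(np.abs(spectrum[1:]))   # skip the DC bin
    freq = k / (len(window) * dt)
    phase = np.angle(spectrum[k])
    return freq, phase

# 64 samples at 10 Hz of a 0.3125 Hz sine (exactly bin 2).
t = np.arange(64) * 0.1
window = np.sin(2 * np.pi * 0.3125 * t)
freq, phase = fundamental(window, dt=0.1)
print(round(freq, 4))  # 0.3125
```

In the live setting you would call this every 100 ms on the latest 64 samples and watch how the phase advances; the Hanning window tames the leakage from the window edges as the signal slides through.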
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
Here's a graph showing a slice of some fairly clean data (look at the purple line; the blue line is the three-point derivative, and the triangles are one attempt at peak picking, which actually doesn't look too bad):
http://i55.tinypic.com/2uhwiyv.png[^]
I'd post the source data, but I don't have a good place to put it (suggestions are welcome).
The time points seem to be fairly accurately spaced at about 109 ms, but it relies on the accuracy of comparing DateTime objects in .NET, so it's plus or minus whatever that resolution is.
|
|
|
|
|
Ah, it is good to see some actual data. What isn't clear is this:
- what is the horizontal scale? What goes from 6 to 34: is this seconds?
- did you fit a curve, or is it just consecutive input points connected with straight lines?
Either way, it seems quite well behaved. The frequency anomaly in [16, 18] and the level drop in [6, 18] would not be dealt with well using Fourier analysis.
Now, what reaction time do you need? How many msec are you willing to wait before the latest peak/trough gets signaled?
(See another part of this thread.) If a 300 msec delay is no problem, I would definitely go for a convolution filter.
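The simplest convolution filter is a moving average; a sketch, where the kernel length is a guess to be tuned, and note that the filter itself adds delay:

```python
def smooth(samples, kernel_len=5):
    """Moving-average convolution: each output sample is the mean of
    the last `kernel_len` inputs (fewer at the start of the stream).
    It adds about (kernel_len - 1) / 2 samples of delay, i.e. ~200 ms
    at 100 ms/sample for kernel_len = 5."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - kernel_len + 1)
        window = samples[lo:i + 1]
        out.append(sum(window) / len(window))
    return out

# Alternating noise collapses to its mean:
noisy = [0, 10, 0, 10, 0, 10]
print(smooth(noisy, 2))  # [0.0, 5.0, 5.0, 5.0, 5.0, 5.0]
```

This is causal (it only looks backwards), so it can run on the live stream; the delay it adds has to fit inside whatever reaction-time budget the application allows.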
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
Sorry, I was too lazy to label the axes. I used to beat up on students over that when I was in grad school.
Yes, the x-axis is time in seconds. I cut the first 6 seconds off because they were a bit crappy (the signal was saturated for a few seconds at the beginning, for reasons unknown). The vertical scale is arbitrary; it's proportional to voltage after it's passed through an ADC to give me a value between 0 and 4096.
I think 300 ms would probably be acceptable (but much more would probably be pushing it). Basically I'd like to do something during exhalation so that it'll be ready by the next inhalation. I played with the convolution filter a little and it does help remove some of the smaller shoulders and spurious peaks, so I think I might use it in combination with some other ideas and see what I end up with. If it helps, I think false positives (i.e. wrongly IDing a peak when it isn't one) are probably worse than false negatives (i.e. missing a real peak), so I might fiddle with it to see if I can trade off those objectives.
Edit: forgot to add, no, the line isn't fitted; it's just the points connected up by Excel.
|
|
|
|
|