|
Try a running average of the last X samples and subtract it from the previous average. When the sign of that difference changes, you have a peak or trough. You can increase X to filter out the noise from the patient moving.
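A minimal sketch of this idea in Python (the window size and exact smoothing are tuning choices, not anything prescribed above):

```python
# Sketch of the running-average sign-change idea: smooth with a window of X
# samples, difference consecutive averages, and report sign changes.
def find_peaks_troughs(samples, window=5):
    """Detect peaks/troughs as sign changes in the smoothed derivative."""
    events = []
    prev_avg = None
    prev_diff = 0.0
    for t in range(window, len(samples) + 1):
        avg = sum(samples[t - window:t]) / window  # running average of last X samples
        if prev_avg is not None:
            diff = avg - prev_avg                  # subtract the previous average
            if prev_diff > 0 and diff < 0:
                events.append((t, "peak"))
            elif prev_diff < 0 and diff > 0:
                events.append((t, "trough"))
            if diff != 0:                          # ignore flat stretches
                prev_diff = diff
        prev_avg = avg
    return events
```

Raising `window` trades responsiveness for robustness to movement artifacts.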
|
|
|
|
|
I've done this with heart rate variability (HRV) and it's rather simple. Determine the frequency range of your useful data and set up a band-pass IIR or FIR filter to remove all unwanted noise. Use MATLAB or a similar tool to determine the coefficients of the filter. IIR filters are very easy to implement in code, as they are just a few floating-point multiply ops. After your data is filtered, peaks can be easily identified with a simple rule, e.g. sample[t-1] is a peak if sample[t-1] > sample[t-2] and sample[t-1] > sample[t]. It all comes down to the design of the filter (a compromise between delay and bandwidth).
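A rough sketch of the filter-then-compare approach. The coefficient arrays here are placeholders; real b/a arrays would come out of a MATLAB (or similar) band-pass design, as the post says:

```python
# Direct Form I IIR filter: y[t] = b0*x[t] + b1*x[t-1] + ... - a1*y[t-1] - ...
# b and a are the designed filter coefficients (a[0] is assumed to be 1).
def iir_filter(x, b, a):
    """Apply an IIR filter with numerator coefficients b and denominator a."""
    y = []
    for t in range(len(x)):
        acc = sum(b[i] * x[t - i] for i in range(len(b)) if t - i >= 0)
        acc -= sum(a[j] * y[t - j] for j in range(1, len(a)) if t - j >= 0)
        y.append(acc)
    return y

def peak_indices(y):
    """After filtering, a peak is simply a sample larger than both neighbours."""
    return [t for t in range(1, len(y) - 1) if y[t - 1] < y[t] > y[t + 1]]
```

With `b = [1.0]` and `a = [1.0]` the filter is the identity, which makes the peak rule easy to test in isolation.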
|
|
|
|
|
I found the following article useful when I was trying to find peaks and troughs in some data:
http://billauer.co.il/peakdet.html
Looking at your graph, the only slight hiccup would be at 31s, but I think you can set the delta value appropriately to overcome this.
|
|
|
|
|
Hi,
One good way to remove spikes is to use a median filter. Keep a window of the data, sort the values in the window, and use the one in the center of the range.
I don't know if that is appropriate for a real-time application, but we use it for seismic data post-processing all the time.
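A minimal sketch of a sliding-window median filter (the window size is a tuning choice; note that a centred window adds about half a window of latency, which is the real-time caveat mentioned above):

```python
# Median filter: for each sample, sort the surrounding window and take the
# centre value. Spikes never reach the centre of the sort, so they vanish.
def median_filter(data, window=5):
    """Replace each sample with the median of the window centred on it."""
    half = window // 2
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        win = sorted(data[lo:hi])        # sort the values in the window...
        out.append(win[len(win) // 2])   # ...and use the one in the centre
    return out
```

Unlike a running average, this removes a one-sample spike completely instead of smearing it across the window.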
Is that sleep-study data? I had to have one a few weeks ago, I have sleep apnea!
Regards,
Tom
|
|
|
|
|
I'm sure there's a relatively straightforward way to do this, but my research so far has been a bit overwhelming... Heavy statistics and calculus have never been among my strengths...
Basically, I need to calculate a best-fit curve on a scatter graph... Points are not evenly spaced on either axis, and there's no cost function to work with (These are derivations of stock/bond price histories)... Just raw (X,Y) values.
I've been trying to work out an algorithm that'll start with a second-degree polynomial function, iterate through and minimize the difference, and then jump to third-degree if it's not within tolerance (And so on, up to a maximum degree). Juggling the data around is something I can do blindfolded, but there's one thing I can't figure out...
Given the polynomial coefficients ((a,b,c,d) -> y = a + bx + cx^2 + dx^3) and a function to determine the total difference across the curve (Or the total square difference, or whatever's needed), how do I adjust the coefficients for the next pass? I mean, how do I choose new values?
I was originally going to try to fix the highest-degree coefficient first, then drill down to the zeroth-degree one (I have a root-solving algorithm that I use for yield calculations and was going to adapt that), but then I looked at it again with the proper amount of caffeine and realized that solving one coefficient at a time wasn't going to work.
SOLVED: See my reply below... Linear/Polynomial Regression
|
|
|
|
|
Not sure if this will help, but have you tried a spline interpolation algorithm?
|
|
|
|
|
Kind of the opposite of where I'm going here... The goal isn't to pass through every point, but to give more of a trend-line.
Generally going to be working with a few hundred data points, but only looking for maybe a second or third-degree function in most cases.
<small><i>Proud to have finally moved to the A-Ark. <a href="http://en.wikipedia.org/wiki/Places_in_The_Hitchhiker%27s_Guide_to_the_Galaxy#Golgafrincham">Which one are you in?</a></i><br>Author of the <a href="http://www.serpentooth.com">Guardians Saga</a> (Sci-Fi/Fantasy novels)</small>
|
|
|
|
|
OK, now I understand it better. Then maybe Bezier curves would be a better choice.
|
|
|
|
|
Hi Ian,
I don't think there are any closed formulas for what you are trying. What I would recommend is this, inspired by Newton-Raphson:
1. Have some values a1, b1, c1, d1 and the corresponding value of your error function, e1.
2. Slightly change a1 (say by 1%) and recalculate e; the "a-sensitivity" is (change in e)/(change in a).
3. Do the same for b1, then c1, then d1.
4. Now choose new values a2, b2, c2, d2 based on the four sensitivities, maybe like so: a2 = a1 - coef*e/"a-sensitivity", where coef might initially be one.
Then iterate until convergence is reached (i.e. the fit is sufficiently accurate). Warning: this might diverge right away; if so, try smaller values of coef (you could halve it on each consecutive try).
FWIW: with more-or-less random data, you'll never find a good fit with a single polynomial. What one normally does is a local approximation, which applies in a small region only. That is similar to a font being drawn as a sequence of Bezier curves; it pays off to have several simple Bezier curves rather than trying to have a single very complex one (which in general will not be good enough).
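The four steps above can be sketched numerically like this. This version uses the finite-difference sensitivities as a gradient and steps against it, which is a slightly more conservative update than e/sensitivity; the error function, starting values, and step size are all illustrative choices:

```python
# Numeric sensitivity descent for fitting y = a + b*x + c*x^2 + d*x^3.
# coef is the step-size knob from step 4: halve it if the iteration diverges.
def fit_cubic(xs, ys, steps=2000, coef=0.005):
    def err(c):
        a, b, c2, d = c
        return sum((y - (a + b*x + c2*x**2 + d*x**3))**2 for x, y in zip(xs, ys))
    c = [0.0, 0.0, 0.0, 0.0]
    for _ in range(steps):
        e1 = err(c)                               # step 1: current error
        sens = []
        for i in range(4):
            delta = abs(c[i]) * 0.01 or 1e-4      # steps 2-3: change ~1%
            trial = c[:]
            trial[i] += delta
            sens.append((err(trial) - e1) / delta)
        for i in range(4):
            c[i] -= coef * sens[i]                # step 4: move downhill
    return c
```

On well-behaved data the error shrinks each pass; on badly scaled data you would see it grow, which is the divergence warning above.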
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
Hmm, didn't think of the sensitivity thing... Giving that a shot now.
FWIW: with more-or-less random data, you'll never find a good fit with a single polynomial. What one normally does is a local approximation, which applies in a small region only. That is similar to a font being drawn as a sequence of Bezier curves; it pays off to have several simple Bezier curves rather than trying to have a single very complex one (which in general will not be good enough).
It's not actually random data... It's graphing related values, so it should roughly cluster around y=x unless there's another factor (Which is what I'm trying to illustrate).
|
|
|
|
|
If you suspect some periodicity, I'd suggest you first subtract x; according to what you said, the average then becomes zero, and the first-order approximation would be the x-axis itself. Then consider a Fourier analysis.
|
|
|
|
|
Not a cyclic function, really... Bond valuations are a bit odd, so hard to explain... But dropping it to the X-axis does kinda help the initial approximation.
Still needs a lot more work (got it to approximate the first degree, and working on expanding from there), but the sensitivity approach is working like a charm. Thanks!
|
|
|
|
|
You're welcome.
|
|
|
|
|
Assuming that standard numerical methods such as simplex aren't going to work here, and given that you already have a fitness function available and a few coefficients to tweak, I'm wondering if perhaps a genetic algorithm or something like simulated annealing would work here?
|
|
|
|
|
Thanks for the input, guys... Got me thinking along the right lines, and that got me to the right Google search...
I was searching for things like "curve fitting" and "best fit algorithm"... Turned out the 'proper' term was "linear regression"... And that got me here:
An Algorithm for Weighted Linear Regression
Figures, the solution would be right here on CP.
|
|
|
|
|
I'm surprised that, with all those answers, no one gave you the right approach: polynomial regression: http://en.wikipedia.org/wiki/Polynomial_regression
It will give you a best-fit quadratic or cubic equation (or even higher orders if you want). The Wikipedia link defines it, but the explanation there isn't very intuitive. The basic idea is that you write an equation for the total squared error, take its derivatives with respect to the coefficients, and set them to zero. Solving that system gives you the equation with the minimum error.
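For illustration, setting those derivatives to zero yields the so-called normal equations, which a small self-contained sketch can solve directly (pure Python; no claim about any particular library's API):

```python
# Polynomial regression via the normal equations: differentiate the squared
# error with respect to each coefficient, set it to zero, and solve the
# resulting linear system A @ c = v.
def polyfit(xs, ys, degree):
    n = degree + 1
    # A[i][j] = sum of x^(i+j); v[i] = sum of y * x^i
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    v = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the small system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            v[r] -= f * v[col]
    # Back substitution: coeffs[k] multiplies x^k.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        coeffs[i] = (v[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs
```

On exact quadratic data the fit recovers the generating coefficients to floating-point precision.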
|
|
|
|
|
Well, to be fair, the answers got me thinking along the right lines and led me to the right terms, which brought me right back to one of Walt's articles on CP... See my post above.
Marked the original as solved.
|
|
|
|
|
But if your data set naturally has a curve, a linear model will be inaccurate.
|
|
|
|
|
Well it's multiple linear regression... It lets me specify the degree.
Using Walt's library, it's looking perfect... Using third-degree (x^3, so a possible S-shape in the curve), and releasing the update next week.
|
|
|
|
|
Hello,
I have an image and I must find the shortest paths to connect the points, and the walls must be bypassed.
http://imageshack.us/photo/my-images/863/exampley.png
Can you recommend an algorithm (or algorithms) that will help me to write this program?
|
|
|
|
|
I suggest you start here.
|
|
|
|
|
Look up the maze routing algorithm - it may work for what you're trying to do. Here is an applet that shows you the basic steps of the search (the forward "wave" and the backtracking).
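The wave-and-backtrack steps the applet demonstrates can be sketched as follows (the grid encoding, with '#' for walls, is an assumption for illustration):

```python
from collections import deque

# Lee maze-routing sketch: a BFS "wave" expands from the start, labelling
# each free cell with its distance; backtracking from the goal then follows
# strictly decreasing labels back to the start.
def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:                                        # forward wave
        cell = q.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[cell] + 1
                q.append((nr, nc))
    if goal not in dist:
        return None                                 # goal is walled off
    path = [goal]                                   # backtracking phase
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if dist.get((nr, nc), -1) == dist[path[-1]] - 1:
                path.append((nr, nc))
                break
    return path[::-1]
```

For multiple points, one straightforward (if not globally optimal) approach is to route each pair in turn with this function.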
|
|
|
|
|
This is the code for the Ackermann function.
Private Function T(ByVal m As Double, ByVal n As Double) As Double
    Dim s1 As New Stack
    Dim s2 As New Stack
    If m = 0 Then GoTo Point
Back:
    While n <> 0
        s1.Push(m - 1)
        n = n - 1
    End While
    m = m - 1
    n = 1
Point:
    Select Case m
        Case 0
            If s1.Count = 0 Then
                Return n + 1
            Else
                s2.Push(n + 1)
                m = s1.Pop
                n = s2.Pop
                GoTo Point
            End If
        Case 1
            If s1.Count = 0 Then
                Return n + 2
            Else
                s2.Push(n + 2)
                m = s1.Pop
                n = s2.Pop
                GoTo Point
            End If
        Case 2
            If s1.Count = 0 Then
                Return (2 * n) + 3
            Else
                s2.Push((2 * n) + 3)
                m = s1.Pop
                n = s2.Pop
                GoTo Point
            End If
        Case 3
            If s1.Count = 0 Then
                Return 5 + (8 * ((2 ^ n) - 1))
            Else
                s2.Push(5 + (8 * ((2 ^ n) - 1)))
                m = s1.Pop
                n = s2.Pop
                GoTo Point
            End If
        Case Else
            GoTo Back
    End Select
End Function
|
|
|
|
|
Thanks for sharing. Now, where do we use it?
♫ 99 little bugs in the code,
99 bugs in the code
We fix a bug, compile it again
101 little bugs in the code ♫
|
|
|
|
|
This is not something I'm going to read, as the code is not formatted. What happened to the indentation?
|
|
|
|