Color object tracking: Each particle models the probability of the red color. The particle filter is used to detect and track the red pen.

Template selection: The size, angle and position of a template are modeled by a particle. The particle filter is used to choose the subset of templates that are more probable, thus reducing matching time.
Contents
- Introduction
- Particle Filter
- Implementation
- Conclusion
- References
Introduction
- NuGet packages: Statistics
- Help: Off-line - Statistics.ParticleFilter - unblock after download!
One of the primary tasks in computer vision is object tracking. Object tracking is used in a wide range of applications such as video surveillance, car tracking (distance estimation), people detection and tracking, etc. Object trackers usually need an initialization step such as the initial object location, which can be provided manually or automatically by an object detector such as the Viola-Jones detector or fast template matching. There are several major problems related to tracking:
- occlusion
- multiple objects
- scale, illumination, appearance change
- difficult and rapid motions
- ...
Although the object-tracking problem has been around for years, it is still not solved, and there are many object trackers: some built for special purposes and some generic ones.
The Kalman filter assumes a linear motion model and Gaussian noise and returns only one hypothesis (e.g. the most probable position of a tracked object). If the movements are rapid and unpredictable (e.g. a leaf on a tree on a windy day), the Kalman filter is likely to fail. The particle filter returns multiple hypotheses (each particle represents one hypothesis) and can thus deal with non-Gaussian noise and support non-linear models. Besides object tracking, where the state is a position vector (x, y), the state can be anything, e.g. the shape of a model. This article will explain the main idea behind the particle filter and will focus on its practical usage for object tracking, along with samples.
Particle Filter - Main Idea
A particle filter is a generic algorithm for function optimization where the solution search space is searched using particles (sampling). So what does this mean? In our case, each particle tests how likely it is that the object is at the position of that particle. After the particles have been evaluated, weights are assigned according to how good the particles are. Then the good particles are multiplied and the bad particles are removed through the re-sampling process. The next particle generation then predicts where the object might be. This generation is evaluated in turn, and the cycle repeats.
As opposed to the Kalman filter, the particle filter can model non-linear object motion, because the motion model does not need to be written as a state transition matrix as in the discrete Kalman filter. Moreover, the particle filter is fairly easy to understand, but there is a drawback: the performance of the filter depends on the number of particles, where a higher number of particles leads to a better estimate but is more costly. Nevertheless, the particle filter is widely used for generic function optimization, including object tracking.
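The whole cycle described above can be sketched in a few dozen lines of C#. The following is a minimal, self-contained 1D illustration (all names, noise values and the number of particles are illustrative assumptions, not the library code discussed later):

```csharp
using System;
using System.Linq;

class MinimalParticleFilter
{
    static readonly Random rng = new Random(42);

    public static double Run()
    {
        int n = 500;
        double truePosition = 3.0;

        // 1) Initialization: scatter particles uniformly over [0, 10].
        double[] states = Enumerable.Range(0, n)
                                    .Select(_ => rng.NextDouble() * 10)
                                    .ToArray();
        double[] weights = Enumerable.Repeat(1.0 / n, n).ToArray();

        for (int iter = 0; iter < 20; iter++)
        {
            // 2) Predict: no drift (no motion model), diffuse with noise.
            for (int i = 0; i < n; i++)
                states[i] += Gaussian(0, 0.2);

            // 3) Update: weight each particle by the likelihood of the
            //    noisy measurement given the particle's state.
            double z = truePosition + Gaussian(0, 0.1);
            for (int i = 0; i < n; i++)
                weights[i] = Math.Exp(-Math.Pow(states[i] - z, 2) / (2 * 0.1 * 0.1));
            double sum = weights.Sum();
            for (int i = 0; i < n; i++) weights[i] /= sum;

            // 4) Resample: good particles are multiplied, bad ones die out;
            //    afterwards the weights become uniform again.
            double[] resampled = new double[n];
            for (int i = 0; i < n; i++)
                resampled[i] = states[Sample(weights)];
            states = resampled;
            weights = Enumerable.Repeat(1.0 / n, n).ToArray();
        }

        // 5) Estimate: the particle mean approximates the object position.
        return states.Average();
    }

    static double Gaussian(double mean, double std)
    {
        // Box-Muller transform.
        double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
        return mean + std * Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }

    static int Sample(double[] w)
    {
        // Draw one index with probability proportional to its weight.
        double r = rng.NextDouble(), c = 0;
        for (int i = 0; i < w.Length; i++) { c += w[i]; if (r <= c) return i; }
        return w.Length - 1;
    }

    static void Main() => Console.WriteLine(Run()); // converges near 3.0
}
```

The rest of the article breaks these steps down one by one.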
The figure below shows the two main steps of the particle filter.
The cycle of a particle filter. As opposed to the Kalman filter, the particle filter works with general probability densities. First, the filter predicts the next state from the provided state transition (e.g. motion model); then, if applicable, the noisy measurement information is incorporated in the correction phase. The cycle is repeated.
Distribution representation: Assume we have some probability density (which we do not currently know) that we want to represent (red). We can do that as a set of (weighted) samples. Each blue circle represents a sample taken from the distribution. The circle radius represents a particle's weight. It can be seen that a set of weighted particles can approximate a distribution.

Distribution representation (sample): The number of particles is the key parameter of a particle filter. For example, we would like to represent this mixture of two Gaussians using particles. The Monte Carlo approximation will be used. A good representation is achieved if enough particles are used. The samples used here are unweighted; a better representation can be achieved if samples are weighted.
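The Monte Carlo idea from the figure above can be tried directly: drawing unweighted samples from a mixture of two Gaussians and averaging them approximates the true mean better as the number of samples grows (the mixture parameters below are illustrative, not those of the figure):

```csharp
using System;

class MixtureSampling
{
    static readonly Random rng = new Random(1);

    // Draw n unweighted Monte Carlo samples from the mixture
    // 0.5 * N(-2, 1) + 0.5 * N(3, 1) and return their mean.
    // The true mean of this mixture is 0.5.
    public static double SampleMixtureMean(int n)
    {
        double sum = 0;
        for (int i = 0; i < n; i++)
        {
            // Pick one of the two mixture components, then sample from it.
            double mean = rng.NextDouble() < 0.5 ? -2.0 : 3.0;
            sum += mean + Gaussian();
        }
        return sum / n;
    }

    static double Gaussian()
    {
        // Box-Muller transform for a standard normal sample.
        double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }

    static void Main()
    {
        // More samples give a better approximation of the distribution.
        Console.WriteLine(SampleMixtureMean(100));
        Console.WriteLine(SampleMixtureMean(100000)); // much closer to 0.5
    }
}
```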
Particle: A single particle is also known as a sample (see previous figure), due to its role as a sample of the target (posterior) distribution. Each particle contains:
- state estimate - x(t)
- associated weight - w(t)
where particles are indexed by n and N is the total number of particles.

Steps: We start with the previous estimation. The first step is particle resampling and weight normalization (red). Then we apply the state transition (e.g. motion model) to each particle (green). These two steps make up the prediction. The update step consists of measurement and weight update. The measurement is taken using the observation model (gold). Finally, the particles' weights are updated using the observation model in order to give a posterior distribution (red). All steps form a single iteration, which is repeated.
Predict
Prediction is the first step. It includes particle selection according to weight (resample), the state transition, e.g. applying a motion model (drift), and applying noise (diffuse).
Resampling
1a) Resample (general): The first step is particle resampling. The particles are resampled according to their weights: particles with a higher weight have a greater chance of being selected. The same particle can be selected multiple times - repeated sampling. The resampling step can be compared to "natural selection", where the best samples survive.

1b) Resample (sample): We begin with the weighted particles from the previous iteration. Resampling draws new samples from the previous particle set. The resampling step is needed in order to prevent degeneracy. Degeneracy happens when a single sample has an extremely dominant weight. After the particles are resampled, the weight distribution is set to uniform.
1c) Resample (degeneracy): Blue dots represent the samples, while red lines represent the distribution they approximate. If we keep weighting particles according to their weight, weights will accumulate: good particles will continue to get more and more weight, while bad particles diminish. The end result is that the distribution is no longer well represented. To avoid this, the resampling step is needed.
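One common concrete realization of this step is systematic resampling. The sketch below (an assumption for illustration, not the library's internal code) draws N indices in proportion to the weights using a single random offset, so a dominant particle is selected repeatedly:

```csharp
using System;

class Resampling
{
    // Systematic resampling: draw weights.Length new particle indices
    // in proportion to the (normalized) weights.
    public static int[] Systematic(double[] weights, Random rng)
    {
        int n = weights.Length;
        var indices = new int[n];
        double step = 1.0 / n;
        double u = rng.NextDouble() * step;   // single random offset
        double c = weights[0];                // running cumulative weight
        int i = 0;
        for (int j = 0; j < n; j++)
        {
            while (u > c) { i++; c += weights[i]; }
            indices[j] = i;                   // good particles are picked repeatedly
            u += step;
        }
        return indices;
    }

    static void Main()
    {
        // The dominant particle (weight 0.7) survives multiple times,
        // like "natural selection" in the caption above.
        double[] w = { 0.1, 0.7, 0.1, 0.1 };
        int[] picked = Systematic(w, new Random(0));
        Console.WriteLine(string.Join(",", picked));
    }
}
```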
Drift and diffuse
1a) Drift and diffuse (general): First, drift is applied. As each particle has a different state, the predictions after applying drift (here, motion model F) will be different. Next, the noise w(t) is applied in order to cover some unlikely hypotheses and to differentiate particles that became identical in the resampling step.

1b) Drift and diffuse (sample): Here the drift operation is the application of the motion model. The motion model is applied to each particle individually; in this case, it is a 2D constant-velocity motion model.
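A minimal sketch of drift and diffuse for a 2D constant-velocity state (the class name and noise parameters are illustrative assumptions):

```csharp
using System;

// One particle's state under a 2D constant-velocity motion model.
class ConstantVelocityParticle
{
    public double X, Y, Vx, Vy;

    public void Drift(double dt)
    {
        // Drift: the position moves along the velocity vector.
        X += Vx * dt;
        Y += Vy * dt;
    }

    public void Diffuse(Random rng, double posNoise, double velNoise)
    {
        // Diffuse: noise covers unlikely hypotheses and separates
        // identical particles produced by resampling.
        X  += Gaussian(rng) * posNoise;
        Y  += Gaussian(rng) * posNoise;
        Vx += Gaussian(rng) * velNoise;
        Vy += Gaussian(rng) * velNoise;
    }

    static double Gaussian(Random rng)
    {
        // Box-Muller transform for a standard normal sample.
        double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }

    static void Main()
    {
        var p = new ConstantVelocityParticle { X = 0, Y = 0, Vx = 2, Vy = 1 };
        p.Drift(dt: 1.0);
        Console.WriteLine($"{p.X}, {p.Y}"); // prints "2, 1"
        p.Diffuse(new Random(0), posNoise: 0.5, velNoise: 0.1);
    }
}
```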
Update (Correct)
After the noisy measurement has been obtained, the update (correction) step begins. Each particle is evaluated, and each particle's weight is updated according to the obtained likelihood.
2a) Update (general): After the noisy measurement is obtained, the observation model (e.g. histogram similarity) is evaluated according to the state estimate of each particle. The observation model models the likelihood that the measurement z(t) would appear assuming we know the state x(t) of the object.

2b) Update (sample): The blue circles denote particles, which are state estimates propagated by the drift (e.g. motion model x(t)). Now, the state estimates must be turned into observations using the data from the image. The image data in this case is the histogram obtained from a bounding box defined by the particle state. This image shows two estimates, denoted by a blue and an orange box. The estimate defined by the blue box represents a much better guess as to where the girl in pink is.
2c) Update (sample): After calculating the histogram distance for each state, we can update each particle by the defined p(z|x) update. Particles whose states produce histograms similar to the reference are going to have bigger weights, while low weights are obtained if the histogram does not match the expected model. Applying the observation model to each of the particles, we now have an updated distribution! Notice how the high-weight particles are centered on the girl, while lower-weight particles are on the periphery.

2d) Update (state estimation): Our final step is to assess the state using the updated particles. The maximum-likelihood particle could be taken, but this approach generates noisy output. The most common approaches are the median or mean of the particles' states. If discrete labels are used, e.g. if one element of a state is used to label 3 people {1,2,3}, the median or mean are not applicable.
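The weight update and the mean-based state estimate can be sketched as follows. Mapping a distance to a likelihood via p(z|x) ~ exp(-lambda * d) is a common choice, but the formula and the lambda value here are illustrative assumptions, not the article's exact model:

```csharp
using System;
using System.Linq;

class UpdateStep
{
    // Turn observation-model distances (e.g. histogram distances) into
    // normalized posterior weights via p(z|x) ~ exp(-lambda * d).
    public static double[] UpdateWeights(double[] distances, double lambda)
    {
        double[] w = distances.Select(d => Math.Exp(-lambda * d)).ToArray();
        double sum = w.Sum();
        return w.Select(x => x / sum).ToArray();
    }

    // State estimate as the weighted mean of the particles' states.
    public static double WeightedMean(double[] states, double[] weights)
        => states.Zip(weights, (s, w) => s * w).Sum();

    static void Main()
    {
        double[] states = { 1.0, 2.0, 3.0 };
        double[] dist   = { 0.9, 0.1, 0.9 };  // state 2.0 matches the model best

        double[] w = UpdateWeights(dist, lambda: 5.0);
        Console.WriteLine(WeightedMean(states, w)); // 2.0 (by symmetry)
    }
}
```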
Now that you know the basics, it is time for real samples, but first a brief introduction to the implementation.
Implementation
The particle filter is implemented as a number of extension methods for IEnumerable<TParticle> or IList<TParticle>, where TParticle implements the IParticle interface. Methods are implemented as extensions defined on a particle collection (e.g. a particle filter). This kind of implementation provides:
- flexibility
- custom implementation of the specific parts
- extensibility
- short learning curve for the developer
The class diagram below shows the implementation outline:
Class diagram. The implementation of the generic particle filter does not introduce any new classes. Instead, it implements a single particle as a lightweight interface and provides extension methods for particle collection (e.g. particle filter) defined in the ParticleFilter class. |
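This design can be illustrated with a simplified stand-in. The interface and extension method below are illustrative assumptions sketching the idea, not the library's exact API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A simplified particle interface (stand-in for the library's IParticle).
interface IMyParticle
{
    double Weight { get; set; }
    void Drift();
    void Diffuse();
}

// The "filter" is just a particle collection; algorithm steps are
// extension methods defined on that collection.
static class MyParticleFilterExtensions
{
    public static void NormalizeWeights<TParticle>(this IList<TParticle> particles)
        where TParticle : IMyParticle
    {
        double sum = particles.Sum(p => p.Weight);
        foreach (var p in particles) p.Weight /= sum;
    }
}

class PointParticle : IMyParticle
{
    public double Weight { get; set; }
    public void Drift() { }
    public void Diffuse() { }
}

class Demo
{
    static void Main()
    {
        var filter = new List<PointParticle>
        {
            new PointParticle { Weight = 2 },
            new PointParticle { Weight = 6 },
        };
        filter.NormalizeWeights();            // reads like an operation on the filter
        Console.WriteLine(filter[0].Weight);  // prints 0.25
    }
}
```

Because no new collection class is introduced, any IList of particles can act as a filter, which is what gives the design its flexibility and extensibility.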
Assume we want to use a constant-velocity model and that the measurement model is the object's location (just as in the figures above). The usage for this sample is shown below:
Particle
The particle implements two main methods: Drift and Diffuse. As we do not incorporate any motion model, the Drift method is empty. We diffuse particles by randomizing the current particle position. The code shown below omits some unimportant implementation details.
class ColorParticle : IParticle
{
    public double Weight { get; set; }
    public PointF Position { get; set; }

    public void Drift()
    {
        // no motion model is used, so the particle state is left unchanged
    }

    public void Diffuse()
    {
        // randomize the current Position, e.g. by adding random noise
        // (implementation details omitted)
    }

    ...
}
Initialization
The most complex step is the initialization, which includes the particle implementation (drift and diffuse) and the creation of the initial particle set. The initialization procedure shown below scatters 1000 particles uniformly across an image.
List<ColorParticle> particleFilter = new List<ColorParticle>();
particleFilter.CreateParticles(1000,
ColorParticle.FromArray,
new ISampleableDistribution<double>[]
{
new UniformContinuousDistribution(0, imgSize.Width),
new UniformContinuousDistribution(0, imgSize.Height)
});
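Conceptually, this uniform initialization amounts to the following standalone sketch (an illustration of the idea, not of what CreateParticles does internally):

```csharp
using System;
using System.Linq;

class UniformInit
{
    // Scatter n particles uniformly over a width x height image and give
    // each the same weight 1/n (names here are illustrative).
    public static (double X, double Y, double Weight)[] Scatter(
        int n, double width, double height, Random rng)
        => Enumerable.Range(0, n)
            .Select(_ => (rng.NextDouble() * width,
                          rng.NextDouble() * height,
                          1.0 / n))
            .ToArray();

    static void Main()
    {
        var particles = Scatter(1000, 640, 480, new Random(0));
        Console.WriteLine(particles.Length); // prints 1000
    }
}
```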
Predict
The prediction step consists of only one method which, if there are no measurements, is executed repeatedly without the correction method. It incorporates the resample, drift and diffuse methods.
private void predict()
{
particleFilter = particleFilter.Predict();
}
Update (correct)
The update (correction) method updates the particles' weights according to their measurements.
private void update()
{
particleFilter.Update(measure);
}
private double measure(ColorParticle p)
{
    // evaluate the observation model for the particle, e.g. compare
    // the color histogram around p.Position with a reference histogram
    ...
}
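A self-contained sketch of what such a measure function could compute is shown below: it compares a normalized color histogram taken from the particle's patch against a reference histogram. The Bhattacharyya distance and the lambda value of 10 are common choices but are assumptions here, not the article's exact code:

```csharp
using System;
using System.Linq;

class ColorLikelihood
{
    // Bhattacharyya distance between two normalized histograms
    // (0 = identical, 1 = no overlap).
    public static double Bhattacharyya(double[] h1, double[] h2)
        => Math.Sqrt(Math.Max(0, 1 - h1.Zip(h2, (a, b) => Math.Sqrt(a * b)).Sum()));

    // Convert the distance into a particle weight: p(z|x) ~ exp(-lambda * d).
    public static double Measure(double[] patchHist, double[] referenceHist)
        => Math.Exp(-10 * Bhattacharyya(patchHist, referenceHist));

    static void Main()
    {
        double[] reference = { 0.7, 0.2, 0.1 };   // mostly red, like the pen
        double[] goodPatch = { 0.6, 0.3, 0.1 };
        double[] badPatch  = { 0.1, 0.2, 0.7 };

        // A matching patch yields a much larger weight.
        Console.WriteLine(Measure(goodPatch, reference));
        Console.WriteLine(Measure(badPatch, reference));
    }
}
```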
Conclusion
The discrete particle filter has been described for the object-tracking problem, along with its implementation in C#. The particle filter has many more uses, as it serves as a generic optimization method, which is shown in one of the included samples.
The source and sample code are part of the Accord.NET Extensions Framework, a framework that brings many advanced algorithms primarily for image processing, object detection and tracking, all packed as fluent extensions and simple and intuitive generics, so do not forget to take a peek :).
References
History
- 16 January 2014 - First version released