Kalman Filter
Combine noisy measurements with predictions of how the state changes to get a better estimate of the real state.
Tracking: inference over time.
Can simplify the problem by assuming linear dynamics and Gaussian noise. An unscented Kalman filter can deal with non-linear state transitions, but still assumes Gaussian noise.
Task: at each time point (and in real-time), re-compute the estimate of position.
Recursive estimation: decompose this into:
- The part that depends on the new observation
- The part that can be computed from previous history
Minimal example - running average:

$\bar{x}_t = \frac{t-1}{t}\,\bar{x}_{t-1} + \frac{1}{t}\,y_t$

Where $\bar{x}_t$ is the estimate after $t$ measurements and $y_t$ is the newest measurement.

This would be sensitive to noise/occlusion.
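The recursive decomposition can be sketched in a few lines; `running_average` is an illustrative helper, not a name from the notes:

```python
def running_average(measurements):
    """Recursive running average: each new measurement updates the
    estimate without storing the full history."""
    estimate = 0.0
    for t, z in enumerate(measurements, start=1):
        # new estimate = old estimate + (1/t) * (measurement - old estimate)
        estimate += (z - estimate) / t
    return estimate

print(running_average([1.0, 2.0, 3.0, 4.0]))  # 2.5
```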
Tracking:
Generalized model:
- Assume there are moving objects with underlying state $X_t$ (e.g. position + velocity)
- Assume there are measurements $Y_t$, some of which are functions of the state
- There is a clock: at each tick, the state changes and we get a new observation
- Data association: the measurements taken at tick $t$ tell us something about the object's state at tick $t$
- Prediction: the measurements $y_0, \dots, y_{t-1}$ tell us something about the object's state at tick $t$:
  - $P(X_t \mid Y_0 = y_0, \dots, Y_{t-1} = y_{t-1})$
  - Where $Y_i$ is a random variable representing the probability distribution for the $i$th measurement and $y_i$ is the observed measurement
- Correction: once $y_t$ is obtained, compute $P(X_t \mid Y_0 = y_0, \dots, Y_t = y_t)$
Simplifying assumptions:
- Only the immediate past matters; that is, only the previous state $X_{t-1}$:
  - $P(X_t \mid X_0, \dots, X_{t-1}) = P(X_t \mid X_{t-1})$
- Measurements depend only on the current state
- Previous measurements do not affect the current measurement:
  - $P(Y_t \mid X_t, Y_0, \dots, Y_{t-1}) = P(Y_t \mid X_t)$
1D Kalman Filter
Assumes the new state can be obtained by multiplying the old state by a constant, plus Gaussian noise. In other words:

$x_t = d\,x_{t-1} + \xi_t, \qquad \xi_t \sim N(0, \sigma_d^2)$

Measurements are a scaled version of the state, plus Gaussian noise:

$y_t = m\,x_t + \eta_t, \qquad \eta_t \sim N(0, \sigma_m^2)$

Prediction (before the new measurement):

$\bar{x}_t^- = d\,\bar{x}_{t-1}^+, \qquad (\sigma_t^-)^2 = \sigma_d^2 + d^2 (\sigma_{t-1}^+)^2$

Once a measurement arrives, this can be corrected:

$\bar{x}_t^+ = \bar{x}_t^- + K_t (y_t - m\,\bar{x}_t^-), \qquad K_t = \frac{m (\sigma_t^-)^2}{\sigma_m^2 + m^2 (\sigma_t^-)^2}, \qquad (\sigma_t^+)^2 = (1 - K_t m)(\sigma_t^-)^2$

Note: the corrected estimate lies between the prediction and the measurement (weighted by their variances), and the corrected variance is never larger than the predicted variance.
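The 1D predict/correct cycle can be sketched as follows, assuming a scalar state multiplied by a constant `d` each tick and a direct (unscaled) measurement of the state; the parameter names (`d`, `q`, `r`) are illustrative, not from the notes:

```python
def kalman_1d(measurements, x0, p0, d=1.0, q=0.1, r=1.0):
    """1D Kalman filter sketch.
    x0, p0: initial state mean and variance
    d: state-transition constant; q: process-noise variance;
    r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # prediction: propagate the mean and variance through the dynamics
        x = d * x
        p = d * p * d + q
        # correction: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With a constant true state and a large initial variance, the estimate snaps toward the measurements on the first tick and then settles as the variance shrinks.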
Smoothing: if not running the filter in real time, the algorithm can be run forwards and backwards and the mean taken between the two predictions.
Kalman in Python
g-h filter:
```python
def g_h_filter(data, x0, dx, g, h, dt=1.):
    """
    Performs g-h filter on 1 state variable with a fixed g and h.

    'data' contains the data to be filtered.
    'x0' is the initial value for our state variable.
    'dx' is the initial change rate for our state variable (assumes linear rate of change).
    'g' is the g-h's g scale factor: the weight given to the measurement in the
        estimate. Should be higher for less noisy measurements.
    'h' is the g-h's h scale factor: a larger h responds more quickly to change,
        but is more vulnerable to noise/outliers.
    """
    x_estimate = x0
    results = []
    for z in data:
        # prediction step: project the state forward at the current rate of change
        x_prediction = x_estimate + (dx * dt)
        # update step
        residual = z - x_prediction  # delta between measurement and prediction
        # update rate of change using the residual;
        # h determines how quickly the rate of change changes
        dx = dx + h * residual / dt
        # update estimate as a weighted average of prediction and measurement;
        # g determines the weight given to the measurement
        x_estimate = x_prediction + g * residual
        results.append(x_estimate)
    return results
```
Example: a system where position is being measured and the object has constant velocity.

The distance and velocity can be represented as Gaussian distributions.

Sum of two Gaussians $N(\mu_1, \sigma_1^2)$ and $N(\mu_2, \sigma_2^2)$:

$\mu = \mu_1 + \mu_2, \qquad \sigma^2 = \sigma_1^2 + \sigma_2^2$

Hence, the prediction can be represented as the sum of the distributions of the previous position and the predicted velocity (displacement over one tick).

Product of two Gaussians:

$\mu = \frac{\sigma_2^2 \mu_1 + \sigma_1^2 \mu_2}{\sigma_1^2 + \sigma_2^2}, \qquad \sigma^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}$

The update step returns the estimated position as the product of the distributions of the new measurement and the current estimated position.
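These two Gaussian operations give a complete predict/update cycle; the `Gaussian` tuple and the numbers below are illustrative, not from the notes:

```python
from collections import namedtuple

# Represent a 1D Gaussian by its mean and variance (illustrative helper).
Gaussian = namedtuple('Gaussian', ['mean', 'var'])

def predict(pos, movement):
    # sum of two Gaussians: means and variances both add
    return Gaussian(pos.mean + movement.mean, pos.var + movement.var)

def update(prior, measurement):
    # product of two Gaussians: variance-weighted mean, smaller variance
    mean = (prior.var * measurement.mean + measurement.var * prior.mean) \
        / (prior.var + measurement.var)
    var = (prior.var * measurement.var) / (prior.var + measurement.var)
    return Gaussian(mean, var)

pos = Gaussian(10.0, 0.2)       # current estimated position
velocity = Gaussian(1.0, 0.1)   # predicted movement over one tick
prior = predict(pos, velocity)  # predicted position: mean 11.0, var 0.3
z = Gaussian(11.4, 0.1)         # new noisy measurement
estimate = update(prior, z)     # estimate pulled toward z, variance shrinks
```

Note that the updated variance is always smaller than either input variance: combining two sources of information can only increase certainty.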
Particle Filter
The particle filter allows multiple positions to be predicted, and works with multi-modal and non-Gaussian distributions.
Three probability distributions:
- Prior density: the distribution of the previous state
- Process density: kinematic model - prediction of the next state
- Observation density: how likely the current observation is given the state
The particle filter combines these into a single probability distribution, the state density
Comparisons:
- Kalman filter: linear transitions and Gaussian distributions. Breaks down if there is too much occlusion/clutter
- Unscented Kalman filter: non-linear systems, but still assumes Gaussian distribution
- Particle filter: predicts multiple states/positions with non-Gaussian distribution. Much slower
Particle Filter in Python
Algorithm:
- Generate a bunch of particles randomly
- Each has a weight proportional to the probability that it represents the state of the real system
- The use of Monte-Carlo simulation means that particles can be generated which follow any probability distribution, not just the Gaussian
- Predict the next state of the particles based on your predictions of how a real system would behave (e.g. when you send a command to change the state)
- The changes must be noisy to match the real system
- Update the weighting of the particles based on a new measurement or measurements (e.g. multiple objects being tracked)
- e.g. for each measurement, find the distance between each particle and the measurement, and scale each particle's weight by how likely that residual is under the measurement-noise model. Then, normalize the weights so they sum to one
- Resample, discarding particles that are now classed as highly improbable, and generate more particles by duplicating some of the more likely ones
- The noise added during the predict stage means the duplicate and original will separate
- If only one object is being tracked, the weighted mean of the particles can be used as an estimate of its real position
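The steps above can be sketched as a minimal particle filter for a single object moving in 1D; all parameters (particle count, noise levels, initial range) are illustrative choices, not values from the notes:

```python
import math
import random

def gaussian_pdf(residual, sigma):
    """Likelihood of a residual under zero-mean Gaussian sensor noise."""
    return math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def particle_filter(measurements, n=1000, init_range=(0.0, 20.0),
                    velocity=1.0, process_noise=0.5, sensor_noise=1.0):
    # 1. generate particles randomly (uniform here; any distribution works)
    particles = [random.uniform(*init_range) for _ in range(n)]
    estimates = []
    for z in measurements:
        # 2. predict: move each particle per the motion model, adding noise
        #    so duplicated particles can separate after resampling
        particles = [p + velocity + random.gauss(0, process_noise)
                     for p in particles]
        # 3. update: weight each particle by how well it explains the measurement
        weights = [gaussian_pdf(z - p, sensor_noise) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]  # normalize to sum to one
        # 4. resample: duplicate likely particles, discard improbable ones
        particles = random.choices(particles, weights=weights, k=n)
        # 5. estimate: mean of the (now equally weighted) particles
        estimates.append(sum(particles) / n)
    return estimates
```

This resamples on every tick for simplicity; practical implementations usually resample only when the effective number of particles drops below a threshold, to avoid throwing away diversity unnecessarily.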