Kalman Filter
Linear Gaussian Systems
The Kalman filter implements belief computation for continuous states; it is not applicable to discrete or hybrid state spaces. The filter represents the belief by its moments parameterization: at time $t$, the belief is represented by the mean $\mu_t$ and the covariance $\Sigma_t$. Posteriors are Gaussian if the following three properties hold in addition to the Markov assumption.
1. Linear Transition
The state transition probability must be a linear function in its arguments with added Gaussian noise:

$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t \tag{1}$$
The state $x_t$ and the control $u_t$ are vectors. In our notation, they are column vectors.
Here $A_t$ and $B_t$ are matrices: $A_t$ is of size $n \times n$ and $B_t$ is of size $n \times m$, where $n$ is the dimension of the state vector and $m$ is the dimension of the control vector. The random variable $\varepsilon_t$ has the same dimension as the state vector; its mean is zero and its covariance will be denoted $R_t$. It models the uncertainty introduced by the state transition. The mean of the posterior state is given by the deterministic part of equation 1, $A_t x_{t-1} + B_t u_t$, and its covariance by $R_t$.
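As an illustration of such a transition model (a sketch with NumPy, assuming a hypothetical constant-velocity state $[\text{position}, \text{velocity}]^T$ driven by an acceleration control; the matrices and noise levels here are made up for this example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constant-velocity model: state is [position, velocity].
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition matrix (n x n)
B = np.array([[0.5],
              [1.0]])             # control input matrix (n x m), control = acceleration
R = np.diag([0.01, 0.01])         # covariance of the transition noise eps_t

x_prev = np.array([[0.0], [1.0]])  # previous state
u = np.array([[0.2]])              # control input

# x_t = A x_{t-1} + B u_t + eps_t, with eps_t ~ N(0, R)
eps = rng.multivariate_normal(np.zeros(2), R).reshape(2, 1)
x_next = A @ x_prev + B @ u + eps
```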
2. Linear Measurement
The measurement probability must also be linear in its arguments, with added Gaussian noise:

$$z_t = C_t x_t + \delta_t \tag{2}$$
Here $C_t$ is a matrix of size $k \times n$, where $k$ is the dimension of the measurement vector. The random variable $\delta_t$ describes the measurement noise; its distribution is a multivariate Gaussian with zero mean and covariance $Q_t$.
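Sampling from the measurement model is analogous (again a sketch with made-up numbers, observing only the first component of a two-dimensional state):

```python
import numpy as np

rng = np.random.default_rng(0)

C = np.array([[1.0, 0.0]])    # observation matrix (k x n): we see position only
Q = np.array([[0.25]])        # measurement noise covariance

x = np.array([[2.0], [1.0]])  # true state: position 2, velocity 1

# z_t = C x_t + delta_t, with delta_t ~ N(0, Q)
delta = rng.multivariate_normal(np.zeros(1), Q).reshape(1, 1)
z = C @ x + delta
```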
3. Normal Belief
The initial belief must be normally distributed, with mean $\mu_0$ and covariance $\Sigma_0$.
Kalman Filter Algorithm
Given the arguments $\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, and $z_t$, we have the following update rules:

$$\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t \qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t \tag{3a}$$
The predicted belief $\bar{\mu}_t$ and $\bar{\Sigma}_t$ are calculated to represent the belief $\overline{bel}(x_t)$ one time step later, but before incorporating the measurement $z_t$.
Before we perform the measurement update, we need to compute the Kalman gain $K_t$ from equation 3b, which specifies the degree to which the measurement is incorporated into the new state estimate:

$$K_t = \bar{\Sigma}_t C_t^T \left( C_t \bar{\Sigma}_t C_t^T + Q_t \right)^{-1} \tag{3b}$$

Then we use the gain to obtain the new state estimate and covariance:

$$\mu_t = \bar{\mu}_t + K_t \left( z_t - C_t \bar{\mu}_t \right) \qquad \Sigma_t = \left( I - K_t C_t \right) \bar{\Sigma}_t \tag{3c}$$
The key concept here is the innovation, which is the difference between the actual measurement $z_t$ and the expected measurement $C_t \bar{\mu}_t$.
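To build intuition for the gain, consider the hypothetical one-dimensional case with $C = 1$, where the gain reduces to $\bar{\Sigma} / (\bar{\Sigma} + Q)$: a precise sensor (small $Q$) pushes the gain toward 1 so the estimate follows the measurement, while a noisy sensor (large $Q$) pushes it toward 0 so the estimate keeps the prediction. The numbers below are made up for illustration:

```python
def scalar_gain(pred_var, meas_var):
    """Kalman gain for a 1-D state observed directly (C = 1)."""
    return pred_var / (pred_var + meas_var)

# Trusted sensor: gain near 1, the estimate follows the measurement.
k_trust = scalar_gain(pred_var=4.0, meas_var=0.1)

# Noisy sensor: gain near 0, the estimate sticks with the prediction.
k_doubt = scalar_gain(pred_var=4.0, meas_var=100.0)

mu_bar, z = 0.0, 2.0                        # predicted mean and measurement
mu_trust = mu_bar + k_trust * (z - mu_bar)  # ~1.95, close to z
mu_doubt = mu_bar + k_doubt * (z - mu_bar)  # ~0.08, close to mu_bar
```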
Code Example
For simplicity's sake, I will omit the time dependence of the transformation matrices. Let's define three matrices `A`, `B`, and `C` using `numpy`. Note that `A` is the state transition model, `B` is the control input model, and `C` is the observation model.
```python
import numpy as np
from numpy import dot, identity, linalg

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
```
If we were to express them in matrix form, they would look like the following:

$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
Suppose our robot is at coordinate $(0, 0)$ initially and we don't apply any external control to it. Let's denote `x` to be $\mu_0$, `x_cov` to be $\Sigma_0$, `u` to be $u_t$, and `u_cov` to be $R_t$.
```python
x = np.array([[0.0], [0.0]])
x_cov = np.array([[1000.0, 0.0], [0.0, 1000.0]])
u = np.array([[0.0], [0.0]])
u_cov = np.array([[0.0, 0.0], [0.0, 0.0]])
```
Let `Z` be the sequence of measurements $z_{1:3}$ and `z_cov` be $Q_t$.
```python
Z = [np.array([[1]]), np.array([[2]]), np.array([[3]])]
z_cov = np.array([[1]])
```
Now we can put everything together and construct a Kalman filter algorithm.
```python
def predict_new_belief(x, x_cov, u, u_cov):
    # Prediction step: project the state and covariance forward in time.
    x = dot(A, x) + dot(B, u)
    x_cov = dot(dot(A, x_cov), A.T) + u_cov
    return x, x_cov

def incorporate_measurement(x, x_cov, z, z_cov):
    # Measurement update: weigh the innovation by the Kalman gain.
    S = dot(dot(C, x_cov), C.T) + z_cov       # innovation covariance
    K = dot(dot(x_cov, C.T), linalg.inv(S))   # Kalman gain
    x = x + dot(K, z - dot(C, x))             # correct with the innovation
    x_cov = dot(identity(2) - dot(K, C), x_cov)
    return x, x_cov

for i in range(len(Z)):
    x, x_cov = predict_new_belief(x, x_cov, u, u_cov)
    x, x_cov = incorporate_measurement(x, x_cov, Z[i], z_cov)
```
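Restating the pieces above as one self-contained script (so it can be run directly), the filter quickly locks onto the measurement sequence 1, 2, 3: the position estimate ends up near 3 and, because `A` adds the second state component to the first at every step, the velocity estimate settles near 1, while the initially huge covariance collapses:

```python
import numpy as np
from numpy import dot, identity, linalg

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

def predict_new_belief(x, x_cov, u, u_cov):
    x = dot(A, x) + dot(B, u)
    x_cov = dot(dot(A, x_cov), A.T) + u_cov
    return x, x_cov

def incorporate_measurement(x, x_cov, z, z_cov):
    S = dot(dot(C, x_cov), C.T) + z_cov
    K = dot(dot(x_cov, C.T), linalg.inv(S))
    x = x + dot(K, z - dot(C, x))
    x_cov = dot(identity(2) - dot(K, C), x_cov)
    return x, x_cov

# A vague prior (large covariance), no control, unit measurement noise.
x = np.array([[0.0], [0.0]])
x_cov = np.array([[1000.0, 0.0], [0.0, 1000.0]])
u = np.array([[0.0], [0.0]])
u_cov = np.zeros((2, 2))
Z = [np.array([[1]]), np.array([[2]]), np.array([[3]])]
z_cov = np.array([[1]])

for z in Z:
    x, x_cov = predict_new_belief(x, x_cov, u, u_cov)
    x, x_cov = incorporate_measurement(x, x_cov, z, z_cov)

print(x)  # position estimate close to 3, velocity estimate close to 1
```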