Bayes Filters

Algorithm

The general Bayesian filter algorithm can be summarized as follows.

def bayes_filter(bel[x[t-1]], u[t], z[t]):
    for x[t] in X:
        prediction[x[t]] = control_update(u[t], x[t])
        bel[x[t]] = measurement_update(z[t], x[t], prediction[x[t]])

    return bel[x[t]]
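For a discrete state space the integral becomes a sum, and the algorithm above can be sketched in a few lines of Python. This is a minimal sketch: the dictionary-based model representation and all names here are illustrative choices, not from the original text.

```python
def bayes_filter(bel, u, z, transition, measurement, states):
    """One Bayes filter step over a discrete state space.

    bel:         dict mapping state -> prior probability, bel(x[t-1])
    transition:  dict mapping (x_t, u, x_prev) -> p(x_t | u, x_prev)
    measurement: dict mapping (z, x_t) -> p(z | x_t)
    """
    # Control update (prediction): marginalize out the previous state.
    prediction = {
        x_t: sum(transition[(x_t, u, x_prev)] * bel[x_prev] for x_prev in states)
        for x_t in states
    }
    # Measurement update: weight by the observation likelihood, then normalize.
    unnormalized = {x_t: measurement[(z, x_t)] * prediction[x_t] for x_t in states}
    eta = 1.0 / sum(unnormalized.values())
    return {x_t: eta * p for x_t, p in unnormalized.items()}
```

Calling this once per time step with the latest control and measurement carries the belief forward.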

The control update is calculated by the following equation.

\text{control\_update($u_{t}$, $x_{t}$)} = \overline{bel}(x_{t}) = \int p(x_{t} \mid u_{t}, x_{t-1})\; bel(x_{t-1})\; dx_{t-1}

The measurement update is calculated by the following equation.

\text{measurement\_update($z_{t}$, $x_{t}$, $\overline{bel}(x_{t})$)} = bel(x_{t}) = \frac{p(z_{t} \mid x_{t})\;\overline{bel}(x_{t})} {\int p(z_{t} \mid x^\prime_{t})\; \overline{bel}(x^\prime_{t})\; dx^\prime_{t}}

Numerical Example

Belief

Let's say we have a robot that can estimate the state of a door using its camera. The state space of the door is binary, i.e. the door is either open or closed. We begin with a uniform belief, i.e. we assign equal probability to the door being open or closed.

bel(x_{0} = \text{open}) = bel(x_{0} = \text{closed}) = 0.5

Measurement

The robot's sensor is noisy, so we need to characterize it by conditional probabilities. The 0.6/0.4 pair follows from the 40% error rate below; the 0.8/0.2 pair for the closed door uses the standard values for this classic example.

p(z_{t} = \text{sense\_open} \mid x_{t} = \text{open}) = 0.6
p(z_{t} = \text{sense\_closed} \mid x_{t} = \text{open}) = 0.4
p(z_{t} = \text{sense\_open} \mid x_{t} = \text{closed}) = 0.2
p(z_{t} = \text{sense\_closed} \mid x_{t} = \text{closed}) = 0.8

The robot is relatively reliable at detecting a closed door, which makes sense: it is much easier to develop an algorithm that detects a closed door. On the other hand, there is a 40% chance of a wrong detection when the door is actually open.
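In code, this sensor model is just a likelihood table. As above, the closed-door row (0.8/0.2) uses the standard values for this classic example; the names are illustrative.

```python
# Sensor model p(z | x) for the door, stored as a lookup table.
P_MEASUREMENT = {
    ("sense_open", "open"): 0.6,
    ("sense_closed", "open"): 0.4,   # the 40% wrong-detection case
    ("sense_open", "closed"): 0.2,   # standard values for this example
    ("sense_closed", "closed"): 0.8,
}

def likelihood(z, x):
    """p(z | x): probability of measurement z given true door state x."""
    return P_MEASUREMENT[(z, x)]
```

Note that each row conditioned on a true state sums to 1, as any valid measurement model must.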

Control

The robot's action is also probabilistic: we cannot assume that when the robot tries to open the door, the door always ends up open. We again use conditional probabilities to describe the action's outcome. For the push action, we use the standard values for this example:

p(x_{t} = \text{open} \mid u_{t} = \text{push}, x_{t-1} = \text{open}) = 1
p(x_{t} = \text{closed} \mid u_{t} = \text{push}, x_{t-1} = \text{open}) = 0
p(x_{t} = \text{open} \mid u_{t} = \text{push}, x_{t-1} = \text{closed}) = 0.8
p(x_{t} = \text{closed} \mid u_{t} = \text{push}, x_{t-1} = \text{closed}) = 0.2

Another possibility is that the robot does nothing and performs the null control, which leaves the state unchanged.

p(x_{t} = \text{open} \mid u_{t} = \text{null}, x_{t-1} = \text{open}) = 1
p(x_{t} = \text{closed} \mid u_{t} = \text{null}, x_{t-1} = \text{open}) = 0
p(x_{t} = \text{open} \mid u_{t} = \text{null}, x_{t-1} = \text{closed}) = 0
p(x_{t} = \text{closed} \mid u_{t} = \text{null}, x_{t-1} = \text{closed}) = 1

Calculate New Belief

Our state space $\{\text{open}, \text{closed}\}$ and our control space $\{\text{push}, \text{null}\}$ are both discrete, so we can use a summation instead of an integral to calculate the control update. Let's say the robot first performs a null action and senses an open door, i.e. $u_{1} = \text{null}$ and $z_{1} = \text{sense\_open}$.

\overline{bel}(x_{1}) = \sum_{x_{0}} p(x_{1} \mid u_{1}, x_{0})\; bel(x_{0})

That is equivalent to

\overline{bel}(x_{1} = \text{open}) = bel(x_{0} = \text{open}) = 0.5
\overline{bel}(x_{1} = \text{closed}) = bel(x_{0} = \text{closed}) = 0.5

because the null control leaves the state unchanged.

Let's first calculate our normalizer factor, which sums the measurement likelihood over both states.

  • Robot sensed the door is open and it is actually open: $p(z_{1} = \text{sense\_open} \mid x_{1} = \text{open})\, \overline{bel}(x_{1} = \text{open}) = 0.6 \times 0.5 = 0.3$

  • Robot sensed the door is open but it is actually closed: $p(z_{1} = \text{sense\_open} \mid x_{1} = \text{closed})\, \overline{bel}(x_{1} = \text{closed}) = 0.2 \times 0.5 = 0.1$

The normalizer is therefore $\eta = (0.3 + 0.1)^{-1} = 2.5$. Then our new belief will become

bel(x_{1} = \text{open}) = 2.5 \times 0.3 = 0.75
bel(x_{1} = \text{closed}) = 2.5 \times 0.1 = 0.25
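This first step can be verified with a few lines of Python. The 0.6 and 0.2 likelihoods are the sensor values used in this example; the variable names are illustrative.

```python
# First filter step: null control keeps the uniform prior, then the
# measurement z1 = sense_open reweights it.
prior = {"open": 0.5, "closed": 0.5}
likelihood_sense_open = {"open": 0.6, "closed": 0.2}  # p(sense_open | x)

# Control update: the null action leaves the belief unchanged.
prediction = dict(prior)

# Measurement update: weight by likelihood, then normalize.
unnormalized = {x: likelihood_sense_open[x] * prediction[x] for x in prediction}
eta = 1.0 / sum(unnormalized.values())   # 1 / (0.3 + 0.1) = 2.5
belief = {x: eta * p for x, p in unnormalized.items()}
print(belief)  # {'open': 0.75, 'closed': 0.25}
```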

Now for the second time step, if we apply $u_{2} = \text{push}$ and observe $z_{2} = \text{sense\_open}$, then we get the following control update.

\overline{bel}(x_{2} = \text{open}) = 1 \times 0.75 + 0.8 \times 0.25 = 0.95
\overline{bel}(x_{2} = \text{closed}) = 0 \times 0.75 + 0.2 \times 0.25 = 0.05

Then we apply the measurement update again.

bel(x_{2} = \text{open}) = \eta \times 0.6 \times 0.95 = \frac{0.57}{0.58} \approx 0.983
bel(x_{2} = \text{closed}) = \eta \times 0.2 \times 0.05 = \frac{0.01}{0.58} \approx 0.017

It seems impressive that we have 98.3% confidence that the door is open after the robot sensed an open door twice and performed one push action. However, if this is a mission-critical scenario, a 1.7% chance of being wrong is still very significant.
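As a sketch, the whole two-step example can be reproduced end to end. The model tables below use the values assumed throughout this example.

```python
# Reproduce the two-step door example: (u1 = null, z1 = sense_open),
# then (u2 = push, z2 = sense_open).
STATES = ("open", "closed")
P_TRANS = {  # p(x_t | u_t, x_{t-1}), keyed by (x_t, u, x_prev)
    ("open", "push", "open"): 1.0, ("closed", "push", "open"): 0.0,
    ("open", "push", "closed"): 0.8, ("closed", "push", "closed"): 0.2,
    ("open", "null", "open"): 1.0, ("closed", "null", "open"): 0.0,
    ("open", "null", "closed"): 0.0, ("closed", "null", "closed"): 1.0,
}
P_MEAS = {  # p(z_t | x_t), keyed by (z, x)
    ("sense_open", "open"): 0.6, ("sense_closed", "open"): 0.4,
    ("sense_open", "closed"): 0.2, ("sense_closed", "closed"): 0.8,
}

def step(bel, u, z):
    """One Bayes filter step: prediction, then normalized correction."""
    pred = {x: sum(P_TRANS[(x, u, xp)] * bel[xp] for xp in STATES) for x in STATES}
    unnorm = {x: P_MEAS[(z, x)] * pred[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}            # uniform prior
bel = step(bel, "null", "sense_open")         # -> open: 0.75
bel = step(bel, "push", "sense_open")         # -> open: ~0.983
print(round(bel["open"], 3), round(bel["closed"], 3))  # 0.983 0.017
```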

Mathematical Derivation

Applying Bayes' rule to the posterior $bel(x_{t}) = p(x_{t} \mid z_{1:t}, u_{1:t})$ gives

bel(x_{t}) = \eta\, p(z_{t} \mid x_{t}, z_{1:t-1}, u_{1:t})\, p(x_{t} \mid z_{1:t-1}, u_{1:t})

Under the Markov assumption, the current measurement depends only on the current state, so $p(z_{t} \mid x_{t}, z_{1:t-1}, u_{1:t}) = p(z_{t} \mid x_{t})$, and the second factor is exactly the prediction $\overline{bel}(x_{t})$. Hence the new belief is

bel(x_{t}) = \eta\, p(z_{t} \mid x_{t})\, \overline{bel}(x_{t})

where $\eta$ is the normalizer from the measurement update equation above.

The Markov Assumption

The Markov assumption states that past and future data are independent given the current state. In practice this assumption is often violated, for example by:

  • Unmodeled dynamics in the environment that are not included in the state, e.g. moving people

  • Inaccuracies in the probabilistic models and distributions

  • Approximation errors

  • Software variables in the robot control software

There are many algorithms and techniques derived from the Bayes filter. Each of them relies on different assumptions regarding the measurement and state transition probabilities and the initial belief. These assumptions give rise to different computational characteristics and trade-offs, such as:

  • Computational efficiency

  • Accuracy of approximation

  • Ease of implementation
