Histogram Filter


Histogram filters decompose the state space into finitely many regions and represent the cumulative posterior for each region by a single probability value. When applied to finite spaces, they are called discrete Bayes filters; when applied to continuous spaces, they are known as histogram filters.

Discrete Bayes Filter

We use the discrete Bayes filter when the problem has a finite state space, where the random variable $X_t$ can take on finitely many values, such as a door being open or closed. Let the variables $x_i$ and $x_k$ denote individual states, of which there may be only finitely many. The belief at time $t$ is an assignment of a probability to each state $x_k$, denoted $p_{k,t}$.

$$
\tag{1}
\begin{aligned}
&\text{for all $k$ do:} \\
&\qquad \overline{p}_{k,t} = \sum_i p(X_t = x_k \mid u_t, X_{t-1} = x_i)\, p_{i,t-1} \\
&\qquad p_{k,t} = \eta\, p(z_t \mid X_t = x_k)\, \overline{p}_{k,t} \\
&\text{endfor and return $p_{k,t}$}
\end{aligned}
$$

The input to the algorithm is the discrete probability distribution $\{p_{k,t-1}\}$, along with the most recent control $u_t$ and measurement $z_t$. The first step calculates the prediction, the belief for the new state based on the control alone. The second step incorporates the measurement to obtain the updated belief. This is identical to the Bayes filter except that the integral has been replaced by a discrete sum.
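
As a concrete illustration, here is a minimal sketch of the update in equation (1) for a two-state door example; the particular transition and measurement probabilities and the helper names are illustrative choices, not something prescribed by this page.

```python
import numpy as np

# Two-state door example: index 0 = open, 1 = closed (illustrative numbers).
# transition[u][i, k] = p(X_t = x_k | u_t = u, X_{t-1} = x_i)
transition = {
    "push":    np.array([[1.0, 0.0],    # an open door stays open
                         [0.8, 0.2]]),  # a closed door opens with prob. 0.8
    "nothing": np.array([[1.0, 0.0],
                         [0.0, 1.0]]),
}
# measurement[z, k] = p(z_t = z | X_t = x_k); z: 0 = sense_open, 1 = sense_closed
measurement = np.array([[0.6, 0.2],
                        [0.4, 0.8]])

def discrete_bayes_update(p_prev, u, z):
    """One step of the discrete Bayes filter in equation (1)."""
    p_bar = transition[u].T @ p_prev   # prediction: sum over previous states i
    p = measurement[z] * p_bar         # correction: weight by measurement likelihood
    return p / p.sum()                 # eta: normalize so the belief sums to 1

belief = np.array([0.5, 0.5])                      # uniform prior over {open, closed}
belief = discrete_bayes_update(belief, "push", 0)  # pushed the door, sensed it open
print(belief)                                      # belief concentrates on "open"
```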

Histogram Filter

We want to use the discrete Bayes filter as an approximate inference tool for continuous state spaces; this brings us to the histogram filter. The histogram filter decomposes a continuous state space into finitely many bins or regions:

$$
\text{dom}(X_t) = x_{1,t} \cup x_{2,t} \cup \dots \cup x_{K,t}
$$

Here $X_t$ is the familiar random variable describing the state of the robot at time $t$. The function $\text{dom}(X_t)$ denotes the state space, which is the universe of possible values that $X_t$ might assume. A straightforward decomposition of a continuous state space is a multi-dimensional grid, where each $x_{k,t}$ is a grid cell.

If the state space is truly discrete, the conditional probabilities are well-defined and the algorithm can be implemented as in equation (1). In a continuous state space, one is usually given the densities $p(x_t \mid u_t, x_{t-1})$ and $p(z_t \mid x_t)$, which are defined for individual states and not for regions of the state space. We therefore approximate each region by its mean state:

$$
\hat{x}_{k,t} = \frac{1}{\lvert x_{k,t} \rvert} \int_{x_{k,t}} x_t \, dx_t
$$

Here $\lvert x_{k,t} \rvert$ denotes the volume of the region $x_{k,t}$. Evaluating the densities at these mean states gives the approximations $p(z_t \mid x_{k,t}) \approx p(z_t \mid \hat{x}_{k,t})$ and $p(x_{k,t} \mid u_t, x_{i,t-1}) \approx \eta \, \lvert x_{k,t} \rvert \, p(\hat{x}_{k,t} \mid u_t, \hat{x}_{i,t-1})$, which lead to the histogram filter update in equation (2) below.
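
For a uniform grid, the mean state of each cell is simply its center, so the integral above reduces to a midpoint computation. The sketch below assumes a one-dimensional state space and an arbitrary number of bins; all values are illustrative.

```python
import numpy as np

# Hypothetical 1D state space [0, 10) decomposed into K uniform grid cells.
x_min, x_max, K = 0.0, 10.0, 20
edges = np.linspace(x_min, x_max, K + 1)   # boundaries of the regions x_{k,t}
widths = np.diff(edges)                    # region sizes |x_{k,t}|
x_hat = 0.5 * (edges[:-1] + edges[1:])     # mean (center) state of each cell

print(widths[0], x_hat[:3])                # 0.5, [0.25 0.75 1.25]
```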

Decomposition Techniques

Decomposition techniques for continuous state spaces come in two basic flavors: static and dynamic. Static techniques rely on a fixed decomposition that is chosen in advance. Dynamic techniques adapt the decomposition to the specific shape of the posterior distribution. Static techniques are usually easier to implement, but they can be wasteful with regard to computational resources.

Substituting these approximations into equation (1) yields the histogram filter:

$$
\tag{2}
\begin{aligned}
&\text{for all $k$ do:} \\
&\qquad \overline{p}_{k,t} = \sum_i \eta \, \lvert x_{k,t} \rvert \, p(\hat{x}_{k,t} \mid u_t, \hat{x}_{i,t-1})\, p_{i,t-1} \\
&\qquad p_{k,t} = \eta\, p(z_t \mid \hat{x}_{k,t})\, \overline{p}_{k,t} \\
&\text{endfor and return $p_{k,t}$}
\end{aligned}
$$
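
Putting the pieces together, here is a minimal sketch of the histogram filter update in equation (2) on a one-dimensional grid. It assumes, purely for illustration, a Gaussian motion density $p(x_t \mid u_t, x_{t-1}) = \mathcal{N}(x_{t-1} + u_t, \sigma_{\text{motion}}^2)$ and a Gaussian measurement density $p(z_t \mid x_t) = \mathcal{N}(x_t, \sigma_{\text{meas}}^2)$, both evaluated at the cell centers; the function names and parameters are my own.

```python
import numpy as np

# Same illustrative 1D grid as above.
x_min, x_max, K = 0.0, 10.0, 20
edges = np.linspace(x_min, x_max, K + 1)
widths = np.diff(edges)
x_hat = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, mean, std):
    """Gaussian density, standing in for the motion and measurement models."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def histogram_filter_update(p_prev, u, z, motion_std=0.5, meas_std=1.0):
    """One step of the histogram filter in equation (2)."""
    # Prediction: p_bar[k] ~ |x_k| * sum_i p(x_hat_k | u, x_hat_i) * p_prev[i]
    motion = gauss(x_hat[:, None], x_hat[None, :] + u, motion_std)  # motion[k, i]
    p_bar = widths * (motion @ p_prev)
    p_bar /= p_bar.sum()                      # eta from the transition approximation
    # Correction: weight each cell by the measurement likelihood at its center.
    p = gauss(z, x_hat, meas_std) * p_bar
    return p / p.sum()                        # eta from the measurement update

belief = np.full(K, 1.0 / K)                            # uniform prior over the grid
belief = histogram_filter_update(belief, u=1.0, z=3.2)  # move right by ~1, observe z = 3.2
print(x_hat[belief.argmax()])                           # cell center with the highest belief
```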