Robot Perception

Environment measurement models are the second domain-specific model in probabilistic robotics, alongside motion models. A measurement model describes the process by which sensor measurements are generated in the physical world. By explicitly modeling the noise in sensor measurements, such models account for the inherent uncertainty in a robot's sensors.

Formally, the measurement model is defined as a conditional probability distribution $p(z_t \mid x_t, m)$, where $x_t$ is the robot pose, $z_t$ is the measurement at time $t$, and $m$ is the map of the environment.

Probabilistic robotics accommodates the inaccuracies of sensor models in the stochastic aspects of the model: by describing the measurement process as a conditional probability density $p(z_t \mid x_t)$ rather than a deterministic function $z_t = f(x_t)$, the uncertainty in the sensor is captured explicitly.
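As a minimal sketch of this idea, the density $p(z_t \mid x_t)$ for a range sensor can be modeled as a Gaussian centered on the noise-free range. The landmark position, pose format, and the noise parameter `sigma` below are illustrative assumptions, not from the text:

```python
import math

def gaussian_range_model(z, x, landmark, sigma=0.1):
    """Density p(z | x) of measuring range z from pose x to a known
    landmark, assuming zero-mean Gaussian sensor noise (an assumption
    for illustration; sigma is a made-up noise parameter)."""
    # Expected (noise-free) range from the pose to the landmark.
    z_expected = math.hypot(landmark[0] - x[0], landmark[1] - x[1])
    # Gaussian density centered on the expected range.
    return math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

pose = (0.0, 0.0)
landmark = (3.0, 4.0)  # true range is 5.0
# A measurement near the true range is far more probable than one far from it.
print(gaussian_range_model(5.05, pose, landmark) > gaussian_range_model(6.0, pose, landmark))
```

Unlike a deterministic model, this density assigns nonzero probability to every measurement, merely weighting plausible readings more heavily.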

Many sensors generate more than one numerical measurement value when queried. For example, cameras generate entire arrays of values such as brightness, saturation, and color. We will denote the number of such measurement values within a measurement $z_t$ by $K$:

$$z_t = \{ z_t^1, z_t^2, \ldots, z_t^K \}$$

We will use $z_t^k$ to refer to an individual measurement. The probability $p(z_t \mid x_t, m)$ is obtained as follows.

$$p(z_t \mid x_t, m) = \prod_{k=1}^{K} p(z_t^k \mid x_t, m)$$

Technically, this amounts to an independence assumption between the noise in each individual measurement beam, just as the Markov assumption asserts independent noise over time. In reality this assumption is often violated: if a single measurement fails, it is likely that other measurements will fail as well, for example due to a shared underlying hardware issue.
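Under that independence assumption, the product above can be evaluated directly; computing it in log space avoids numerical underflow when $K$ is large. The `beam_model` callable and its signature here are illustrative placeholders, not a specific model from the text:

```python
import math

def measurement_likelihood(z, x, m, beam_model):
    """p(z_t | x_t, m) under the per-beam independence assumption:
    the product of the K individual beam likelihoods, accumulated
    in log space to avoid underflow for large K."""
    log_p = 0.0
    for k, z_k in enumerate(z):
        log_p += math.log(beam_model(z_k, x, m, k))
    return math.exp(log_p)

# A placeholder beam model for illustration: every beam is equally likely.
uniform_beam = lambda z_k, x, m, k: 0.5

# With K = 3 beams, the joint likelihood is 0.5 ** 3 = 0.125 (up to
# floating-point rounding).
print(measurement_likelihood([1.0, 2.0, 3.0], None, None, uniform_beam))
```

Concrete beam models, such as the beam model for range finders covered in the next sections, supply the per-beam density $p(z_t^k \mid x_t, m)$ that this function multiplies together.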

To express the process of generating measurements, we need to specify the environment in which a measurement is generated. Formally, a map is a list of objects in the environment together with their locations and properties:

$$m = \{ m_1, m_2, \ldots, m_N \}$$

$N$ is the total number of objects in the environment, and each $m_n$ specifies a property. Maps are usually indexed in one of two ways, known as feature-based and location-based. In feature-based maps, $n$ is a feature index, and the value of $m_n$ contains the Cartesian location of the feature. In location-based maps, the index $n$ corresponds to a specific location; in planar maps, it is common to denote a map element by $m_{x,y}$ instead of $m_n$.
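The two indexing schemes can be sketched with simple data structures. The specific coordinates and occupancy values below are made up for illustration:

```python
# Feature-based map: n indexes a feature, and m_n stores its
# Cartesian location (illustrative landmark positions).
feature_map = {
    1: (2.0, 3.5),   # landmark 1 at (x=2.0, y=3.5)
    2: (7.1, 0.4),   # landmark 2 at (x=7.1, y=0.4)
}

# Location-based map: the index corresponds to a location. In a
# planar grid, m[x][y] holds the property of cell (x, y), here
# occupancy: 1 = occupied, 0 = free.
location_map = [
    [0, 0, 1],
    [0, 1, 1],
]

print(feature_map[1])      # location of feature 1
print(location_map[0][2])  # occupancy of cell (0, 2)
```

The key difference is what the index means: a feature-based map answers "where is object $n$?", while a location-based map answers "what is at location $(x, y)$?".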
