Particle Filter

The particle filter, like the histogram filter, approximates the posterior with a finite number of parameters. The two differ in how these parameters are generated and in how they populate the state space.

The key idea of the particle filter is to represent the posterior $bel(x_t)$ by a set of random state samples drawn from this posterior. Such a representation is approximate, but it is nonparametric and can therefore represent a much broader space of distributions than, for example, a Gaussian. Another advantage of the sample-based representation is its ability to model nonlinear transformations of random variables.

In a particle filter, the samples of a posterior distribution are called particles.

$$\chi_t = x_t^1, x_t^2, \ldots, x_t^M$$

Each particle $x_t^m$ is a concrete instantiation of the state at time $t$. In other words, a particle is a hypothesis as to what the true world state may be at time $t$. Here $M$ denotes the number of particles in the particle set $\chi_t$.
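
As a concrete sketch (assuming a planar state $(x, y, \theta)$ and a hypothetical square, cyclic world of side world_size, like the one used in the code later on this page), a particle set is simply a collection of random state hypotheses:

import math
import random

M = 1000            # number of particles (hypothetical choice)
world_size = 100.0  # hypothetical square, cyclic world

# chi_0: M state hypotheses (x, y, theta), drawn uniformly here
# because nothing is known about the state yet
chi = [(random.uniform(0, world_size),
        random.uniform(0, world_size),
        random.uniform(0, 2 * math.pi))
       for _ in range(M)]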

Algorithm

$$
\begin{aligned}
&\text{initialize } \overline{\chi}_t = \chi_t = \emptyset \\
&\text{for } m = 1 \text{ to } M \text{ do:} \\
&\qquad \text{sample } x_t^m \text{ from } p(x_t \mid u_t, x_{t-1}^m) \\
&\qquad w_t^m = p(z_t \mid x_t^m) \\
&\qquad \overline{\chi}_t = \overline{\chi}_t + \langle x_t^m, w_t^m \rangle \\
&\text{endfor} \\
&\text{for } m = 1 \text{ to } M \text{ do:} \\
&\qquad \text{draw } i \text{ with probability } \propto w_t^i \\
&\qquad \text{add } x_t^i \text{ to } \chi_t \\
&\text{endfor} \\
&\text{return } \chi_t
\end{aligned}
$$

The intuition behind the particle filter is to approximate the belief $bel(x_t)$ by the set of particles $\chi_t$. Ideally, the likelihood for a state hypothesis $x_t$ to be included in the particle set should be proportional to its Bayes filter posterior $bel(x_t)$.

Just like all other Bayes filter algorithms, the particle filter constructs the new belief recursively from the belief one time step earlier. Since beliefs are represented by sets of particles, the filter constructs the new particle set recursively from the previous particle set, as in the sketch below.
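
A minimal Python sketch of this recursion, assuming a Particle class with the move and importance_weight methods shown in the steps below:

import random

def particle_filter(chi_prev, u, z):
    # chi_prev: particle set from time t-1
    # u: control (turn, forward); z: measurement vector
    predictions, weights = [], []

    # First loop: sample from the motion model, weight by the measurement model
    for p in chi_prev:
        x = p.move(*u)                          # sample x_t^m from p(x_t | u_t, x_{t-1}^m)
        predictions.append(x)
        weights.append(x.importance_weight(z))  # w_t^m = p(z_t | x_t^m)

    # Second loop: resample M particles with replacement,
    # with probability proportional to the importance weights
    return random.choices(predictions, weights=weights, k=len(chi_prev))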

Details

Step One

$$
\begin{aligned}
&\text{for } m = 1 \text{ to } M \text{ do:} \\
&\qquad \text{sample } x_t^m \text{ from } p(x_t \mid u_t, x_{t-1}^m)
\end{aligned}
$$

We generate a hypothetical state $x_t^m$ based on the particle from the previous time step and the control at the current time step. The set of particles obtained from this first for loop is the filter's representation of $\overline{bel}(x_t)$.

import math
import random

class Particle:
    def move(self, turn, forward):
        if forward < 0:
            raise ValueError("robot can't move backwards")

        # Turn, and add randomness to the turning command
        theta = self.theta + float(turn) + random.gauss(0.0, self.turn_noise)
        theta %= 2 * math.pi

        # Move, and add randomness to the motion command
        dist = float(forward) + random.gauss(0.0, self.forward_noise)
        x = self.x + (math.cos(theta) * dist)
        y = self.y + (math.sin(theta) * dist)
        x %= world_size    # cyclic truncate; world_size is a module-level global
        y %= world_size

        # Create new particle and return it
        p = Particle()
        p.set(x, y, theta)
        p.set_noise(self.forward_noise, self.turn_noise, self.sense_noise)
        return p

Notice that move produces a new particle state: it is equivalent to sampling a new state given the control (turn and forward) and the previous state (self.x, self.y, self.theta).
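
For example, one prediction step for a single particle might look like this (the world size, start pose, and noise values are hypothetical):

world_size = 100.0            # cyclic world, as assumed by move
p = Particle()
p.set(50.0, 50.0, 0.0)        # x, y, heading
p.set_noise(0.05, 0.05, 5.0)  # forward, turn, sense noise
p_next = p.move(0.1, 5.0)     # one sample from p(x_t | u_t, x_{t-1}^m)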

Step Two

$$
\begin{aligned}
&\qquad w_t^m = p(z_t \mid x_t^m) \\
&\qquad \overline{\chi}_t = \overline{\chi}_t + \langle x_t^m, w_t^m \rangle \\
&\text{endfor}
\end{aligned}
$$

The weight $w$ is known as the importance factor, or particle score. Importance factors are used to incorporate measurements: the importance of a particle is the probability of the measurement $z_t$ under the particle $x_t^m$. We compute the particle weight (importance) and then add the new particle and its weight to the prediction set, denoted $\overline{\chi}_t$.

import math

def gaussian(mu, sigma, x):
    # Standard 1-D Gaussian density; the page assumes a helper like this.
    return math.exp(-((mu - x) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

class Particle:
    def importance_weight(self, measurement):
        # The likelihood of the measurement gives the importance weight.
        # landmarks is a module-level list of (x, y) landmark positions,
        # and measurement[i] is the sensed range to landmarks[i].
        prob = 1.0
        for i in range(len(landmarks)):
            dist = math.sqrt((self.x - landmarks[i][0]) ** 2 + (self.y - landmarks[i][1]) ** 2)
            prob *= gaussian(dist, self.sense_noise, measurement[i])
        return prob
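
As a usage sketch, with four hypothetical landmarks at the corners of the region and a range reading to each:

landmarks = [(20.0, 20.0), (20.0, 80.0), (80.0, 20.0), (80.0, 80.0)]

p = Particle()
p.set(50.0, 50.0, 0.0)
p.set_noise(0.05, 0.05, 5.0)

measurement = [42.4, 42.4, 42.4, 42.4]  # noisy ranges to the four landmarks
w = p.importance_weight(measurement)    # large when the ranges fit the pose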

Step Three

$$
\begin{aligned}
&\text{for } m = 1 \text{ to } M \text{ do:} \\
&\qquad \text{draw } i \text{ with probability } \propto w_t^i \\
&\qquad \text{add } x_t^i \text{ to } \chi_t \\
&\text{endfor}
\end{aligned}
$$

The real trick is the re-sampling step. The algorithm draws, with replacement, $M$ particles from the temporary set $\overline{\chi}_t$. The probability of drawing each particle is given by its importance weight. Re-sampling transforms a particle set into another particle set of the same size; by incorporating the importance weights into the re-sampling process, it changes the distribution of the particles.

The re-sampling step is a probabilistic implementation of the Darwinian idea of survival of the fittest. It refocuses the particle set on regions of the state space with high posterior probability, and thereby concentrates the computational resources of the filter where they matter most.

import numpy as np

# N (number of particles) and measurements (current sensor readings)
# are assumed to be defined earlier.
samples = []
for i in range(N):
    p = Particle()
    p.set_noise(0.05, 0.05, 5.0)  # set noise before moving, so move() uses it
    samples.append(p)

# update particles: propagate each one through the motion model
for i in range(N):
    samples[i] = samples[i].move(0.1, 5.0)

# calculate weights and normalize them to sum to one
weights = np.array([s.importance_weight(measurements) for s in samples])
weights = weights / np.sum(weights)

# resample with replacement, proportional to the weights
indices = np.random.choice(N, size=N, p=weights)
resamples = [samples[i] for i in indices]
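
Drawing each particle independently, as above, can by chance drop particles that still carry useful probability mass. A common alternative from the particle filter literature is low-variance (systematic) re-sampling, which sweeps a single random offset through the cumulative weights; a sketch:

import random

def low_variance_resample(samples, weights):
    # weights must be normalized to sum to one
    M = len(samples)
    r = random.uniform(0, 1.0 / M)  # single random starting offset
    c = weights[0]                  # running cumulative weight
    i = 0
    resampled = []
    for m in range(M):
        U = r + m / M               # M equally spaced pointers
        while U > c:
            i += 1
            c += weights[i]
        resampled.append(samples[i])
    return resampled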

Mathematical Derivation

Think of particles as samples of state sequences. One sequence can be described as follows.

$$x_{0:t}^m = x_0^m, x_1^m, \ldots, x_t^m$$

The particle filter calculates the posterior over all state sequences.

$$bel(x_{0:t}) = p(x_{0:t} \mid u_{1:t}, z_{1:t})$$

Notice that only the state is defined at $t = 0$, because the first control action and measurement data arrive at $t = 1$.

Using the same technique as in the Bayes filter derivation:

$$p(x_{0:t} \mid z_{1:t}, u_{1:t}) = \eta \, p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) \, p(x_{0:t} \mid z_{1:t-1}, u_{1:t})$$

Apply the Markov assumption: $x_t$ is complete, so no variables prior to $x_t$ may influence the stochastic evolution of future states. (not yet completed)

$$
\begin{aligned}
&\xrightarrow{\text{Markov}} \eta \, p(z_t \mid x_t) \, p(x_{0:t} \mid z_{1:t-1}, u_{1:t}) \\
&= \eta \, p(z_t \mid x_t) \, p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) \, p(x_{0:t-1} \mid z_{1:t-1}, u_{1:t}) \\
&= \eta \, p(z_t \mid x_t) \, p(x_t \mid x_{t-1}, u_t) \, p(x_{0:t-1} \mid z_{1:t-1}, u_{1:t-1})
\end{aligned}
$$
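
A sketch of the remaining step, following the standard importance-sampling argument: the first loop of the algorithm samples from the proposal $p(x_t \mid x_{t-1}, u_t) \, bel(x_{0:t-1})$, while the distribution we actually want is the target posterior derived above. Their ratio is exactly the importance weight:

$$
w_t^m = \frac{\text{target}}{\text{proposal}} = \frac{\eta \, p(z_t \mid x_t^m) \, p(x_t^m \mid x_{t-1}^m, u_t) \, bel(x_{0:t-1}^m)}{p(x_t^m \mid x_{t-1}^m, u_t) \, bel(x_{0:t-1}^m)} = \eta \, p(z_t \mid x_t^m)
$$

Re-sampling with probability proportional to $w_t^m$ therefore converts samples of the proposal into samples of the desired posterior $bel(x_{0:t})$; since each $x_t^m$ extends a sequence $x_{0:t-1}^m$, the end state $x_t^m$ alone is distributed according to $bel(x_t)$.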
