Particle filters determine the most likely current state by simulating a very large number of different possible states up to the current time, predicting the measurement for each, and assigning large weights to the predictions that are closest, in some sense, to the real measurement. A weighted sum of all of the predicted states is then taken, resulting in an average state that likely represents the underlying true state well.
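The steps above can be sketched as a single bootstrap-style update. This is an illustrative Python sketch of the general idea, not this toolbox's implementation; the `propagate`, `predict_meas`, and `weight_fcn` callables are hypothetical stand-ins for the user-provided functions described below.

```python
import numpy as np

def bootstrap_update(particles, z, propagate, predict_meas, weight_fcn, rng):
    """One bootstrap update: simulate, weight, average, resample.

    particles:    (n, nx) array of hypothesized states
    z:            the real measurement
    propagate:    maps a particle forward one step (with process noise)
    predict_meas: maps a particle to a predicted measurement
    weight_fcn:   relative weight of a measurement error (scale irrelevant)
    """
    n = particles.shape[0]

    # 1. Simulate each hypothesized state up to the current time.
    particles = np.array([propagate(p, rng) for p in particles])

    # 2. Predict the measurement for each particle and weight it by how
    #    close the prediction is to the real measurement.
    w = np.array([weight_fcn(z - predict_meas(p)) for p in particles])
    w = w / np.sum(w)

    # 3. The weighted sum of the particles is the state estimate.
    x_hat = w @ particles

    # 4. Resample (plain multinomial here) so likely particles survive.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx], x_hat
```

With a toy scalar random walk and a Gaussian-shaped weight function, the estimate lands near the measurement when the prior is diffuse.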
Particle filters almost always work, or at least manage to predict the state reasonably well compared to other architectures, because their requirements are low. They make no significant assumptions about the probability distributions of the process noise, the measurement noise, or the uncertainty over time. The price for this flexibility is runtime. Particle filters are generally extremely slow, requiring thousands of particles, and therefore thousands of simulations on each filter update, for even basic tasks.
The particle filter architecture included in
*kf is a bootstrap filter, a particularly simple and broadly applicable type. It is included for the sake of comparing its results to an unscented filter, for initial feasibility studies, or for actually filtering on a hard problem where runtime is of little concern.
For those who have MATLAB Coder installed, it is a great idea to check the Generate MEX box under Implementation. The MEX will run far faster, and the simulation will use the MEX automatically if it builds successfully.
In order to propagate the particles, the filter will call a user-provided function of the times, state, input, process noise, and any additional user-provided arguments.
x_k = f(t_km1, t_k, x_km1, u_km1, q_km1, ...);
q_km1 is the process noise associated with this particle.
The actual interface will depend on other options selected. See the Interfaces box to see what the filter expects for your chosen options.
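As an illustration of the role this propagation function plays, consider a toy double-integrator state (position and velocity). The dynamics below are invented for illustration, and this is a Python sketch rather than the MATLAB function with the interface above that the toolbox actually expects:

```python
import numpy as np

def propagate_double_integrator(t_km1, t_k, x_km1, u_km1, q_km1):
    """Propagate a [position, velocity] state from t_km1 to t_k.

    u_km1 is a commanded acceleration, and q_km1 is this particle's
    process noise, modeled here as an acceleration disturbance.
    """
    dt = t_k - t_km1
    pos, vel = x_km1
    vel = vel + (u_km1 + q_km1) * dt    # integrate acceleration
    pos = pos + vel * dt                # integrate (updated) velocity
    return np.array([pos, vel])
```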
When this is selected, the input vector,
u_km1, will be an input to the filter and will be passed to all of the functions in the propagation section, including the propagation function and process noise generator.
Particle filters can use process noise, but they need not assume that it is Gaussian. If none is required, select None, naturally. If the noise is indeed Gaussian with a constant covariance matrix, select Gaussian and provide the matrix. Otherwise, select Function and provide the name of a function to calculate the process noise. The interface will be:
q_km1 = q_fcn(t_km1, t_k, x_km1, u_km1, ...);
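For example, a non-Gaussian disturbance, say a uniformly distributed thrust error, might be drawn as in the following Python sketch. The bounds and time-step scaling are invented for illustration:

```python
import numpy as np

def uniform_q_fcn(t_km1, t_k, x_km1, u_km1, rng):
    """Draw uniformly distributed (non-Gaussian) process noise for one
    particle, scaled by the time step."""
    dt = t_k - t_km1
    return rng.uniform(-0.1, 0.1, size=x_km1.shape) * dt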
The filter will call a user-provided function to form the predicted measurement for each propagated particle. The interface will be something like:
z_hat_kkm1 = h(t_k, x_kkm1);
z_hat_kkm1 is the predicted measurement at sample k based on information from k-1 and
x_kkm1 is likewise the predicted state at k given information from k-1.
Note that this particle filter, unlike an unscented filter, does not use measurement noise.
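As an example of such an observation function, predicting a range and bearing measurement from a 2D position state might look like the following Python sketch (the measurement model is invented for illustration; the toolbox expects a MATLAB function with the interface above):

```python
import numpy as np

def range_bearing_obs(t_k, x_kkm1):
    """Predict a [range, bearing] measurement to the origin from a
    state whose first two elements are a 2D position."""
    px, py = x_kkm1[0], x_kkm1[1]
    return np.array([np.hypot(px, py),      # range
                     np.arctan2(py, px)])   # bearing
```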
The actual interface will depend on other options. See the Interfaces box to see what the filter expects for your chosen options.
When this is selected, the input vector,
u_km1, will be an input to the filter and will be passed to all of the functions in the observation section, including the observation function and the error probability function.
A particle filter needs to determine the probability or probability density of the measurement error for each particle (measurement minus predicted measurement). If the measurement error is Gaussian with a constant covariance matrix, select Gaussian and provide the matrix. Otherwise, select Function and provide a function to calculate the error probability with the following interface:
pr = pr_fcn(t_k, x_kkm1, z_k, z_hat_k, ...);
This need not return a true probability; all that matters is that the returned value provides a relative weight for the particle, used to form the weighted average and for resampling. The scale therefore does not matter, and normalization constants need not be calculated.
For instance, the probability density (without scaling factor) of a measurement error
y = z - z_hat, where the covariance depends on the state, would look like this:
function p = my_gaussian_pr_fcn(t, x, z, z_hat)
    y = z - z_hat;
    R = calculate_covariance(t, x);
    p = exp(-0.5 * y.' * inv(R) * y);
end
Further, when using a custom measurement error probability density function like this, the simulation will need to know how to generate measurement errors. It will therefore use a “true”, noisy observation function instead of the filter's observation function. For the example above, that function might look like:
function z = my_true_obs_fcn(t, x)
    R = calculate_covariance(t, x); % Create the covariance matrix.
    r = bnddraw(R, 1);              % Draw from the covariance matrix.
    z = h(t, x);                    % Create the nominal observation.
    z = z + r;                      % Add the noise.
end
Of course, the true observation function can be far more complex than this, with odd error distributions that contribute nonlinearly to the measurement.
Set the number of particles that the filter should use. This is expected to be constant, and if an initial set of particles is provided for Initial Uncertainty under Setup, then the number of particles provided should match.
By default, the bootstrap filter will resample from its set of particles on every update. However, this might not always be required. Check this box to add a
resample input to the filter inputs, allowing the user to specify when resampling should occur.
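For background, resampling replaces the weighted particle set with an equally weighted set drawn in proportion to the weights, concentrating particles where the probability mass is. A common low-variance scheme, systematic resampling, can be sketched as follows (an illustration only; the toolbox's internal scheme is not specified here):

```python
import numpy as np

def systematic_resample(particles, w, rng):
    """Resample particles in proportion to weights w using a single
    random offset and n evenly spaced selection points."""
    n = len(w)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(w / np.sum(w))
    idx = np.searchsorted(cumulative, positions)
    return particles[idx]
```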
By default, the bootstrap filter will regularize its particles on every update. However, this might not always be required. Check this box to add a
regularize input to the filter inputs, allowing the user to specify when regularization should occur.
Note that regularization can occur at any time, but should always occur if resampling occurs.
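For background, regularization jitters each particle slightly, typically with a kernel scaled to the particles' sample covariance, so that resampling doesn't collapse the set onto a few duplicated states. One common form can be sketched as follows (illustrative Python; `h` here plays the role of a bandwidth or tuning parameter):

```python
import numpy as np

def regularize(particles, h, rng):
    """Jitter an (n, nx) particle set with a Gaussian kernel whose
    covariance is the sample covariance scaled by the bandwidth h."""
    n, nx = particles.shape
    P = np.cov(particles.T).reshape(nx, nx)          # sample covariance
    L = np.linalg.cholesky(P + 1e-12 * np.eye(nx))   # matrix square root
    return particles + h * (rng.standard_normal((n, nx)) @ L.T)
```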
When this option is selected, the filter will have an additional input called
updates. This should be a vector with the same dimension as the measurement and full of booleans specifying which indices of the measurement have been updated since the last step of the filter.
For example, if
z_k is of length 5, and if its first two indices and last index have been updated with new measurements,
updates would be:
[true true false false true]
The updates vector will be passed to the error probability function, which must use it to determine the appropriate probability. To modify our custom Gaussian error model from above to use the updates vector, we'd have:
function p = my_gaussian_pr_fcn(t, x, z, z_hat, updates)
    y = z(updates) - z_hat(updates);
    R = calculate_covariance(t, x);
    p = exp(-0.5 * y.' * inv(R(updates, updates)) * y);
end
The tuning parameter controls the mean size of the displacement of each particle during regularization. Choose Gaussian to use a tuning parameter that's optimal for a Gaussian spread of particles, choose Constant to specify a single value to use for all regularization, or choose Input to provide the value on each call to the filter.
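For reference, the Gaussian-optimal value commonly used in regularized particle filters comes from kernel density estimation: for n particles and nx states, it is often taken as h = (4/(nx+2))^(1/(nx+4)) · n^(-1/(nx+4)). Whether this toolbox uses exactly this form is an assumption, but the standard result can be sketched as:

```python
def gaussian_optimal_bandwidth(n, nx):
    """Kernel bandwidth that is optimal when the underlying particle
    density is Gaussian (standard KDE result; assumed, not confirmed,
    to match this toolbox's Gaussian option)."""
    a = (4.0 / (nx + 2.0)) ** (1.0 / (nx + 4.0))
    return a * n ** (-1.0 / (nx + 4.0))
```

Note that the bandwidth shrinks as the particle count grows, so regularization perturbs the particles less when the set is dense.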