`*kf` can automatically build tests for most filter settings. These tests show how to use the generated filter and give a sense of the filter's performance when the true system is exactly as described by the filter's propagation and measurement functions and noise covariances.

To generate these tests, check the Generate Example box under Tests.

The generated example code shows:

- How to integrate the filter into a simulation
- How well the filter performs when the dynamics in the filter are used as the true dynamics

The code first initializes the estimate, covariance, and any filter constants. It then generates a true initial state, which will be displaced from the estimate by a random amount with the appropriate covariance.

Then, the code enters a simulation loop at the specified time step. At each step, the true state is propagated (using whatever method of propagation the filter uses), and this true propagation is perturbed by process noise.

The code then creates a measurement from the true state with measurement noise.

The measurement and prior filter state are provided to the filter, which updates and returns the new estimate and covariance.

The loop then continues.

When the end time is reached, the true state and filtered state are plotted on top of each other. The truth, estimates, and covariance are output.
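For illustration, the simulation loop described above can be sketched in Python with a simple linear system. This is a hand-written analog, not *kf's generated MATLAB code; the system matrices `F`, `H`, `Q`, and `R` and the 250-step length are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder linear system -- stand-ins for the filter's propagation and
# measurement functions and noise covariances.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition over one time step
H = np.array([[1.0, 0.0]])   # measurement map
Q = 0.01 * np.eye(2)         # process noise covariance
R = np.array([[0.1]])        # measurement noise covariance

# Initialize the estimate and covariance.
x_hat = np.zeros(2)
P = np.eye(2)

# Generate a true initial state displaced from the estimate by a random
# amount with the appropriate covariance.
x = x_hat + rng.multivariate_normal(np.zeros(2), P)

for k in range(250):
    # Propagate the truth and perturb it with process noise.
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)

    # Create a measurement from the true state with measurement noise.
    z = H @ x + rng.multivariate_normal(np.zeros(1), R)

    # Provide the measurement and prior state to the filter, which returns
    # the new estimate and covariance (a textbook Kalman update here).
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    y = z - H @ x_hat                  # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_hat = x_hat + K @ y
    P = (np.eye(2) - K @ H) @ P
```

In a real run, the true state `x`, the estimate `x_hat`, and the covariance `P` would be stored at each step for the plots described below.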

`*kf` can also generate a Monte-Carlo wrapper for the simulation. This wrapper runs the example above in a loop some number of times. On each iteration, it sets a different random number seed; this forces each run to differ from the others, while the entire set of runs remains the same each time, and any individual run can be recreated on its own. The wrapper records the errors and the covariance over time.
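The per-run seeding scheme might look like the following Python sketch, where `run_sim` is a hypothetical stand-in for the single-run simulation above:

```python
import numpy as np

def run_sim(rng):
    # Stand-in for the single-run simulation above; returns the estimate
    # error at each of 250 samples.
    return rng.standard_normal(250)

n_runs = 100
errors = np.empty((n_runs, 250))
for r in range(n_runs):
    # A distinct, deterministic seed for each run: every run differs from
    # every other run, but the full set of runs is repeatable, and run r
    # can be recreated on its own.
    rng = np.random.default_rng(r)
    errors[r] = run_sim(rng)
```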

When all of the runs are completed, the wrapper will produce a set of statistics to determine if the filter has worked as expected -- that is, if the true errors match the errors predicted by the covariance. This will be presented as a series of plots. These plots are documented below, and their names and conventions closely follow the suggestions of *Estimation with Applications to Tracking and Navigation* by Bar-Shalom, Li, and Kirubarajan, chapter 5, section 4: Consistency of State Estimators.

This plot shows whether the estimate error is consistent with the covariance matrix over time. For each sample of each run, the normalized error is computed:

$$\left(x_k - \hat{x}_k\right)^T P_k^{-1} \left(x_k - \hat{x}_k\right)$$

This normalized error is averaged across runs for the current sample, and the results are shown over time.

These normalized errors should have a \(\chi^2\) distribution with a number of degrees of freedom determined by the size of the state and the number of runs. We can therefore calculate the theoretical 95% confidence bounds and check whether, indeed, 95% of the errors fall within them. We can also calculate the empirical 95% confidence bounds -- the bounds actually containing 95% of the data. Both sets of bounds are drawn on the plot, and the percentage is printed to the command line.
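As a sketch of this calculation (in Python rather than the generated MATLAB, and assuming the errors and covariances have been stacked into arrays), the run-averaged normalized estimate error squared and its theoretical bounds might be computed as follows. The key fact is that \(N\) times the average NEES is \(\chi^2\)-distributed with \(N n_x\) degrees of freedom for \(N\) runs and \(n_x\) states.

```python
import numpy as np
from scipy.stats import chi2

def nees_and_bounds(err, P, conf=0.95):
    """Run-averaged NEES per sample with theoretical confidence bounds.

    err: (n_runs, nx, ns) estimate errors, x - x_hat
    P:   (n_runs, nx, nx, ns) filter covariances
    """
    n_runs, nx, ns = err.shape
    nees = np.empty((n_runs, ns))
    for r in range(n_runs):
        for k in range(ns):
            e = err[r, :, k]
            nees[r, k] = e @ np.linalg.solve(P[r, :, :, k], e)
    avg = nees.mean(axis=0)  # average across runs for each sample

    # N times the average NEES is chi-squared with N*nx degrees of freedom.
    alpha = 1.0 - conf
    lo = chi2.ppf(alpha / 2.0, n_runs * nx) / n_runs
    hi = chi2.ppf(1.0 - alpha / 2.0, n_runs * nx) / n_runs
    return avg, lo, hi
```

For a consistent filter, the averaged NEES should hover near \(n_x\) and stay within the bounds roughly 95% of the time.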

Similar to the above test, this test shows if filter errors are consistent with the covariance matrix, but it shows the results for each individual state. The errors in this case are calculated as:

$$\left(x_{i,k} - \hat{x}_{i,k}\right) / \sqrt{P_{i,i,k}}$$

where \(P_{i,i,k}\) is the \(i\)th diagonal element of the covariance matrix for the \(k\)th sample, and similarly \(x_{i,k}\) is the \(i\)th element of the state or state estimate for sample \(k\). Again, these errors are averaged across runs for each sample.

Using this plot, it's easy to determine which state errors are causing inconsistencies.

The errors in this case should be Gaussian. Similarly, theoretical and empirical 95% confidence bounds are determined, and the percentage of errors within the theoretical bounds is printed to the command window.
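A Python sketch of this per-state normalization follows; the array layouts are assumptions for illustration, not *kf's actual output format.

```python
import numpy as np

def normalized_state_errors(err, P):
    """Per-state errors normalized by the covariance's diagonal.

    err: (n_runs, nx, ns) estimate errors
    P:   (n_runs, nx, nx, ns) filter covariances
    Returns an (nx, ns) array, averaged across runs for each sample.
    """
    sigma = np.sqrt(np.diagonal(P, axis1=1, axis2=2))  # (n_runs, ns, nx)
    sigma = np.moveaxis(sigma, 1, 2)                   # (n_runs, nx, ns)
    return (err / sigma).mean(axis=0)
```

For a consistent filter, each averaged error should be roughly Gaussian with standard deviation \(1/\sqrt{N}\) for \(N\) runs, so the theoretical 95% bounds are about \(\pm 1.96/\sqrt{N}\).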

This test is the same as the normalized estimate error squared, but is produced for the innovation vector and corresponding innovation covariance.

This plot is only produced when the filter outputs both the innovation vector and innovation covariance.
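For one sample, the test statistic is formed from the innovation and its covariance just as the NEES is formed from the state error and \(P\). A minimal Python sketch (the function name is illustrative):

```python
import numpy as np

def nis(nu, S):
    """Normalized innovation squared for one sample.

    nu: (m,) innovation; S: (m, m) innovation covariance.
    """
    return float(nu @ np.linalg.solve(S, nu))
```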

This plot shows the overall autocorrelation of the innovation vector for every number of samples of lag, from 1 up to the number of samples in the simulation, along with confidence bounds. This is used to test the estimator's “whiteness” -- the tendency to produce zero-mean innovations without discernible frequency content. No discrete signal is purely white, of course, so the degree of consistency required is a matter of judgment.

This plot is only produced when the filter outputs the innovation vector.
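A rough Python sketch of the overall autocorrelation computation (the normalization and array layout here are assumptions):

```python
import numpy as np

def innovation_autocorrelation(nu, max_lag):
    """Overall autocorrelation of the innovation time history.

    nu: (m, ns) innovations (m elements, ns samples)
    Returns the autocorrelation for lags 1..max_lag, normalized so that
    a lag of zero would give exactly 1.
    """
    nu0 = nu - nu.mean(axis=1, keepdims=True)   # remove any mean
    denom = np.sum(nu0 * nu0)
    rho = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        rho[lag - 1] = np.sum(nu0[:, lag:] * nu0[:, :-lag]) / denom
    return rho
```

For a white innovation sequence of length \(n_s\), these values should mostly fall within roughly \(\pm 1.96 / \sqrt{n_s}\) of zero.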

This plot shows the autocorrelation of each element of the innovation vector over time, for lags of 1 sample, 2 samples, 3 samples, and so on, up to 10 samples. This helps determine which element of the predicted measurement is failing the whiteness test and at what time (for up to 10 samples of lag).

This plot is only produced when the filter outputs the innovation vector.

Check this box to generate any of the tests. This is the single simulation described above. It will output the true state over time, the estimate over time, the covariance over time, and the innovation and innovation covariance, if the filter outputs these. The states will be `nx`-by-`ns` (number of states by number of samples), and the covariance will be `nx`-by-`nx`-by-`ns`.

Enter the time step the simulation should use.

Enter the number of steps to simulate. Note that the initialization is included in the output time histories, so for a 10-element state vector run for 250 steps, the output time histories will be 10-by-251.

If an input vector is passed to any propagation or measurement functions (according to the checkboxes), then the simulation will need an input vector to use. It can use a constant value or can pass the state estimate to a function on each sample in order to determine the input vector. For a constant, provide the value (can be a variable in the workspace). For a function, provide the name of the function. The interface will be:

```
% Determine the input vector from the current time and state estimate.
u_km1 = u_fcn(t_km1, x_km1, ...);
```

Check this box to generate the MC wrapper described above.

This is the number of simulations to run to build up the statistical properties of the filter. More runs result in better accuracy.

`*kf` v1.0.3, January 18th, 2017

©2017 An Uncommon Lab