Notes on Chapter 1 of Maybeck 1979, Stochastic Models, Estimation, and Control.
1.1: Why stochastic models, estimation, and control?
A math model is never perfect, and its parameters are not known exactly. Sensors don’t provide perfect data either. Given these uncertainties, you still want to estimate the quantities of interest and control the system.
1.2: Overview of the text
skip.
1.3: The Kalman filter: an introduction to concepts
The Kalman filter is an “optimal linear estimator”. Given (1) knowledge of the system and the measurement device, (2) a statistical description of the noises and errors, and (3) initial-condition information, a Kalman filter combines all of this into an estimate. When the system is described by a linear model and the noise is white and Gaussian, the Kalman filter is the best estimator.
1.4: Basic assumptions
The model is linear, the noise is white (uncorrelated from one time to the next), and the noise amplitudes are Gaussian/normally distributed.
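For reference, the kind of model these assumptions describe can be written as (my notation, a generic discrete-time form rather than anything quoted from the book):

x(k+1) = F x(k) + B u(k) + w(k)    (state dynamics)
z(k)   = H x(k) + v(k)             (measurement)

with w and v zero-mean, white, Gaussian noises with covariances Q and R.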
1.5: A simple example
A static problem: Trying to determine your position, you have a single measurement z1 with a known precision/standard deviation. This gives a conditional probability density of position (conditioned on that measurement). The best estimate so far is z1 itself.
A friend takes a second measurement, z2, with smaller variance.
The best estimate is now a combination of the two that weights each measurement by its precision/variance (worked out below).
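If I recall the book's example correctly, the combined estimate is the variance-weighted average of the two measurements, and its variance is smaller than either individual variance:

mu = [sigma2^2 / (sigma1^2 + sigma2^2)] * z1 + [sigma1^2 / (sigma1^2 + sigma2^2)] * z2
1/sigma^2 = 1/sigma1^2 + 1/sigma2^2

So the more precise measurement gets the larger weight, and folding in any measurement always reduces the variance.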
There is a predictor-corrector structure to the way the estimate is made: we can take the previous best estimate and associated standard deviation and then “correct” it with the new data and new standard deviation.
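Rearranging the same combination into predictor-corrector form (again paraphrasing the example, with x_hat(t1) = z1 before the second measurement arrives):

x_hat(t2) = x_hat(t1) + K(t2) * [z2 - x_hat(t1)]
K(t2) = sigma1^2 / (sigma1^2 + sigma2^2)
sigma^2(t2) = sigma^2(t1) - K(t2) * sigma^2(t1)

The gain K says how much to trust the new measurement relative to the old estimate, and the correction is proportional to the residual z2 - x_hat(t1).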
A dynamic problem: Now add a motion model. Propagating the estimate through it evolves the pdf forward in time, and the process noise it adds increases the standard deviation. This gives a predicted estimate and variance; then take a measurement and apply the corrector step from above.
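A minimal sketch of the whole cycle in Python, assuming (my choices, not the book's notation) a scalar motion model dx/dt = u + w with known nominal velocity u and process-noise strength q, and measurements z = x + v with variance r:

```python
# Minimal 1-D predict/correct sketch of the Chapter 1 example.
# Assumptions (mine): motion model dx/dt = u + w, with u a known nominal
# velocity and w white noise whose variance grows at rate q per unit time;
# measurements z = x + v with measurement-noise variance r.

def predict(x_est, var, u, dt, q):
    """Propagate the estimate through the motion model; uncertainty grows."""
    x_pred = x_est + u * dt    # move the mean along the nominal velocity
    var_pred = var + q * dt    # process noise inflates the variance
    return x_pred, var_pred

def correct(x_pred, var_pred, z, r):
    """Fold in a new measurement z with variance r (the static-example update)."""
    k = var_pred / (var_pred + r)        # gain: how much to trust the measurement
    x_est = x_pred + k * (z - x_pred)    # predictor-corrector form
    var = (1.0 - k) * var_pred           # variance shrinks after the update
    return x_est, var

# Example run with made-up numbers:
x, var = 0.0, 1.0                                 # initial estimate from a first measurement
x, var = predict(x, var, u=2.0, dt=0.5, q=0.1)    # motion model evolves the pdf
x, var = correct(x, var, z=1.2, r=0.25)           # new measurement corrects it
print(x, var)
```

Each predict step moves the mean and inflates the variance; each correct step pulls the mean toward the measurement and shrinks the variance, i.e., the two alternating phases described above.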