Abstract
The elementary approach to the derivation of the optimal Kalman filtering process discussed in Chapter 2 has the advantage that the optimal estimate \( \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_{k|k} \) of the state vector \( \mathbf{x}_k \) is easily understood to be a least-squares estimate of \( \mathbf{x}_k \) with the following properties: (i) the transformation that yields \( \hat{\mathbf{x}}_k \) from the data \( \overline{\mathbf{v}}_k = [\mathbf{v}_0^{\mathsf{T}} \cdots \mathbf{v}_k^{\mathsf{T}}]^{\mathsf{T}} \) is linear; (ii) \( \hat{\mathbf{x}}_k \) is unbiased in the sense that \( E(\hat{\mathbf{x}}_k) = E(\mathbf{x}_k) \); and (iii) it is a minimum-variance estimate with \( (\mathit{Var}(\underline{\epsilon}_{k,k}))^{-1} \) as the optimal weight. The disadvantage of this elementary approach is that certain matrices must be assumed to be nonsingular. In this chapter, we drop the nonsingularity assumptions and give a rigorous derivation of the Kalman filtering algorithm.
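The filtering process the abstract refers to can be illustrated with a minimal sketch of one predict/update cycle. This is not the chapter's derivation; it is a hedged scalar example assuming the standard state-space model \( x_{k+1} = a x_k + w_k \), \( v_k = c x_k + \eta_k \) with noise variances \( q \) and \( r \) (all symbols here are assumptions for illustration, not taken from the text):

```python
# Scalar Kalman filter sketch (illustrative only; the model x_{k+1} = a*x_k + w_k,
# v_k = c*x_k + eta_k and the parameter values below are assumptions, not from the chapter).

def kalman_step(x_hat, p, v, a=1.0, c=1.0, q=0.01, r=0.25):
    """One predict/update cycle: returns the filtered estimate and its variance."""
    # Predict: propagate the previous estimate and its variance through the model.
    x_pred = a * x_hat
    p_pred = a * p * a + q
    # Update: the Kalman gain weights the innovation v - c*x_pred, giving the
    # linear, unbiased, minimum-variance combination of prediction and data.
    gain = p_pred * c / (c * p_pred * c + r)
    x_new = x_pred + gain * (v - c * x_pred)
    p_new = (1.0 - gain * c) * p_pred
    return x_new, p_new

# Track a constant true state (here 1.0) from noisy measurements.
x_hat, p = 0.0, 1.0
for v in [1.1, 0.9, 1.05, 0.95, 1.0]:
    x_hat, p = kalman_step(x_hat, p, v)
```

Note that the gain computation divides by \( c\,P_{k|k-1}\,c + r \), which is positive whenever \( r > 0 \); the rigorous derivation in this chapter addresses precisely the cases where such invertibility cannot be taken for granted.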
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
(2009). Orthogonal Projection and Kalman Filter. In: Kalman Filtering. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87849-0_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-87848-3
Online ISBN: 978-3-540-87849-0
eBook Packages: Physics and Astronomy (R0)