Abstract
The elementary approach to the derivation of the optimal Kalman filtering process discussed in Chapter 2 has the advantage that the optimal estimate \({\hat x_k} = {\hat x_{k|k}}\) of the state vector \(x_k\) is easily understood to be a least-squares estimate of \(x_k\) with the properties that (i) the transformation that yields \({\hat x_k}\) from the data \({\bar v_k} = {[v_0^T \cdots v_k^T]^T}\) is linear, (ii) \({\hat x_k}\) is unbiased in the sense that \(E({\hat x_k}) = E(x_k)\), and (iii) it yields a minimum variance estimate with \({(\mathrm{Var}({\underline{\bar \varepsilon}_{k,k}}))^{-1}}\) as the optimal weight. The disadvantage of this elementary approach is that certain matrices must be assumed to be nonsingular. In this chapter, we will drop the nonsingularity assumptions and give a rigorous derivation of the Kalman filtering algorithm.
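The three properties listed in the abstract (linearity in the data, unbiasedness, and minimum variance with the inverse noise covariance as weight) can be illustrated with a minimal weighted least-squares sketch. This is not the chapter's derivation; the measurement matrix, noise covariance, and state below are hypothetical choices for illustration only.

```python
import numpy as np

def wls_estimate(C, R, v):
    """Weighted least-squares estimate x_hat = (C^T R^{-1} C)^{-1} C^T R^{-1} v.

    The map v -> x_hat is linear, the estimate is unbiased
    (E(x_hat) = E(x)), and taking the weight to be the inverse of the
    measurement-noise covariance R gives the minimum-variance linear
    unbiased estimate (Gauss-Markov).
    """
    W = np.linalg.inv(R)
    return np.linalg.solve(C.T @ W @ C, C.T @ W @ v)

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])               # hypothetical 2-D constant state
C = rng.normal(size=(6, 2))                  # hypothetical measurement matrix
R = np.diag(rng.uniform(0.5, 2.0, size=6))   # hypothetical noise covariance

v = C @ x_true + rng.multivariate_normal(np.zeros(6), R)
x_hat = wls_estimate(C, R, v)                # estimate close to x_true
```

Note that forming \(C^T R^{-1} C\) requires that matrix to be nonsingular; dropping such nonsingularity assumptions is precisely what motivates the orthogonal-projection derivation of this chapter.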
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this chapter
Chui, C.K., Chen, G. (1999). Orthogonal Projection and Kalman Filter. In: Kalman Filtering. Springer Series in Information Sciences, vol 17. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-03859-8_3
DOI: https://doi.org/10.1007/978-3-662-03859-8_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64611-2
Online ISBN: 978-3-662-03859-8