Abstract
This chapter treats total least squares (TLS), which in statistics corresponds to orthogonal regression. Several extensions are discussed, including ways to model how uncertainties in different matrix elements may be related or correlated. The application of TLS to the identification of dynamic systems is also treated.
Appendix
11.A Further Details
11.A.1 The Eckart–Young–Mirsky Lemma
There is a neat result, due to Eckart and Young (1936) and Mirsky (1960), on the optimal low-rank approximation of a given matrix. It will be useful when deriving the solution to the TLS problem.
Lemma 11.3
Consider an \(m \times n\) matrix \(\mathbf {C}\) with \(m \ge n\). Let \(\mathbf {C}\) have a singular value decomposition
\[
\mathbf {C} = \begin{pmatrix} \mathbf {U}_1 & \mathbf {U}_2 \end{pmatrix}
\begin{pmatrix} \varvec{\varSigma }_1 & \mathbf {0} \\ \mathbf {0} & \varvec{\varSigma }_2 \end{pmatrix}
\begin{pmatrix} \mathbf {V}_1 & \mathbf {V}_2 \end{pmatrix}^T ,
\]
where \(\varvec{\varSigma }_1\) is an \(r \times r\) matrix containing the \(r\) largest singular values, and the other matrices have compatible dimensions.
The matrix \(\hat{\mathbf {C}}\) defined as the best rank-\(r\) approximation of \(\mathbf {C}\) in the Frobenius norm,
\[
\hat{\mathbf {C}} = \arg \min _{\mathrm {rank}(\mathbf {D}) \le r} \Vert \mathbf {C} - \mathbf {D} \Vert _F ,
\]
is given by
\[
\hat{\mathbf {C}} = \mathbf {U}_1 \varvec{\varSigma }_1 \mathbf {V}_1^T .
\]
Further,
\[
\Vert \mathbf {C} - \hat{\mathbf {C}} \Vert _F^2 = \sum _{i=r+1}^{n} \sigma _i^2 ,
\]
where \(\sigma _{r+1}, \dots , \sigma _n\) denote the discarded singular values.
Proof
See Eckart and Young (1936). \(\blacksquare \)
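The lemma is easy to verify numerically. The sketch below (with arbitrarily chosen matrix sizes and rank) builds the rank-\(r\) truncated SVD and checks that its Frobenius error equals the square root of the sum of the discarded squared singular values, and that a random rank-\(r\) competitor does no better:

```python
import numpy as np

# Illustrative sketch of the Eckart-Young-Mirsky lemma (Lemma 11.3):
# the rank-r truncated SVD is the best rank-r approximation of C in
# the Frobenius norm. Sizes m, n and rank r are arbitrary demo choices.
rng = np.random.default_rng(0)
m, n, r = 8, 5, 2
C = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Best rank-r approximation: keep the r largest singular triplets.
C_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# The approximation error equals sqrt(sigma_{r+1}^2 + ... + sigma_n^2).
err = np.linalg.norm(C - C_hat, "fro")
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 2)))

# Any other rank-r matrix gives at least as large an error; check one
# random candidate built from a rank-r factorization.
D = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
assert np.linalg.norm(C - D, "fro") >= err
```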
11.A.2 Characterization of the TLS Solution
11.A.2.1 Proof of Lemma 11.1
Set
One needs to find \(\varDelta \mathbf {C}\) with minimal norm, such that the perturbed matrix \(\mathbf {C} + \varDelta \mathbf {C}\) is rank deficient. By applying Lemma 11.3, one gets \(r = n-1\) and
\[
\varDelta \mathbf {C} = \hat{\mathbf {C}} - \mathbf {C} = - \sigma _n \mathbf {u}_n \mathbf {v}_n^T , \qquad
\Vert \varDelta \mathbf {C} \Vert _F = \sigma _n ,
\]
and the lemma follows directly.
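The rank-reduction argument above can be exercised on a small example. The sketch below assumes the common TLS formulation in which \(\mathbf {C}\) stacks noisy regressors and noisy observations column-wise (the chapter's definition (11.5) may differ in detail); the estimate is then read off from the right singular vector belonging to the smallest singular value:

```python
import numpy as np

# Hedged sketch of the basic TLS solution via the SVD, assuming the
# common formulation C = [A  b] with noisy regressors A and noisy
# observations b (the chapter's definition (11.5) may differ).
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])

A0 = rng.standard_normal((100, 2))
b0 = A0 @ theta_true
A = A0 + 0.01 * rng.standard_normal(A0.shape)  # noise on the regressors
b = b0 + 0.01 * rng.standard_normal(b0.shape)  # noise on the output

C = np.column_stack([A, b])
U, s, Vt = np.linalg.svd(C)
v = Vt[-1]  # right singular vector of the smallest singular value

# Reducing the rank by one (r = n - 1 in Lemma 11.3) makes C + dC
# singular; the TLS estimate is the null vector of the perturbed
# matrix, normalized so its last component is -1.
theta_tls = -v[:-1] / v[-1]

assert np.allclose(theta_tls, theta_true, atol=0.05)
```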
11.A.2.2 Proof of Remark 11.3
First establish \(\mathbf {V}_1^T \mathbf U= \mathbf {0}\) and
Using the definition (11.5) of \(\mathbf {C}\), it thus holds that
Spelling out the lower part of this equation gives
which proves (11.7).
11.A.3 Proof of Lemma 11.2
To examine the optimization problem, introduce the Lagrange multiplier vector \(\varvec{\lambda }\) and the Lagrange function
Setting the gradient of \(L\) with respect to \(\varvec{\eta }\) to zero leads to
Considering the constraint (11.21) (or setting the gradient of \(L\) with respect to \(\varvec{\lambda }\) to zero) leads to
Therefore
which always satisfies the constraint (11.21). Furthermore, simple algebra shows that the minimal value of the loss function (with respect to \(\varvec{\eta }\)) becomes
which shows (11.23) and completes the proof.
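The chapter's specific loss function and constraint (11.21) are not reproduced here, so as a hedged illustration the sketch below works through the same Lagrange-multiplier pattern on a generic instance: minimize \(\varvec{\eta }^T \mathbf {W} \varvec{\eta }\) subject to a linear constraint \(\mathbf {G} \varvec{\eta } = \mathbf {g}\), with \(\mathbf {W}\), \(\mathbf {G}\), \(\mathbf {g}\) invented for the demo:

```python
import numpy as np

# Hedged illustration of the Lagrange-multiplier argument in the proof
# of Lemma 11.2, on a generic instance of the same pattern:
#   minimize  eta' W eta   subject to  G eta = g,
# with W symmetric positive definite. W, G, g are demo assumptions.
rng = np.random.default_rng(2)
n, k = 5, 2
M = rng.standard_normal((n, n))
W = M @ M.T + n * np.eye(n)          # positive definite weight
G = rng.standard_normal((k, n))
g = rng.standard_normal(k)

# Zeroing the gradient of L = eta'W eta + lam'(G eta - g) with respect
# to eta gives eta = -W^{-1} G' lam / 2; the constraint then fixes lam,
# yielding eta* = W^{-1} G' (G W^{-1} G')^{-1} g with minimal loss
# g' (G W^{-1} G')^{-1} g.
Winv_Gt = np.linalg.solve(W, G.T)
S = G @ Winv_Gt                      # G W^{-1} G'
eta = Winv_Gt @ np.linalg.solve(S, g)

assert np.allclose(G @ eta, g)       # constraint (the analogue of 11.21)
assert np.isclose(eta @ W @ eta, g @ np.linalg.solve(S, g))
```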
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Söderström, T. (2018). Total Least Squares. In: Errors-in-Variables Methods in System Identification. Communications and Control Engineering. Springer, Cham. https://doi.org/10.1007/978-3-319-75001-9_11
Print ISBN: 978-3-319-75000-2
Online ISBN: 978-3-319-75001-9