Abstract
The classical paradigm for data modeling invariably assumes that an input/output partitioning of the data is given a priori. For linear models, this paradigm leads to the computational problem of approximately solving an overdetermined system of linear equations. Even the simplest data fitting examples, however, suggest that an a priori fixed input/output partitioning of the data may be inadequate: (1) the fitting criterion often depends implicitly on the choice of the input and output variables, which may be arbitrary, and (2) the resulting computational problems are ill-conditioned in certain cases. An alternative paradigm for data modeling, sometimes referred to as the behavioral paradigm, does not assume an a priori fixed input/output partitioning of the data. The corresponding computational problems involve approximating a matrix constructed from the data by another matrix of lower rank. The chapter proceeds with a review of applications in systems and control, signal processing, computer algebra, chemometrics, psychometrics, machine learning, and computer vision that lead to low rank approximation problems. Finally, generic methods for solving low rank approximation problems are outlined.
The very art of mathematics is to say the same thing another way.
Unknown
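The contrast drawn in the abstract can be made concrete with a small numerical sketch. The snippet below is illustrative only (the data and the chosen rank are hypothetical, not taken from the chapter): it first fixes an input/output partition and solves an overdetermined system in the least-squares sense, then treats all variables symmetrically and computes a best low rank approximation of the data matrix by truncating the singular value decomposition, in the spirit of the Eckart–Young result.

```python
import numpy as np

# Classical paradigm: fix an input/output partition and solve the
# overdetermined system A x ~ b in the least-squares sense.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 2))        # "input" data (hypothetical)
x_true = np.array([1.0, -2.0])
b = A @ x_true + 0.01 * rng.standard_normal(10)  # noisy "output" data
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Behavioral paradigm: no partition is imposed. Stack all variables
# into one data matrix D and approximate it by a matrix of lower rank,
# here rank 2, via the truncated SVD (Eckart-Young).
D = np.column_stack([A, b])
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = 2                                    # model complexity (assumed)
D_hat = (U[:, :r] * s[:r]) @ Vt[:r, :]   # optimal rank-r approximation
```

An exact rank-2 matrix `D_hat` corresponds to an exact linear model for the modified data; the least-squares solution, by contrast, corrects only the column `b` and leaves `A` untouched, which is where the asymmetry criticized above enters.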
Copyright information
© 2012 Springer-Verlag London Limited
About this chapter
Cite this chapter
Markovsky, I. (2012). Introduction. In: Low Rank Approximation. Communications and Control Engineering. Springer, London. https://doi.org/10.1007/978-1-4471-2227-2_1
DOI: https://doi.org/10.1007/978-1-4471-2227-2_1
Publisher Name: Springer, London
Print ISBN: 978-1-4471-2226-5
Online ISBN: 978-1-4471-2227-2
eBook Packages: Engineering (R0)