Abstract
We design algorithms for two online variance minimization problems. Specifically, in every trial t our algorithms get a covariance matrix \({\mathcal{C}}_t\) and try to select a parameter vector \({\boldsymbol{w}}_t\) such that the total variance over a sequence of trials \(\sum_t {\boldsymbol{w}}_t^{\top}{\mathcal{C}}_t{\boldsymbol{w}}_t\) is not much larger than the total variance of the best parameter vector \({\boldsymbol{u}}\) chosen in hindsight. Two parameter spaces are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy. In the second case the algorithm maintains a mixture of unit vectors which is represented as a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each case we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
Supported by NSF grant CCR 9821087. Some of this work was done while visiting National ICT Australia in Canberra.
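To make the two updates concrete, the following is a minimal Python sketch of one trial of each algorithm, consistent with the abstract's description but not taken from the paper: the learning rate eta, the function names, the uniform starting points, and the synthetic covariance stream are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm

def eg_variance_step(w, C, eta):
    # Exponentiated Gradient step on the probability simplex.
    # The trial loss is w^T C w, with gradient 2 C w for symmetric C;
    # EG multiplies each weight by exp(-eta * gradient) and renormalizes.
    grad = 2.0 * (C @ w)
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

def meg_variance_step(W, C, eta):
    # Matrix Exponentiated Gradient step on density matrices
    # (symmetric positive definite, trace 1).  The trial loss is
    # tr(W C); the update exponentiates in the matrix-log domain
    # and renormalizes the trace.
    W_new = expm(logm(W) - eta * C)
    W_new = (W_new + W_new.T) / 2.0   # symmetrize against round-off
    return W_new / np.trace(W_new)

# Illustrative usage on a random stream of covariance matrices.
rng = np.random.default_rng(0)
n, eta = 4, 0.1                 # dimension and learning rate (arbitrary)
w = np.ones(n) / n              # uniform start on the simplex
W = np.eye(n) / n               # maximally mixed density matrix
for _ in range(100):
    A = rng.standard_normal((n, n))
    C = A @ A.T / n             # random symmetric PSD covariance
    w = eg_variance_step(w, C, eta)
    W = meg_variance_step(W, C, eta)
print(w)                        # simplex weights after 100 trials
print(np.linalg.eigvalsh(W))    # spectrum of the learned density matrix
```

In the simplex case the weights concentrate on low-variance directions, matching the portfolio-risk interpretation; in the density-matrix case the mixture concentrates on directions of small eigenvalue, matching the online minimum-eigenvector interpretation.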
References
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Bousquet, O., Warmuth, M.K.: Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research 3, 363–396 (2002)
Cesa-Bianchi, N., Mansour, Y., Stoltz, G.: Improved second-order bounds for prediction with expert advice. In: Auer, P., Meir, R. (eds.) COLT 2005. LNCS, vol. 3559, pp. 217–232. Springer, Heidelberg (2005)
Cover, T.M.: Universal portfolios. Mathematical Finance 1(1), 1–29 (1991)
Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
Cristianini, N., Shawe-Taylor, J., Kandola, J.: Spectral kernel methods for clustering. In: Advances in Neural Information Processing Systems 14, pp. 649–655. MIT Press, Cambridge (2001)
Helmbold, D., Schapire, R.E., Singer, Y., Warmuth, M.K.: On-line portfolio selection using multiplicative updates. Mathematical Finance 8(4), 325–347 (1998)
Herbster, M., Warmuth, M.K.: Tracking the best expert. Machine Learning 32(2), 151–178 (1998)
Kivinen, J., Warmuth, M.K.: Additive versus exponentiated gradient updates for linear prediction. Information and Computation 132(1), 1–64 (1997)
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information and Computation 108(2), 212–261 (1994)
Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, Cambridge (2000)
Tsuda, K., Rätsch, G., Warmuth, M.K.: Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research 6, 995–1018 (2005)
Warmuth, M.K.: Bayes rule for density matrices. In: Advances in Neural Information Processing Systems 18 (NIPS 2005). MIT Press, Cambridge (2005)
Warmuth, M.K., Kuzmin, D.: A Bayesian probability calculus for density matrices (March 2006) (unpublished manuscript)
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Warmuth, M.K., Kuzmin, D. (2006). Online Variance Minimization. In: Lugosi, G., Simon, H.U. (eds) Learning Theory. COLT 2006. Lecture Notes in Computer Science, vol. 4005. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11776420_38
DOI: https://doi.org/10.1007/11776420_38
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-35294-5
Online ISBN: 978-3-540-35296-9