Abstract
As described in Sect. 1.1, one of the core problems in music recommendation over time, and in content recommendation in general, is that the distributional properties of music, and people's musical tastes, change over time in ways that are nontrivial to track and predict. This challenge is a special case of concept drift: a change, either abrupt or gradual, in the underlying structure of the data. Concept drift is a common and fundamental problem in machine learning, so solutions designed to combat drift in content recommendation, even ones specific to that setting, are broadly applicable to ML research. In this chapter I focus on the application of reinforcement learning approaches to handling concept drift, specifically through model retraining, both in its general context and directly with respect to tracking people's temporal listening habits as reflected in a real-world dataset.
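To make the framing concrete, the decision of when to retrain a model under drift can be cast as a sequential decision problem. The sketch below is a minimal, illustrative toy, not the chapter's actual formulation: the drift dynamics, the fixed retraining cost, and the five-state discretization are all assumptions chosen only to show tabular Q-learning choosing between "keep" and "retrain".

```python
import random

random.seed(0)

ACTIONS = ("keep", "retrain")
RETRAIN_COST = 0.3  # assumed fixed cost charged whenever we retrain

def bucket(acc):
    """Discretize model accuracy into five states, 0 (worst) .. 4 (best)."""
    return min(int(acc * 5), 4)

def step(acc, action):
    """Toy drift dynamics: accuracy decays each step unless we retrain."""
    if action == "retrain":
        return 0.95, 0.95 - RETRAIN_COST  # fresh model; reward minus cost
    acc = max(acc - 0.05, 0.2)            # gradual concept drift
    return acc, acc                       # reward = current accuracy

def train(episodes=500, horizon=40, alpha=0.2, gamma=0.9, eps=0.1):
    """Tabular Q-learning over (accuracy bucket, action) pairs."""
    Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        acc = 0.95
        for _ in range(horizon):
            s = bucket(acc)
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            acc, r = step(acc, a)
            s2 = bucket(acc)
            # Standard Q-learning update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
    return Q

Q = train()
```

With these toy dynamics the learned values tend to favor keeping a fresh model and retraining once accuracy has drifted low, which is the qualitative behavior a drift-aware model manager should exhibit.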
Notes
1. In the literature this method is typically abbreviated as "DDM", but to avoid confusion with the Drift-Diffusion Model used in Chap. 5, which is also abbreviated as DDM, I use the abbreviation DDetM for the Gama et al. algorithm instead.
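The Gama et al. method monitors a classifier's online error rate and flags drift when that rate rises significantly above its historical minimum. Below is a minimal sketch of its decision rule, assuming a stream of binary errors and the paper's two-sigma warning and three-sigma drift thresholds; the 30-example burn-in is a common practical choice rather than part of the rule itself.

```python
import math

class DriftDetector:
    """DDetM-style detector: track the running error rate p and its
    standard deviation s = sqrt(p*(1-p)/n), remember the minimum of p+s,
    and signal when the current p+s rises too far above that minimum."""

    def __init__(self):
        self.n = 0                    # examples seen since last drift
        self.errors = 0               # misclassifications seen
        self.p_min = float("inf")     # best (lowest) p observed
        self.s_min = float("inf")     # s at the time of that minimum

    def update(self, error):
        """Feed one 0/1 error; return 'ok', 'warning', or 'drift'."""
        self.n += 1
        self.errors += int(error)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if self.n < 30:               # burn-in before statistics stabilize
            return "ok"
        if p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s
        if p + s >= self.p_min + 3 * self.s_min:
            # Concept drift detected: the caller should retrain the model,
            # and the detector restarts from scratch.
            self.n = self.errors = 0
            self.p_min = self.s_min = float("inf")
            return "drift"
        if p + s >= self.p_min + 2 * self.s_min:
            return "warning"
        return "ok"
```

For example, a stream with a stable 10% error rate produces no alarm, while a jump to a much higher error rate triggers "drift" within a few dozen examples.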
References
F. Schroff, D. Kalenichenko, J. Philbin, FaceNet: a unified embedding for face recognition and clustering, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 815–823
C.A. Gomez-Uribe, N. Hunt, The Netflix recommender system: algorithms, business value, and innovation. ACM Trans. Manag. Inf. Syst. 6(4), 13 (2015)
M. Chiosi, B. Freeman, AT&T's SDN controller implementation based on OpenDaylight. OpenDaylight Summit, 7 (2015)
G. Widmer, M. Kubat, Learning in the presence of concept drift and hidden contexts. Mach. Learn. 23(1), 69–101 (1996)
A. Tsymbal, The problem of concept drift: definitions and related work. Comput. Sci. Dep. Trinity College Dublin 106, 58 (2004)
I. Žliobaitė, Learning under concept drift: an overview. arXiv:1010.4784 (2010)
J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, A. Bouchachia, A survey on concept drift adaptation. ACM Comput. Surv. (CSUR) 46(4), 44 (2014)
R. Klinkenberg, T. Joachims, Detecting concept drift with support vector machines, in ICML (2000), pp. 487–494
J. Gama, P. Medas, G. Castillo, P. Rodrigues, Learning with drift detection, in Brazilian Symposium on Artificial Intelligence (Springer, 2004), pp. 286–295
D. Brzezinski, J. Stefanowski, Reacting to different types of concept drift: the accuracy updated ensemble algorithm. IEEE Trans. Neural Netw. Learn. Syst. 25(1), 81–94 (2014)
I. Frías-Blanco, J. del Campo-Ávila, G. Ramos-Jiménez, R. Morales-Bueno, A. Ortiz-Díaz, Y. Caballero-Mota, Online and non-parametric drift detection methods based on Hoeffding’s bounds. IEEE Trans. Knowl. Data Eng. 27(3), 810–823 (2015)
L.L. Minku, X. Yao, DDD: a new ensemble approach for dealing with concept drift. IEEE Trans. Knowl. Data Eng. 24(4), 619–633 (2012)
J. Kivinen, A.J. Smola, R.C. Williamson, Online learning with kernels. IEEE Trans. Signal Process. 52(8), 2165–2176 (2004)
P. Ruvolo, E. Eaton, ELLA: an efficient lifelong learning algorithm. ICML 1(28), 507–515 (2013)
S. Singh, R.L. Lewis, A.G. Barto, J. Sorg, Intrinsically motivated reinforcement learning: an evolutionary perspective. IEEE Trans. Auton. Ment. Dev. 2(2), 70–82 (2010)
M.B. Ring, Continual learning in reinforcement environments. Ph.D. thesis, University of Texas at Austin, 1994
M.E. Taylor, P. Stone, Transfer learning for reinforcement learning domains: a survey. J. Mach. Learn. Res. 10(1), 1633–1685 (2009)
L. Torrey, J. Shavlik, Transfer learning. Handb. Res. Mach. Learn. Appl. Trends: Algorithms Methods Tech. 1, 242 (2009)
S.J. Pan, Q. Yang, A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)
E. Liebman, E. Zavesky, P. Stone, A stitch in time: autonomous model management via reinforcement learning, in Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems (2018)
R.S. Sutton, A.G. Barto, Introduction to Reinforcement Learning, 1st edn. (MIT Press, Cambridge, MA, USA, 1998)
C.-S. Chow, J.N. Tsitsiklis, An optimal multigrid algorithm for discrete-time stochastic control (1989)
W.G. Cochran, Sampling Techniques (Wiley, New York, 1977)
G.J. Gordon, Stable function approximation in dynamic programming, in Proceedings of the Twelfth International Conference on Machine Learning (1995), pp. 261–268
A. Jansson, C. Raffel, T. Weyde, This is my jam–data dump
T. Bertin-Mahieux, D.P. Ellis, B. Whitman, P. Lamere, The million song dataset, in ISMIR, vol. 2, (2011), p. 10
S. Lawrence, C.L. Giles, A.C. Tsoi, A.D. Back, Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997)
C. Szepesvári, R. Munos, Finite time bounds for sampling based fitted value iteration, in Proceedings of the 22nd International Conference on Machine Learning (ACM, 2005), pp. 880–887
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Liebman, E. (2020). Algorithms for Tracking Changes in Preference Distributions. In: Sequential Decision-Making in Musical Intelligence. Studies in Computational Intelligence, vol 857. Springer, Cham. https://doi.org/10.1007/978-3-030-30519-2_4
Print ISBN: 978-3-030-30518-5
Online ISBN: 978-3-030-30519-2
eBook Packages: Intelligent Technologies and Robotics (R0)