Abstract
While machine learning (ML) works very well in many domains, as the performance of self-driving cars demonstrates, fully automated ML methods in complex domains carry the risk of modeling artifacts. Biomedicine is one example of such a complex domain: here we are confronted with high-dimensional, probabilistic, and incomplete data sets. In such problem settings it can be advantageous not to dispense with human domain knowledge, but instead to combine human intelligence with ML.
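To make this coupling of human intelligence and ML concrete, the following is a minimal sketch of one common interactive pattern, pool-based active learning with uncertainty sampling: the learner repeatedly queries the instance it is least certain about, and a human expert supplies the label. The toy data, the `expert_label` stand-in for the human, and the uncertainty-sampling strategy are illustrative assumptions, not the specific method of this article.

```python
# Sketch: human-in-the-loop learning via pool-based active learning.
# A simulated expert answers the learner's queries; in a real iML
# setting this would be an interactive domain expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool: two Gaussian blobs in 2-D.
X_pool = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
                    rng.normal(1.0, 1.0, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)  # ground truth, known to the "expert"

def expert_label(i: int) -> int:
    """Stand-in for the human in the loop who labels one queried instance."""
    return int(y_true[i])

# Seed with one labelled example per class; the rest stays unlabelled.
labelled = {0: expert_label(0), 100: expert_label(100)}
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

model = LogisticRegression()
for _ in range(20):  # query budget: 20 expert interactions
    idx = list(labelled)
    model.fit(X_pool[idx], [labelled[i] for i in idx])
    # Uncertainty sampling: query the instance closest to the decision boundary.
    proba = model.predict_proba(X_pool[unlabelled])[:, 1]
    query = unlabelled[int(np.argmin(np.abs(proba - 0.5)))]
    labelled[query] = expert_label(query)  # the human answers the query
    unlabelled.remove(query)

print(f"accuracy after 20 expert queries: {model.score(X_pool, y_true):.2f}")
```

The design point of the sketch is that the expert labels only the 20 instances the learner asks about, rather than the whole pool; this is one simple way human domain knowledge can steer an otherwise automated learner.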
Cite this article
Holzinger, A. Interactive Machine Learning (iML). Informatik Spektrum 39, 64–68 (2016). https://doi.org/10.1007/s00287-015-0941-6