Learning with Missing or Incomplete Data
The problem of learning with missing or incomplete data has received considerable attention in the literature [6,10,13,21,23]. The reasons for missing data are manifold: missing input feature values may result from sensor failures in engineering applications or from the deliberate withholding of information in medical questionnaires, while missing labels arise from a lack of the solved (labelled) cases required by supervised learning algorithms. Although such problems are interesting from both a practical and a theoretical point of view, very few pattern recognition techniques can deal with missing values in a straightforward and efficient manner. This stands in sharp contrast to the very efficient way in which humans deal with unknown data, performing various pattern recognition tasks given only a subset of input features or a few labelled reference cases.
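As a point of reference for the techniques discussed later, the simplest and most widespread way of dealing with missing feature values is imputation. The sketch below (an illustrative example, not a method from this chapter) shows mean imputation, where each missing entry is replaced by the mean of the observed values in its feature column:

```python
# Minimal illustration of mean imputation for missing input feature
# values. Missing entries are marked with None; each is replaced by the
# mean of the observed values in the same feature column.

def impute_means(data):
    """Replace None entries with the per-column mean of observed values."""
    n_features = len(data[0])
    means = []
    for j in range(n_features):
        observed = [row[j] for row in data if row[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if row[j] is None else row[j]
             for j in range(n_features)]
            for row in data]

# Three samples with two features; one value missing in each column.
samples = [[1.0, 4.0], [None, 8.0], [3.0, None]]
print(impute_means(samples))  # → [[1.0, 4.0], [2.0, 8.0], [3.0, 6.0]]
```

While easy to implement, this treats each feature independently and discards any correlation between features, which is precisely what the more sophisticated model-based approaches (e.g. the EM approach of [11]) try to exploit.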
In the context of pattern recognition or classification systems, the problems of missing labels and missing features are very often treated separately.
The availability (or not) of labels determines the type of learning algorithm that can be used, and has led to the well-known split into supervised, unsupervised and, more recently, hybrid/semi-supervised classes of learning algorithms.
Supervised learning algorithms commonly enable the design of robust, well-performing classifiers. Unfortunately, in many real-world applications labelling the data is costly and thus possible only to a limited extent. Unlabelled data, on the other hand, is often available in large quantities, but a classifier built using unsupervised learning is likely to perform worse than its supervised counterpart. Interest in mixed supervised and unsupervised learning is thus a natural consequence of this state of affairs, and various approaches have been discussed in the literature [2,5,10,12,14,15,18,19]. Our experimental results have shown that, when supported by unlabelled samples, much less labelled data is generally required to build a classifier without compromising classification performance. If only a very limited amount of labelled data is available, results based on a random selection of labelled samples show high variability, and the performance of the final classifier depends more on how reliable the labelled samples are than on the use of additional unlabelled data. This points to an interesting trade-off between the information content of the observed data (in this case, the available labels) and the impact achievable by employing sophisticated data processing algorithms, a trade-off we revisit when discussing approaches to missing feature values.
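One common mixed supervised/unsupervised scheme that exploits unlabelled samples in the way described above is self-training: a classifier fitted on the few labelled samples repeatedly assigns pseudo-labels to the unlabelled points it is most confident about and retrains on the enlarged set. The sketch below is an illustration of this general idea, not the authors' method; the synthetic data and the nearest-centroid classifier are assumptions made for the example:

```python
# Self-training sketch: a nearest-centroid classifier is trained on a
# small labelled set, then iteratively absorbs the closest (most
# confident) unlabelled point with a pseudo-label and retrains.

def centroids(points, labels):
    """Per-class mean of the labelled points (nearest-centroid 'training')."""
    cents = {}
    for c in set(labels):
        cls = [p for p, l in zip(points, labels) if l == c]
        cents[c] = tuple(sum(x) / len(cls) for x in zip(*cls))
    return cents

def predict(cents, p):
    """(squared distance, label) of the closest centroid; the distance
    serves as an inverse confidence score."""
    return min((sum((a - b) ** 2 for a, b in zip(p, cents[c])), c)
               for c in cents)

def self_train(labelled, labels, unlabelled, rounds=3):
    pts, lbs, pool = list(labelled), list(labels), list(unlabelled)
    for _ in range(rounds):
        if not pool:
            break
        cents = centroids(pts, lbs)
        # pseudo-label the single most confident (closest) pooled point
        dist, lab, p = min(predict(cents, p) + (p,) for p in pool)
        pool.remove(p)
        pts.append(p)
        lbs.append(lab)
    return centroids(pts, lbs)

# Two well-separated clusters, only one labelled sample per class.
labelled = [(0.0, 0.0), (10.0, 10.0)]
labels = [0, 1]
unlabelled = [(1.0, 1.0), (9.0, 9.0), (0.5, 0.0)]
final = self_train(labelled, labels, unlabelled)
print(predict(final, (8.0, 8.0))[1])  # → 1
```

The example also exposes the fragility noted above: with a single labelled sample per class, one unreliable label shifts the initial centroids and every subsequent pseudo-label, so the quality of the labelled seed matters more than the quantity of unlabelled data.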
Keywords: Incomplete Data, Unsupervised Learning, Label Data, Unlabelled Data, Neural Information Processing System
- 3. Budka, M., Gabrys, B.: Electrostatic Field Classifier for Deficient Data. In: The Sixth International Conference on Computer Recognition Systems, Jelenia Góra, Poland, May 25-28 (2009a)
- 4. Budka, M., Gabrys, B.: Mixed supervised and unsupervised learning from incomplete data using a physical field model. Natural Computing (submitted, 2009)
- 5. Dara, R., Kremer, S., Stacey, D.: Clustering unlabeled data with SOMs improves classification of labeled real-world data. In: Proceedings of the World Congress on Computational Intelligence, WCCI (2002)
- 11. Ghahramani, Z., Jordan, M.: Supervised learning from incomplete data via an EM approach. In: Cowan, J.D., Tesauro, G., Alspector, J. (eds.) Advances in Neural Information Processing Systems, vol. 6, pp. 120-127 (1994)
- 12. Goldman, S., Zhou, Y.: Enhancing supervised learning with unlabeled data. In: Proceedings of ICML (1998)
- 13. Graham, J., Cumsille, P., Elek-Fisk, E.: Methods for handling missing data. Handbook of Psychology 2, 87-114 (2003)
- 14. Kothari, R., Jain, V.: Learning from labeled and unlabeled data. In: Proceedings of the 2002 International Joint Conference on Neural Networks, IJCNN 2002, vol. 3 (2002)
- 15. Mitchell, T.: The role of unlabeled data in supervised learning. In: Proceedings of the Sixth International Colloquium on Cognitive Science (1999)
- 16. Nauck, D., Kruse, R.: Learning in neuro-fuzzy systems with symbolic attributes and missing values. In: Proceedings of the International Conference on Neural Information Processing, ICONIP 1999, Perth, pp. 142-147 (1999)
- 18. Nigam, K., Ghani, R.: Understanding the behavior of co-training. In: Proceedings of KDD 2000 Workshop on Text Mining (2000)
- 24. Tresp, V., Ahmad, S., Neuneier, R.: Training neural networks with deficient data. Advances in Neural Information Processing Systems 6, 128-135 (1994)