Automatic Subclasses Estimation for a Better Classification with HNNP

  • Ruth Janning
  • Carlotta Schatten
  • Lars Schmidt-Thieme
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8502)

Abstract

Although much of today's artificial intelligence and especially machine learning research concerns big data, there are still many real-world problems for which only small and noisy data sets exist. Applying learning models to such data may not lead to desirable results. Hence, in a former work we proposed a hybrid neural network plait (HNNP) for improving the classification performance on such data. To address the high intraclass variance in the investigated data, we used manually estimated subclasses for the HNNP approach. In this paper we investigate, on the one hand, the impact of using those subclasses instead of the main classes for HNNP and, on the other hand, an approach for automatic subclass estimation for HNNP, to avoid expensive and time-consuming manual labeling. The results of experiments with two different real data sets show that using automatically estimated subclasses for HNNP delivers the best classification performance, outperforming single state-of-the-art neural networks as well as ensemble methods.
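The abstract does not specify which algorithm performs the automatic subclass estimation; the details are in the paper itself. Purely as an illustration of the general idea, one could cluster the samples of each main class and treat the resulting cluster indices as subclass labels, which the plait networks are then trained on. The sketch below is my own minimal assumption (plain k-means with a deterministic farthest-point initialisation), not the authors' method:

```python
import numpy as np

def estimate_subclasses(X, y, n_sub=2, n_iter=20):
    """Split each main class into n_sub subclasses by clustering its
    samples and return refined labels: main_class * n_sub + cluster_id.
    Illustrative only; the HNNP paper's actual procedure may differ."""
    y_sub = np.empty_like(y)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Xc = X[idx].astype(float)
        # farthest-point initialisation: deterministic and well spread
        centroids = Xc[[0]]
        while len(centroids) < n_sub:
            d = ((Xc[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).min(1)
            centroids = np.vstack([centroids, Xc[[d.argmax()]]])
        # standard Lloyd iterations within this class only
        for _ in range(n_iter):
            dist = ((Xc[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for k in range(n_sub):
                if (assign == k).any():
                    centroids[k] = Xc[assign == k].mean(0)
        y_sub[idx] = c * n_sub + assign
    return y_sub
```

After training on the refined labels `y_sub`, predictions can be mapped back to the main classes by integer division (`y_sub // n_sub`), so the original classification task is unchanged.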

Keywords

Image classification · Subclasses · Convolutional neural network · Multilayer perceptron · Hybrid neural network · Small noisy data

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Ruth Janning (1)
  • Carlotta Schatten (1)
  • Lars Schmidt-Thieme (1)

  1. Information Systems and Machine Learning Lab (ISMLL), University of Hildesheim, Hildesheim, Germany