
Experimental Evaluation of a Trainable Scribble Recognizer for Calligraphic Interfaces

  • César F. Pimentel
  • Manuel J. da Fonseca
  • Joaquim A. Jorge
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2390)

Abstract

This paper describes a trainable recognizer for hand-drawn sketches based on geometric features. We compare three different learning algorithms and select the best approach in terms of cost-performance ratio. The algorithms employ classic machine-learning techniques combined with a clustering approach. Experimental results show competitive performance (95.1%) relative to the previously developed non-trainable recognizer (95.8%), with clear gains in flexibility and expandability. In addition, we study both classification and learning performance as the number of examples per class increases.
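As a rough illustration of the classic techniques compared in the paper, a trainable recognizer of this kind can be sketched as nearest-neighbor classification over geometric feature vectors. The feature names and training data below are hypothetical, chosen only to show the mechanism, and do not reflect the paper's actual feature set or chosen algorithm:

```python
import math

def euclidean(a, b):
    # Distance between two feature vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nn_classify(train, query):
    # train: list of (feature_vector, label) pairs.
    # Returns the label of the training example nearest to the query.
    return min(train, key=lambda ex: euclidean(ex[0], query))[1]

# Hypothetical geometric features for each scribble: ratios derived
# from its convex hull and enclosing rectangle (illustrative values).
train = [
    ((0.9, 0.8), "rectangle"),
    ((0.7, 0.5), "triangle"),
    ((0.8, 0.78), "circle"),
]

print(nn_classify(train, (0.88, 0.79)))
```

Training such a recognizer then amounts to adding labeled feature vectors per gesture class, which is what makes the approach flexible and expandable compared to a hand-tuned, non-trainable recognizer.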

Keywords

Recognition Rate · Near Neighbor · Solid Shape · Inductive Decision · Gesture Class


References

  1. Bishop, C. M. Neural Networks for Pattern Recognition. Oxford, England: Oxford University Press, 1995.
  2. Boyce, J. E. and Dobkin, D. P. Finding Extremal Polygons. SIAM Journal on Computing 14(1), 134–147, Feb. 1985.
  3. Cestnik, B. Estimating probabilities: A crucial task in machine learning. Proceedings of the Ninth European Conference on Artificial Intelligence (pp. 147–149). London: Pitman, 1990.
  4. Cover, T. and Hart, P. Nearest neighbor pattern classification. IEEE Transactions on Information Theory 13, 21–27, 1967.
  5. Dasarathy, B. V. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. Los Alamitos, CA: IEEE Computer Society Press, 1991.
  6. Domingos, P. and Pazzani, M. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. Proceedings of the 13th International Conference on Machine Learning (pp. 105–112), 1996.
  7. Duda, R. and Hart, P. Pattern Classification and Scene Analysis. New York: John Wiley & Sons, 1973.
  8. Fayyad, U. M. On the induction of decision trees for multiple concept learning (Ph.D. dissertation). EECS Department, University of Michigan, 1991.
  9. Fayyad, U. M. and Irani, K. B. Multi-interval discretization of continuous-valued attributes for classification learning. In R. Bajcsy (Ed.), Proceedings of the 13th International Joint Conference on Artificial Intelligence (pp. 1022–1027). Morgan Kaufmann, 1993.
  10. Fonseca, M. J. and Jorge, J. A. Experimental Evaluation of an On-line Scribble Recognizer. Pattern Recognition Letters 22(12), 1311–1319, 2001.
  11. Freeman, H. and Shapira, R. Determining the minimum-area encasing rectangle for an arbitrary closed curve. Communications of the ACM 18(7), 409–413, July 1975.
  12. Apte, A., Vo, V. and Kimura, T. D. Recognizing Multistroke Geometric Shapes: An Experimental Evaluation. In Proceedings of UIST'93. Atlanta, GA, 1993.
  13. Littlestone, N. and Warmuth, M. The weighted majority algorithm (Technical report UCSC-CRL-91-28). Computer Engineering and Information Sciences Dept., Univ. of California Santa Cruz, Santa Cruz, CA, 1991.
  14. Littlestone, N. and Warmuth, M. The weighted majority algorithm. Information and Computation 108, 212–261, 1994.
  15. Malerba, D., Esposito, F. and Semeraro, G. A further comparison of simplification methods for decision tree induction. In D. Fisher and H. Lenz (Eds.), Learning from Data: AI and Statistics. Springer-Verlag, 1995.
  16. Mingers, J. An empirical comparison of pruning methods for decision-tree induction. Machine Learning 4(2), 227–243, 1989.
  17. O'Rourke, J. Computational Geometry in C, 2nd edition. Cambridge University Press, 1998.
  18. Quinlan, J. R. Discovering rules by induction from large collections of examples. In D. Michie (Ed.), Expert Systems in the Micro-electronic Age. Edinburgh Univ. Press, 1979.
  19. Quinlan, J. R. Learning efficient classification procedures and their application to chess end games. In R. S. Michalski, J. G. Carbonell and T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach. San Mateo, CA: Morgan Kaufmann, 1983.
  20. Quinlan, J. R. Induction of decision trees. Machine Learning 1(1), 81–106, 1986.
  21. Quinlan, J. R. Rule induction with statistical data: a comparison with multiple regression. Journal of the Operational Research Society 38, 347–352, 1987.
  22. Quinlan, J. R. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann, 1993.
  23. Rubine, D. H. Specifying Gestures by Example. SIGGRAPH'91 Conference Proceedings, ACM, 1991.
  24. Tappert, C. C., Suen, C. Y. and Wakahara, T. The state of the art in on-line handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(8), 787–807, 1990.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • César F. Pimentel (1)
  • Manuel J. da Fonseca (1)
  • Joaquim A. Jorge (1)

  1. Departamento de Engenharia Informática, IST/UTL, Lisboa, Portugal
