Classification with Positive and Negative Equivalence Constraints: Theory, Computation and Human Experiments

  • Rubi Hammer
  • Tomer Hertz
  • Shaul Hochstein
  • Daphna Weinshall
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4729)

Abstract

We tested the efficiency of category learning when participants are provided only with pairs of objects, known to belong either to the same class (Positive Equivalence Constraints or PECs) or to different classes (Negative Equivalence Constraints or NECs). Our results in a series of cognitive experiments show dramatic differences in the usability of these two information building blocks, even when they are chosen to contain the same amount of information. Specifically, PECs seem to be used intuitively and quite efficiently, while people are rarely able to gain much information from NECs (unless they are specifically directed toward the best way of using them). Tests with a constrained EM clustering algorithm under similar conditions also show superior performance with PECs. We conclude with a theoretical analysis, showing (by analogy to graph cut problems) that the satisfaction of NECs is computationally intractable, whereas the satisfaction of PECs is straightforward. Furthermore, we show that PECs convey more information than NECs by relating their information content to the number of different graph colorings. These inherent differences between PECs and NECs may explain why people readily use PECs, while many of them need specific directions to be able to use NECs effectively.
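
The computational asymmetry claimed in the abstract can be made concrete. The sketch below (in Python; not the authors' code, and the function names are illustrative) shows the two sides of it: satisfying a set of PECs amounts to taking their transitive closure, which union-find computes in near-linear time, whereas for NECs only *verification* is easy; *finding* a labeling with k classes that satisfies them is exactly graph k-coloring, which is NP-hard for k ≥ 3.

```python
# A minimal sketch of the PEC/NEC asymmetry discussed in the abstract.
# Not the authors' code; satisfy_pecs/necs_satisfied are illustrative names.

def satisfy_pecs(n_points, pecs):
    """Merge points linked by positive equivalence constraints into chunklets.

    Satisfying PECs is just transitive closure: union-find runs in
    near-linear time in the number of points and constraints.
    """
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in pecs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    groups = {}
    for p in range(n_points):
        groups.setdefault(find(p), set()).add(p)
    return list(groups.values())


def necs_satisfied(labels, necs):
    """Verify a labeling against negative equivalence constraints.

    Verification is trivial, but *finding* a labeling with k classes that
    satisfies all NECs is graph k-coloring, NP-hard for k >= 3.
    """
    return all(labels[i] != labels[j] for i, j in necs)


if __name__ == "__main__":
    # Six points; PECs chain 0-1-2 together and pair 3-4.
    print(satisfy_pecs(6, [(0, 1), (1, 2), (3, 4)]))  # [{0, 1, 2}, {3, 4}, {5}]
    print(necs_satisfied([0, 0, 0, 1, 1, 0], [(0, 3), (2, 4)]))  # True
```

The chunklets returned by satisfy_pecs are exactly the groups a constrained clustering algorithm (such as the constrained EM used in the paper) can treat as atomic units; no comparably cheap preprocessing exists for NECs.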

Keywords

Categorization · Similarity · Rule learning · Expectation Maximization

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Rubi Hammer (1, 2)
  • Tomer Hertz (1, 3)
  • Shaul Hochstein (1, 2)
  • Daphna Weinshall (1, 3)
  1. Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem
  2. Neurobiology Department, Institute of Life Sciences, The Hebrew University of Jerusalem
  3. School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
