Learning Lexicographic Preference Models

Abstract

Lexicographic preference models (LPMs) are one of the simplest yet most commonly used preference representations. In this chapter, we formally define LPMs and present learning algorithms for mining these models from data. In particular, we study a greedy algorithm that produces a “best guess” LPM that is consistent with the observations and two voting-based algorithms that approximate the target using the votes of a collection of consistent LPMs. In addition to our theoretical analyses of these algorithms, we empirically evaluate their performance under different conditions. Our results show that voting algorithms outperform the greedy method when the data is noise-free. The dominance is more significant when the training data is scarce. However, the performance of the voting algorithms quickly decays with even a little noise, whereas the greedy algorithm is more robust. Inspired by this result, we adapt one of the voting methods to consider the amount of noise in an environment and empirically show that the modified voting algorithm performs as well as the greedy approach even with noisy observations. We also introduce an intuitive yet powerful learning bias to prune some of the possible LPMs. We demonstrate how this learning bias can be used with variable and model voting and show that the learning bias improves learning performance significantly, especially when the number of observations is small.
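To make the setting concrete, the following Python sketch shows an LPM over binary attributes and a greedy learner that builds a consistent model from observed (winner, loser) pairs. This is an illustrative reconstruction in the spirit of the "best guess" greedy algorithm the chapter studies, not the chapter's exact procedure; the function names, the binary-attribute encoding, and the alphabetical tie-breaking are assumptions made for the example.

```python
# Illustrative sketch: an LPM is a ranking of variables plus a preferred
# value for each; the first variable (in rank order) on which two objects
# differ decides the comparison.

def lpm_prefers(a, b, order, preferred):
    """True if object a is preferred to object b under the LPM."""
    for var in order:                      # most important variable first
        if a[var] != b[var]:
            return a[var] == preferred[var]
    return False                           # a and b agree on every variable

def greedy_learn(observations, variables):
    """Greedily build an LPM consistent with (winner, loser) pairs.

    observations: list of (winner, loser) tuples of dicts var -> 0 or 1.
    Returns (order, preferred); stops early if no consistent variable
    remains (e.g., because the data is noisy).
    """
    order, preferred = [], {}
    remaining = list(observations)
    unused = set(variables)
    while remaining and unused:
        chosen = None
        for var in sorted(unused):         # deterministic tie-breaking
            for val in (0, 1):
                # (var, val) is consistent if no loser beats its own
                # winner on var
                if not any(l[var] == val and w[var] != val
                           for w, l in remaining):
                    chosen = (var, val)
                    break
            if chosen:
                break
        if chosen is None:
            break                          # no consistent "best guess" left
        var, val = chosen
        order.append(var)
        preferred[var] = val
        unused.remove(var)
        # keep only pairs the chosen variable does not already decide
        remaining = [(w, l) for w, l in remaining if w[var] == l[var]]
    return order, preferred
```

Given noise-free data, any model the sketch returns ranks each chosen variable only after verifying that no remaining observation contradicts it, so the output is consistent with all observations it managed to explain.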


Notes

  1. In our empirical results, we also update the ranks when the prediction was correct but not unanimous. This produces a heuristic speed-up without detracting from the worst-case guarantees.


Acknowledgements

This work was supported by the Defense Advanced Research Projects Agency and the U.S. Air Force through BBN Technologies Corp. under contract number FA8650-06-C-7606. Approved for Public Release, Distribution Unlimited.

Author information


Correspondence to Fusun Yaman.



Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Yaman, F., Walsh, T.J., Littman, M.L., desJardins, M. (2010). Learning Lexicographic Preference Models. In: Fürnkranz, J., Hüllermeier, E. (eds) Preference Learning. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14125-6_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14124-9

  • Online ISBN: 978-3-642-14125-6

  • eBook Packages: Computer Science, Computer Science (R0)
