
Constraint Learning: An Appetizer

Chapter in: Reasoning Web. Explainable Artificial Intelligence

Part of the book series: Lecture Notes in Computer Science, volume 11810

Abstract

Constraints are ubiquitous in artificial intelligence and operations research. They appear in logical problems like propositional satisfiability, in discrete problems like constraint satisfaction, and in full-fledged mathematical optimization tasks. Constraint learning enters the picture when the structure or the parameters of the constraint satisfaction/optimization problem to be solved are (partially) unknown and must be inferred from data. The required supervision may come from offline sources or be gathered by interacting with human domain experts and decision makers. With these lecture notes, we offer a brief but self-contained introduction to the core concepts of constraint learning, while sampling from the diverse spectrum of constraint learning methods, covering classic strategies and more recent advances. We also discuss links to other areas of AI and machine learning, including concept learning, learning from queries, structured-output prediction, (statistical) relational learning, preference elicitation, and inverse optimization.
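To make the abstract's core idea concrete, here is a minimal illustrative sketch (not code from the chapter; the constraint language and helper names are hypothetical): given assignments known to be feasible, a learner can retain exactly those candidate constraints from a fixed language that every feasible example satisfies, in the spirit of generate-and-test constraint acquisition.

```python
# Toy constraint acquisition: eliminate candidate constraints violated by
# feasible (positive) examples. Candidate language, variable names, and
# helper functions are illustrative assumptions, not the chapter's method.

from itertools import combinations

def make_candidates(variables):
    """Candidate language: pairwise relations between variables."""
    relations = {
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        "<=": lambda a, b: a <= b,
        ">=": lambda a, b: a >= b,
    }
    return [
        (u, op, v, fn)
        for u, v in combinations(variables, 2)
        for op, fn in relations.items()
    ]

def learn_constraints(variables, positives):
    """Keep every candidate that all feasible examples satisfy."""
    return [
        (u, op, v)
        for u, op, v, fn in make_candidates(variables)
        if all(fn(ex[u], ex[v]) for ex in positives)
    ]

# Assignments known to be feasible.
positives = [
    {"x": 1, "y": 2, "z": 2},
    {"x": 0, "y": 3, "z": 3},
]
learned = learn_constraints(["x", "y", "z"], positives)
print(learned)  # surviving candidates, e.g. ('x', '<=', 'y'), ('y', '==', 'z')
```

With more feasible examples the surviving set shrinks toward the true constraints; real acquisition systems additionally exploit infeasible examples and membership queries to prune faster.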


Notes

  1. One should of course keep in mind that many constraint satisfaction/optimization problems can be NP-hard, so obtaining a solution in an acceptable time may still be tricky; see below for some examples.

  2. There are other technical assumptions on the distribution of the examples, which we ignore for simplicity.

  3. Notice that the optimal configuration may not be unique, and that all optima have the same score.

  4. For technical reasons, the distortion is often assumed to lie in the range [0, 1]; see [32].

  5. There exist several variants of structured-output SVM; here we opt for the simplest one. See the references for more details.

References

  1. Alur, R., et al.: Syntax-guided synthesis. In: 2013 Formal Methods in Computer-Aided Design, pp. 1–8. IEEE (2013)
  2. Alur, R., Singh, R., Fisman, D., Solar-Lezama, A.: Search-based program synthesis. Commun. ACM 61(12), 84–93 (2018)
  3. Andrews, R., Diederich, J., Tickle, A.B.: Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowl.-Based Syst. 8(6), 373–389 (1995)
  4. Angluin, D.: Queries and concept learning. Mach. Learn. 2(4), 319–342 (1988)
  5. Barrett, C.W., Sebastiani, R., Seshia, S.A., Tinelli, C.: Satisfiability modulo theories. Handb. Satisf. 185, 825–885 (2009)
  6. Bartlett, M., Cussens, J.: Integer linear programming for the Bayesian network structure learning problem. Artif. Intell. 244, 258–271 (2017)
  7. Beldiceanu, N., Simonis, H.: A model seeker: extracting global constraint models from positive examples. In: Milano, M. (ed.) CP 2012. LNCS, pp. 141–157. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33558-7_13
  8. Bessiere, C., Coletta, R., Koriche, F., O’Sullivan, B.: A SAT-based version space algorithm for acquiring constraint satisfaction problems. In: Gama, J., Camacho, R., Brazdil, P.B., Jorge, A.M., Torgo, L. (eds.) ECML 2005. LNCS (LNAI), vol. 3720, pp. 23–34. Springer, Heidelberg (2005). https://doi.org/10.1007/11564096_8
  9. Bessiere, C., et al.: New approaches to constraint acquisition. In: Bessiere, C., De Raedt, L., Kotthoff, L., Nijssen, S., O’Sullivan, B., Pedreschi, D. (eds.) Data Mining and Constraint Programming. LNCS (LNAI), vol. 10101, pp. 51–76. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-50137-6_3
  10. Bistarelli, S., Montanari, U., Rossi, F.: Semiring-based constraint logic programming: syntax and semantics. ACM Trans. Program. Lang. Syst. (TOPLAS) 23(1), 1–29 (2001)
  11. Bottou, L., Curtis, F.E., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223–311 (2018)
  12. Boutilier, C., Regan, K., Viappiani, P.: Simultaneous elicitation of preference features and utility. In: Twenty-Fourth AAAI Conference on Artificial Intelligence (2010)
  13. Bunel, R.R., Turkaslan, I., Torr, P., Kohli, P., Mudigonda, P.K.: A unified view of piecewise linear neural network verification. In: Advances in Neural Information Processing Systems, pp. 4790–4799 (2018)
  14. Daumé III, H., Marcu, D.: Learning as search optimization: approximate large margin methods for structured prediction. In: Proceedings of the 22nd International Conference on Machine Learning, pp. 169–176. ACM (2005)
  15. De Raedt, L., Passerini, A., Teso, S.: Learning constraints from examples. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
  16. Dong, C., Chen, Y., Zeng, B.: Generalized inverse optimization through online learning. In: Advances in Neural Information Processing Systems, pp. 86–95 (2018)
  17. Gulwani, S., Hernandez-Orallo, J., Kitzelmann, E., Muggleton, S.H., Schmid, U., Zorn, B.: Inductive programming meets the real world. Commun. ACM 58(11), 90–99 (2015)
  18. Guns, T., Dries, A., Tack, G., Nijssen, S., De Raedt, L.: MiningZinc: a modeling language for constraint-based mining. In: Twenty-Third International Joint Conference on Artificial Intelligence (2013)
  19. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. ACM SIGKDD Explor. Newsl. 11(1), 10–18 (2009)
  20. Hanneke, S., et al.: Theory of disagreement-based active learning. Found. Trends Mach. Learn. 7(2–3), 131–309 (2014)
  21. Hansen, P., Jaumard, B.: Algorithms for the maximum satisfiability problem. Computing 44(4), 279–303 (1990)
  22. He, H., Daumé III, H., Eisner, J.M.: Learning to search in branch and bound algorithms. In: Advances in Neural Information Processing Systems, pp. 3293–3301 (2014)
  23. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133–142. ACM (2002)
  24. Joachims, T., Hofmann, T., Yue, Y., Yu, C.N.: Predicting structured objects with support vector machines. Commun. ACM 52(11), 97 (2009)
  25. King, R.D., et al.: The automation of science. Science 324(5923), 85–89 (2009)
  26. Kolb, S., Paramonov, S., Guns, T., De Raedt, L.: Learning constraints in spreadsheets and tabular data. Mach. Learn. 106, 1–28 (2017)
  27. Kolb, S., Teso, S., Passerini, A., De Raedt, L.: Learning SMT(LRA) constraints using SMT solvers. In: IJCAI, pp. 2333–2340 (2018)
  28. Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge (2009)
  29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)
  30. Lombardi, M., Milano, M.: Boosting combinatorial problem modeling with machine learning. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 5472–5478. AAAI Press (2018)
  31. Louche, U., Ralaivola, L.: From cutting planes algorithms to compression schemes and active learning. In: 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2015)
  32. McAllester, D.: Generalization bounds and consistency. In: Predicting Structured Data, pp. 247–261 (2007)
  33. Mitchell, T.M.: Generalization as search. Artif. Intell. 18(2), 203–226 (1982)
  34. Muggleton, S., De Raedt, L.: Inductive logic programming: theory and methods. J. Logic Program. 19/20, 629–679 (1994)
  35. O’Sullivan, B.: Automated modelling and solving in constraint programming. In: Twenty-Fourth AAAI Conference on Artificial Intelligence (2010)
  36. Pawlak, T.P., Krawiec, K.: Automatic synthesis of constraints from examples using mixed integer linear programming. Eur. J. Oper. Res. 261(3), 1141–1157 (2017)
  37. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
  38. Perkins, S., Lacker, K., Theiler, J.: Grafting: fast, incremental feature selection by gradient descent in function space. J. Mach. Learn. Res. 3, 1333–1356 (2003)
  39. Pigozzi, G., Tsoukias, A., Viappiani, P.: Preferences in artificial intelligence. Ann. Math. Artif. Intell. 77(3–4), 361–401 (2016)
  40. Platt, J.: Sequential minimal optimization: a fast algorithm for training support vector machines (1998)
  41. Rossi, F., Sperduti, A.: Acquiring both constraint and solution preferences in interactive constraint systems. Constraints 9(4), 311–332 (2004)
  42. Rossi, F., Van Beek, P., Walsh, T.: Handbook of Constraint Programming. Elsevier, Amsterdam (2006)
  43. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2001)
  44. Sebastiani, R., Tomasi, S.: Optimization modulo theories with linear rational costs. ACM Trans. Comput. Log. (TOCL) 16(2), 12 (2015)
  45. Settles, B.: Active learning. Synth. Lect. Artif. Intell. Mach. Learn. 6(1), 1–114 (2012)
  46. Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)
  47. Shivaswamy, P., Joachims, T.: Coactive learning. J. Artif. Intell. Res. (JAIR) 53, 1–40 (2015)
  48. Solar-Lezama, A., Tancau, L., Bodik, R., Seshia, S., Saraswat, V.: Combinatorial sketching for finite programs. ACM SIGPLAN Not. 41(11), 404–415 (2006)
  49. Sra, S., Nowozin, S., Wright, S.J.: Optimization for Machine Learning. MIT Press, Cambridge (2012)
  50. Teso, S., Dragone, P., Passerini, A.: Coactive critiquing: elicitation of preferences and features. In: AAAI (2017)
  51. Teso, S., Sebastiani, R., Passerini, A.: Structured learning modulo theories. Artif. Intell. 244, 166–187 (2017)
  52. Todd, M.J.: The many facets of linear programming. Math. Program. 91(3), 417–436 (2002)
  53. Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y.: Support vector machine learning for interdependent and structured output spaces. In: Proceedings of the Twenty-First International Conference on Machine Learning, p. 104. ACM (2004)
  54. Valiant, L.: A theory of the learnable. Commun. ACM 27, 1134–1142 (1984)
  55. Vapnik, V.: An overview of statistical learning theory. IEEE Trans. Neural Netw. 10(5), 988–999 (1999)


Acknowledgments

The author is grateful to Luc De Raedt and Andrea Passerini for many insightful discussions. These lecture notes are partially based on material co-developed by LDR, AP and the author. This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. [694980] SYNTH: Synthesising Inductive Data Models).

Author information

Correspondence to Stefano Teso.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Teso, S. (2019). Constraint Learning: An Appetizer. In: Krötzsch, M., Stepanova, D. (eds.) Reasoning Web. Explainable Artificial Intelligence. Lecture Notes in Computer Science, vol. 11810. Springer, Cham. https://doi.org/10.1007/978-3-030-31423-1_7


  • DOI: https://doi.org/10.1007/978-3-030-31423-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31422-4

  • Online ISBN: 978-3-030-31423-1

  • eBook Packages: Computer Science (R0)
