Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency

  • Don Howard
  • Ioan Muntean
Part of the Philosophical Studies Series book series (PSSP, volume 128)


Abstract

This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for the AAMA, and compares it with other models of artificial normative agency. We argue that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to read the behavior of human actors, available as collected data, and to categorize their moral behavior according to moral patterns discovered in those data. The present model is grounded in several analogies among artificial cognition, human cognition, and moral action. It is premised on the idea that moral agents should not be built on rule-following procedures, but should learn patterns from data. This idea is rarely implemented in AAMA models, although it has been suggested in the machine ethics literature (W. Wallach, C. Allen, J. Gips, and especially M. Guarini). As an agent-based model, this AAMA constitutes an alternative to the mainstream action-centric models proposed by K. Abney, M. Anderson and S. L. Anderson, R. Arkin, T. Powers, W. Wallach, among others. Moral learning and the moral development of dispositional traits play a fundamental role in cognition here. By using a combination of neural networks and evolutionary computation, called “soft computing” (H. Adeli, N. Siddique, S. Mitra, L. Zadeh), the present model reaches a level of autonomy and complexity that illustrates “moral particularism” and a form of virtue ethics for machines, grounded in active learning. An example derived from the “lifeboat metaphor” (G. Hardin) and the extension of this model to the NEAT architecture (K. Stanley, R. Miikkulainen, among others) are briefly assessed.
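The soft-computing combination described above (a neural network whose weights are trained by evolutionary computation rather than by hand-coded rules) can be illustrated with a minimal neuroevolution loop. The sketch below is ours, written in Python for illustration; it is not the authors' MATLAB implementation (available from them on request), and the three input features, the labels, and all parameter values are invented for the example. A fixed-topology network maps features of a behavioral case to a "permissible" score, and a population of weight genomes is improved by truncation selection plus Gaussian mutation.

```python
import math
import random

# Tiny fixed-topology net: 3 inputs -> 4 tanh hidden units -> 1 sigmoid output.
# The weights form a flat genome evolved by an elitist mutation-only loop.
N_IN, N_HID = 3, 4
GENOME_LEN = N_IN * N_HID + N_HID  # input->hidden weights plus hidden->output weights

def forward(genome, x):
    hidden = [math.tanh(sum(genome[h * N_IN + i] * x[i] for i in range(N_IN)))
              for h in range(N_HID)]
    out = sum(genome[N_IN * N_HID + h] * hidden[h] for h in range(N_HID))
    out = max(-60.0, min(60.0, out))      # guard against exp overflow
    return 1.0 / (1.0 + math.exp(-out))   # score in (0, 1): "permissible"

def fitness(genome, cases):
    # Negative mean squared error against labeled cases (higher is better).
    return -sum((forward(genome, x) - y) ** 2 for x, y in cases) / len(cases)

def evolve(cases, pop_size=30, generations=200, sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, cases), reverse=True)
        parents = pop[: pop_size // 2]            # keep the best half (elitism)
        children = [[w + rng.gauss(0, sigma) for w in p] for p in parents]
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, cases))

# Invented training cases: features might encode (harm caused, benefit produced,
# consent given) on [0, 1]; label 1.0 = the behavior was judged permissible.
cases = [
    ((0.9, 0.1, 0.0), 0.0),
    ((0.8, 0.2, 0.0), 0.0),
    ((0.2, 0.8, 1.0), 1.0),
    ((0.1, 0.9, 1.0), 1.0),
]
best = evolve(cases)
```

After evolution, `forward(best, x)` classifies the training cases without any explicit moral rule having been written down, which is the point of the pattern-learning approach; the NEAT extension mentioned above would additionally evolve the network topology, not just the weights.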


Keywords

Artificial autonomous moral agent · Moral-behavioral patterns · Soft computing · Neural networks · Moral cognition · Moral functionalism · Lifeboat ethics · Moral particularism



Acknowledgments

We received constructive comments, helpful feedback, and encouraging thoughts at different stages of this project. We want to thank Colin Allen, Anjan Chakravartty, Patrick Foo, Ben Jantzen, Dan Lapsley, Michael Neelon, Tom Powers, Tristram McPherson, Abraham Schwab, Emil Socaciu, Wendell Wallach, an anonymous reviewer, and colleagues from the University of Notre Dame; Indiana University–Purdue University Fort Wayne; the University of North Carolina Asheville; and the University of Bucharest, Romania, who supported this project. We also want to express our gratitude to the organizers, referees, and audiences of the following events: the “Robo-philosophy 1” conference (Aarhus, Denmark, June 2014); the APA Eastern Division meeting (December 2014); the APA Central Division meeting (February 2015); the Department of Philosophy at Virginia Tech; the CCEA/ICUB Workshop on artificial morality at the University of Bucharest (June 2015); the IACAP-CEPE-INSEIT joint conference at the University of Delaware (June 2015) and the IACAP conference in Ferrara, Italy (June 2016); the Department of Philosophy at Purdue University; and the “Ethical and Moral Considerations in Non-Human Agents” (EMCAI) symposium of the AAAI held at Stanford University (March 2016). This paper is complemented by two other papers (Howard and Muntean 2014; Howard and Muntean 2016). The MATLAB code and the data used are available upon request: please contact the corresponding author.


References

  1. Abney, K., Lin, P., & Bekey, G. (Eds.). (2011). Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.
  2. Adeli, H., & Hung, S.-L. (1994). Machine learning: Neural networks, genetic algorithms, and fuzzy systems (1st ed.). New York: Wiley.
  3. Adeli, H., & Siddique, N. (2013). Computational intelligence: Synergies of fuzzy logic, neural networks, intelligent systems and applications. Somerset: Wiley.
  4. Affenzeller, M. (2009). Genetic algorithms and genetic programming: Modern concepts and practical applications (Numerical insights, Vol. 6). Boca Raton: CRC Press.
  5. Allen, C., & Wallach, W. (2009). Moral machines: Teaching robots right from wrong. Oxford/New York: Oxford University Press.
  6. Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251–261. doi: 10.1080/09528130050111428.
  7. Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155. doi: 10.1007/s10676-006-0004-4.
  8. Allhoff, F. (2014). Risk, precaution, and nanotechnology. In B. Gordijn & A. Mark Cutter (Eds.), Pursuit of nanoethics (The international library of ethics, law and technology, Vol. 10, pp. 107–130). Dordrecht: Springer.
  9. Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.
  10. Annas, J. (2011). Intelligent virtue. Oxford: Oxford University Press.
  11. Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.
  12. Arkin, R. (2013). Lethal autonomous systems and the plight of the non-combatant. AISB Quarterly.
  13. Asimov, I. (1942). Runaround. Astounding Science Fiction, 29, 94–103.
  14. Bello, P., & Bringsjord, S. (2013). On how to build a moral machine. Topoi, 32, 251–266. doi: 10.1007/s11245-012-9129-8.
  15. Bishop, C. M. (2007). Pattern recognition and machine learning. New York: Springer.
  16. Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22, 71–85. doi: 10.1007/s11023-012-9281-3.
  17. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
  18. Brenner, T. (1998). Can evolutionary algorithms describe learning processes? Journal of Evolutionary Economics, 8, 271–283. doi: 10.1007/s001910050064.
  19. Brink, D. (1989). Moral realism and the foundations of ethics. Cambridge/New York: Cambridge University Press.
  20. Bueno, O. (2015). Belief systems and partial spaces. Foundations of Science, 21, 225–236. doi: 10.1007/s10699-015-9416-0.
  21. Bueno, O., French, S., & Ladyman, J. (2002). On representing the relationship between the mathematical and the empirical. Philosophy of Science, 69, 497–518.
  22. Calude, C. S., & Longo, G. (2016). The deluge of spurious correlations in big data. Foundations of Science, 1–18. doi: 10.1007/s10699-016-9489-4.
  23. Churchland, P. (1996). The neural representation of the social world. In L. May, M. Friedman, & A. Clark (Eds.), Minds and morals (pp. 91–108). Cambridge, MA: MIT Press.
  24. Clark, A. (2000). Making moral space: A reply to Churchland. Canadian Journal of Philosophy, 30, 307–312.
  25. Clark, A. (2001). Mindware: An introduction to the philosophy of cognitive science. New York: Oxford University Press.
  26. Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology, 7, 121–126. doi: 10.1007/s10676-006-0007-1.
  27. Coleman, K. G. (2001). Android arete: Toward a virtue ethic for computational agents. Ethics and Information Technology, 3, 247–265. doi: 10.1023/A:1013805017161.
  28. Colombo, M. (2013). Moving forward (and beyond) the modularity debate: A network perspective. Philosophy of Science, 80, 356–377.
  29. Crisp, R., & Slote, M. A. (1997). Virtue ethics (Oxford readings in philosophy). Oxford/New York: Oxford University Press.
  30. Dancy, J. (2006). Ethics without principles. Oxford/New York: Oxford University Press.
  31. Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London/New York: Routledge.
  32. Danielson, P. (Ed.). (1998a). Modeling rationality, morality, and evolution. New York: Oxford University Press.
  33. Danielson, P. (1998b). Evolutionary models of co-operative mechanisms: Artificial morality and genetic programming. In P. Danielson (Ed.), Modeling rationality, morality, and evolution. New York: Oxford University Press.
  34. Dawid, H. (2012). Adaptive learning by genetic algorithms: Analytical results and applications to economical models. Berlin: Springer.
  35. De Jong, K. A. (2006). Evolutionary computation. Cambridge: A Bradford Book/MIT Press.
  36. DeMoss, D. (1998). Aristotle, connectionism, and the morally excellent brain. In 20th WCP proceedings. Boston: Paideia Online Project.
  37. Dewey, D. (2011). Learning what to value. In J. Schmidhuber & K. Thórisson (Eds.), Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011, proceedings (pp. 309–314). Berlin/Heidelberg: Springer.
  38. Doris, J. M. (1998). Persons, situations, and virtue ethics. Noûs, 32, 504–530. doi: 10.1111/0029-4624.00136.
  39. Enemark, C. (2014). Armed drones and the ethics of war: Military virtue in a post-heroic age (War, conduct and ethics). London: Routledge.
  40. Evins, R., Vaidyanathan, R., & Burgess, S. (2014). Multi-material compositional pattern-producing networks for form optimisation. In A. I. Esparcia-Alcázar & A. M. Mora (Eds.), Applications of evolutionary computation. Berlin/Heidelberg: Springer.
  41. Flanagan, O. J. (2007). The really hard problem: Meaning in a material world. Cambridge: MIT Press.
  42. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds & Machines, 14, 349–379.
  43. Franklin, S., & Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, & N. R. Jennings (Eds.), Intelligent agents III: Agent theories, architectures, and languages (pp. 21–35). Berlin/Heidelberg: Springer.
  44. Galliott, J. (2015). Military robots: Mapping the moral landscape. Surrey: Ashgate Publishing Ltd.
  45. Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge: A Bradford Book/MIT Press.
  46. Gauthier, D. (1987). Morals by agreement. Oxford: Oxford University Press.
  47. Gips, J. (1995). Towards the ethical robot. In K. M. Ford, C. N. Glymour, & P. J. Hayes (Eds.), Android epistemology. Menlo Park: AAAI Press/MIT Press.
  48. Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Berlin: Springer.
  49. Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101–124. doi: 10.1080/1047840X.2012.651387.
  50. Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems, 21, 22–28. doi: 10.1109/MIS.2006.76.
  51. Guarini, M. (2011). Computational neural modeling and the philosophy of ethics: Reflections on the particularism-generalism debate. In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.
  52. Guarini, M. (2012). Conative dimensions of machine ethics: A defense of duty. IEEE Transactions on Affective Computing, 3, 434–442. doi: 10.1109/T-AFFC.2012.27.
  53. Guarini, M. (2013a). Case classification, similarities, spaces of reasons, and coherences. In M. Araszkiewicz & J. Šavelka (Eds.), Coherence: Insights from philosophy, jurisprudence and artificial intelligence (pp. 187–201). Springer.
  54. Guarini, M. (2013b). Moral case classification and the nonlocality of reasons. Topoi, 32, 267–289. doi: 10.1007/s11245-012-9130-2.
  55. Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.
  56. Hardin, G. J. (1974). Lifeboat ethics. Bioscience, 24, 361–368.
  57. Hoffman, M. (1991). Empathy, social cognition, and moral action. In W. M. Kurtines, J. Gewirtz, & J. L. Lamb (Eds.), Handbook of moral behavior and development: Volume 1: Theory. Hoboken: Psychology Press.
  58. Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence (2nd ed., Bradford Books, 1992). University of Michigan Press.
  59. Horgan, T., & Timmons, M. (2009). Analytical moral functionalism meets moral twin earth. In I. Ravenscroft (Ed.), Minds, ethics, and conditionals. Oxford: Oxford University Press.
  60. Howard, D., & Muntean, I. (2014). Artificial moral agents: Creative, autonomous, social. An approach based on evolutionary computation. In Proceedings of Robo-Philosophy. Frontiers of AI and applications. Amsterdam: IOS Press.
  61. Howard, D., & Muntean, I. (2016). A minimalist model of the artificial autonomous moral agent (AAMA). In The 2016 AAAI spring symposium series SS-16-04: Ethical and moral considerations in non-human agents (pp. 217–225). The Association for the Advancement of Artificial Intelligence.
  62. Human Rights Watch. (2013). US: Ban fully autonomous weapons. Human Rights Watch.
  63. Hursthouse, R. (1999). On virtue ethics. Oxford/New York: Oxford University Press.
  64. Jackson, F. (1998). From metaphysics to ethics: A defence of conceptual analysis. Oxford/New York: Clarendon Press.
  65. Jackson, F., & Pettit, P. (1995). Moral functionalism and moral motivation. The Philosophical Quarterly, 45, 20–40. doi: 10.2307/2219846.
  66. Johnson, M. (2012). There is no moral faculty. Philosophical Psychology, 25, 409–432.
  67. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  68. Kitcher, P. (2011). The ethical project. Cambridge: Harvard University Press.
  69. Koza, J. R. (1992). Genetic programming: On the programming of computers by means of natural selection. Cambridge: MIT Press.
  70. Kuorikoski, J., & Pöyhönen, S. (2013). Understanding nonmodular functionality: Lessons from genetic algorithms. Philosophy of Science, 80, 637–649. doi: 10.1086/673866.
  71. Ladyman, J., Lambert, J., & Wiesner, K. (2012). What is a complex system? European Journal for Philosophy of Science, 3, 33–67. doi: 10.1007/s13194-012-0056-8.
  72. Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science, 17, 391–444.
  73. Lewis, D. K. (1983). Philosophical papers. New York: Oxford University Press.
  74. Litt, A., Eliasmith, C., & Thagard, P. (2008). Neural affective decision theory: Choices, brains, and emotions. Cognitive Systems Research, 9, 252–273. doi: 10.1016/j.cogsys.2007.11.001.
  75. Liu, H., Gegov, A., & Cocea, M. (2016). Rule based systems for big data (Studies in big data, Vol. 13). Cham: Springer International Publishing.
  76. McDowell, J. (1979). Virtue and reason. The Monist, 62, 331–350.
  77. Michael, J. (2014). Towards a consensus about the role of empathy in interpersonal understanding. Topoi, 33, 157–172. doi: 10.1007/s11245-013-9204-9.
  78. Mitchell, T. M. (1997). Machine learning (1st ed.). New York: McGraw-Hill Science/Engineering/Math.
  79. Mitchell, S. D. (2012). Unsimple truths: Science, complexity, and policy (Reprint ed.). Chicago: University of Chicago Press.
  80. Mitra, S., Das, R., & Hayashi, Y. (2011). Genetic networks and soft computing. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 8, 94–107. doi: 10.1109/TCBB.2009.39.
  81. Monsó, S. (2015). Empathy and morality in behaviour readers. Biology and Philosophy, 30, 671–690. doi: 10.1007/s10539-015-9495-x.
  82. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21, 18–21.
  83. Muntean, I. (2014). Computation and scientific discovery? A bio-inspired approach. In H. Sayama, J. Rieffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Artificial Life 14: Proceedings of the fourteenth international conference on the synthesis and simulation of living systems. New York: The MIT Press.
  84. Murphy, K. P. (2012). Machine learning: A probabilistic perspective. Cambridge: MIT Press.
  85. Nichols, S. (2001). Mindreading and the cognitive architecture underlying altruistic motivation. Mind & Language, 16, 425–455. doi: 10.1111/1468-0017.00178.
  86. Nickles, T. (2009). The strange story of scientific method. In J. Meheus & T. Nickles (Eds.), Models of discovery and creativity (1st ed., pp. 167–208). Springer.
  87. Nussbaum, M. C. (1986). The fragility of goodness: Luck and ethics in Greek tragedy and philosophy. Cambridge: Cambridge University Press.
  88. Pereira, L. M., & Saptawijaya, A. (2016). Programming machine ethics. Springer International Publishing.
  89. Railton, P. (1986). Moral realism. The Philosophical Review, 95, 163–207. doi: 10.2307/2185589.
  90. Rawls, J. (1958). Justice as fairness. The Philosophical Review, 67, 164–194. doi: 10.2307/2182612.
  91. Regan, T. (1983). The case for animal rights. Berkeley: University of California Press.
  92. Richards, D. (2014). Evolving morphologies with CPPN-NEAT and a dynamic substrate. In ALIFE 14: Proceedings of the fourteenth international conference on the synthesis and simulation of living systems (pp. 255–262). New York: The MIT Press. doi: 10.7551/978-0-262-32621-6-ch042.
  93. Robinson, Z., Maley, C. J., & Piccinini, G. (2015). Is consciousness a spandrel? Journal of the American Philosophical Association, 1, 365–383. doi: 10.1017/apa.2014.10.
  94. Russell, D. C., & Miller, C. B. (2015). How are virtues acquired? In M. Alfano (Ed.), Current controversies in virtue theory (1st ed.). New York: Routledge.
  95. Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence: Moral AI? In J. Romportl, E. Zackova, & J. Kelemen (Eds.), Beyond artificial intelligence. Springer International Publishing.
  96. Schmidt, M., & Lipson, H. (2009). Distilling free-form natural laws from experimental data. Science, 324, 81–85. doi: 10.1126/science.1165893.
  97. Shalizi, C. R., & Crutchfield, J. P. (2001). Computational mechanics: Pattern and prediction, structure and simplicity. Journal of Statistical Physics, 104, 817–879.
  98. Sharkey, N. E. (2012). The evitability of autonomous robot warfare. International Review of the Red Cross, 94, 787–799. doi: 10.1017/S1816383112000732.
  99. Sidgwick, H. (1930). The methods of ethics. London: Macmillan and Co., Ltd.
  100. Sørensen, M. H. (2004). The genealogy of biomimetics: Half a century’s quest for dynamic IT. In A. J. Ijspeert, M. Murata, & N. Wakamiya (Eds.), Biologically inspired approaches to advanced information technology. Lausanne/Berlin/New York: Springer.
  101. Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, 10, 99–127. doi: 10.1162/106365602320169811.
  102. Stanley, K. O., D’Ambrosio, D. B., & Gauci, J. (2009). A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15, 185–212.
  103. Strawser, B. J. (2013). Guest editor’s introduction: The ethical debate over cyberwar. Journal of Military Ethics, 12, 1–3. doi: 10.1080/15027570.2013.782639.
  104. Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge: Harvard University Press.
  105. Suárez, M., & Cartwright, N. (2008). Theories: Tools versus models. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 39, 62–81.
  106. Sun, R. (2013). Moral judgment, human motivation, and neural networks. Cognitive Computation, 5, 566–579. doi: 10.1007/s12559-012-9181-0.
  107. Suthaharan, S. (2016). Machine learning models and algorithms for big data classification. Boston: Springer US.
  108. Swanton, C. (2003). Virtue ethics. Oxford: Oxford University Press.
  109. Tomassini, M. (1995). A survey of genetic algorithms. Annual Reviews of Computational Physics, 3, 87–118.
  110. Tonkens, R. (2009). A challenge for machine ethics. Minds & Machines, 19, 421–438. doi: 10.1007/s11023-009-9159-1.
  111. Tonkens, R. (2012). Out of character: On the creation of virtuous machines. Ethics and Information Technology, 14, 137–149. doi: 10.1007/s10676-012-9290-1.
  112. Trappl, R. (Ed.). (2013). Your virtual butler. Berlin/Heidelberg: Springer.
  113. Trappl, R. (2015). A construction manual for robots’ ethical systems: Requirements, methods, implementations. Cham: Springer.
  114. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
  115. Turing, A. (1992). Mechanical intelligence. In D. C. Ince (Ed.), The collected works of A. M. Turing: Mechanical intelligence. Amsterdam/New York: North-Holland.
  116. Wallach, W. (2014). Ethics, law, and governance in the development of robots. In R. Sandler (Ed.), Ethics and emerging technologies (pp. 363–379). New York: Palgrave.
  117. Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our control. New York: Basic Books.
  118. Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science, 2, 454–485. doi: 10.1111/j.1756-8765.2010.01095.x.
  119. Watson, R. A., & Szathmáry, E. (2016). How can evolution learn? Trends in Ecology & Evolution. doi: 10.1016/j.tree.2015.11.009.
  120. Wingspread participants. (1998). Wingspread statement on the precautionary principle.
  121. Yao, X. (1999). Evolving artificial neural networks. Proceedings of the IEEE, 87, 1423–1447. doi: 10.1109/5.784219.
  122. Zadeh, L. A. (1994). Fuzzy logic, neural networks, and soft computing. Communications of the ACM, 37, 77–84. doi: 10.1145/175247.175255.
  123. Zangwill, N. (2000). Against analytic moral functionalism. Ratio: An International Journal of Analytic Philosophy, 13, 275–286.
  124. Zenker, F., & Gärdenfors, P. (Eds.). (2015). Applications of conceptual spaces. Cham: Springer.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. The Reilly Center for Science, Technology, and Values, University of Notre Dame, Notre Dame, USA
  2. Master of Liberal Arts and Sciences Program, University of North Carolina Asheville, Asheville, USA
