Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency

Chapter in Philosophy and Computing

Part of the book series: Philosophical Studies Series (PSSP, volume 128)

Abstract

This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for the AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to read the behavior of human actors, available as collected data, and to categorize their moral behavior according to moral patterns found in these data. The present model is grounded in several analogies among artificial cognition, human cognition, and moral action. It is premised on the idea that moral agents should be built not on rule-following procedures, but on learning patterns from data. This idea is rarely implemented in AAMA models, although it has been suggested in the machine ethics literature (W. Wallach, C. Allen, J. Gips, and especially M. Guarini). As an agent-based model, this AAMA constitutes an alternative to the mainstream action-centric models proposed by K. Abney, M. Anderson and S. Anderson, R. Arkin, T. Powers, W. Wallach, i.a. Moral learning and the moral development of dispositional traits play a fundamental role here in cognition. By using a combination of neural networks and evolutionary computation, called “soft computing” (H. Adeli, N. Siddique, S. Mitra, L. Zadeh), the present model reaches a certain level of autonomy and complexity, which illustrates well “moral particularism” and a form of virtue ethics for machines, grounded in active learning. An example derived from the “lifeboat metaphor” (G. Hardin) and the extension of this model to the NEAT architecture (K. Stanley, R. Miikkulainen, i.a.) are briefly assessed.

Notes

  1.

    One can assume that moral agents can be of a human nature without being individuals (groups, companies, or any decision-making structure, e.g. the military chain of command), or can possess individuality without being human (animals, artificial agents, supernatural beings, etc.).

  2.

    There are two areas of research germane to the present analysis: “ethics of emergent technologies” and “machine ethics.” The former has focused mainly on the moral implications, both intrinsic and extrinsic, of the dissemination of technologies in society. In most cases, the ethical responsibility belongs to humans (or to human groups or communities): they are the moral decision-makers and the responsibility bearers. The field of “robo-ethics,” for example, focuses on the use of robots, the societal concerns raised by their deployment, and their impact on our system of values (Wallach 2014). The ethics of technology has to be complemented by another perspective on human-machine interaction, which has become relevant only relatively recently: the ethics of the actions and decisions of machines when they interact autonomously, mainly with humans. This relatively new area of study is “machine ethics,” which raises new and interesting ethical issues beyond the problems of emergent technologies (Allen and Wallach 2009; Abney et al. 2011; Anderson and Anderson 2011; Gunkel 2012; Trappl 2013, 2015; Wallach 2014; Pereira and Saptawijaya 2016). It inquires into the ethics of our relation with technology when humans are not the sole moral agents.

  3.

    We prefer to talk here about complexity as a measure of a machine’s interaction with humans, the environment, etc., and not as a feature of the machine per se. The problems with technology, Wallach writes, “often arise out of the interaction of those components [specific technology, people, institutions, environment, etc.] with other elements of the sociotechnical system” (Wallach 2015, 34).

  4.

    See the proposal to ban fully autonomous weapons by Human Rights Watch (Human Rights Watch 2013).

  5.

    The literature refers to the autonomous moral agent and the artificial moral agent under the acronym AMA. AAMA refers here to the “artificial autonomous moral agent.”

  6.

    For a comprehensive formal definition, albeit a somewhat dated one, see (Franklin and Graesser 1997).

  7.

    This model does not explore unsupervised learning, deep learning, or reinforcement learning of morality, which are all equally promising. Supervised learning is less exploratory in nature: it is more concerned with discovering patterns in data than with exploring the data. See recent developments on machine learning in Big Data (Bishop 2007; Murphy 2012; Liu et al. 2016; Suthaharan 2016).

  8.

    Aristotle, Thomas Aquinas, Descartes, Pascal, Kant, and some contractarians such as Hobbes, Gauthier, and Rawls talked about the morality of non-human or non-individual agents. A threshold concept is probably the idea of “just institutions,” upheld by several contractarians during the Enlightenment period, who showed why social structures can sometimes be “just” or “fair.” When Rawls introduced the two principles of justice, he added that the term “person” on some occasions means “human individuals,” but may refer also to “nations, provinces, business firms, churches, teams, and so on. The principles of justice apply in all these instances, although there is a certain logical priority to the case of human individuals” (Rawls 1958, 166). Rawls, Gauthier, and, as we’ll see, Danielson require moral agents to be rational. We have in mind here the “constrained maximization theory,” in which being a rational agent means that the moral constraint must be conditional upon others’ co-operation (Gauthier 1987).

  9.

    A version of the precautionary principle is: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically” (Wingspread participants 1998). For a discussion of the precautionary principle in the context of nanotechnology, see Clarke (2005) and Allhoff (2014).

  10.

    Some would simply replace mind-reading with a weaker condition such as empathy or sympathy (Hoffman 1991; Nichols 2001).

  11.

    Behavior-reading agents include animals, non-individual agents, and, as we argue here, some AAMAs.

  12.

    The concepts of “moral judgment,” “moral reasoning,” and, even more so, “moral justification” are harder to capture in any AAMA model. One requirement on an AAMA is to justify its actions by reconstructing its decisions. Justification can be an a posteriori rationalization of moral actions.

  13.

    In many areas of science, models use a “functional stratification” in which the explanatory power of a model operates at one level only. Investigation at one level “black-boxes” everything that happens at lower levels; the lower levels are reduced to a functional specification. See M. Strevens for a comprehensive discussion of “boxing” (black-boxing and gray-boxing) in science (2008).

  14.

    This paragraph is inspired by debates over moral realism and moral naturalism (Railton 1986; Brink 1989).

  15.

    Danielson talks about moral functionalism, but because of the central role rationality plays in his argument, we designate his view here as “rational moral functionalism.”

  16.

    Here is a definition of this type of supervenience: “for any two worlds that are descriptively exactly alike in other respects, in the attitudes that people have, in the effects to which actions lead, and so on, if x in one is descriptively exactly like y in the other, then x and y are exactly alike in moral respects” (Jackson and Pettit 1995, 22).

  17.

    Moral properties are role-filling properties and, in analytical moral functionalism, moral terms are similar to theoretical terms in D. Lewis’ sense (Lewis 1983).

  18.

    This artificial morality is responsive. Moral agents “must limit the class of agents with which they share cooperative benefits” to other rational agents. The adaptive moral agents in Danielson can include corporations, artificial agents, and institutions, insofar as they display a moralized rational interaction. He also proves that moral constraints (e.g., “keeping a promise”) are rational and can be implemented in artificial agents.

  19.

    Another way to put it is to compare egoism with utilitarianism, which are both rational, and to conclude that rationality alone cannot be a complete guide to morality. This is, in Sidgwick’s words, the “dualism of practical reason,” and it constitutes for some the “profoundest problem of ethics,” one that affects utilitarianism in particular (Sidgwick 1930, 386). Jackson talks about naïve and mature folk morality: the latter stage is obtained after one weeds out the inconsistencies and counterintuitive parts of the former (Jackson 1998, 133).

  20.

    Moral functionalism per se is compatible with the main directions in ethics: there is a rule-based functionalism, a utilitarian functionalism, and a “rights” functionalism. For example, when one postulates a strong deterministic connection between the input and the output, with no exceptions, a “rule functionalism” is at work.

  21.

    We assume here that models can be driven by theories, not that models can be derived from theories; this distinction has been advanced in the literature on the autonomy or independence of models from theories. See the relevant debate in (Bueno et al. 2002; Suárez and Cartwright 2008).

  22.

    The analogy between the moral development of a human agent and the general process of learning, developing skills, efficiently finding the optimal solution, or choosing between different alternatives is a convoluted topic: a deep analysis of each corresponding analogy is left for another occasion.

  23.

    Both skills and virtues are practical abilities, and, in line with Aristotle’s ethics, “exercising a virtue involves practical reasoning of a kind that can illuminatingly be compared to the kind of reasoning we find in someone exercising a practical skill” (Annas 2011, 16).

  24.

    This section reiterates the lines of argument presented by D. Howard and I. Muntean in a previous work (2014).

  25.

    For the hard/soft distinction in computing, see Sect. 7.8 below.

  26.

    See other approaches which partially adopt some of these hypotheses: (Gips 1995; Danielson 1998b; Coleman 2001; Guarini 2006).

  27.

    We use here the intuition of fragmented space in relation to the mathematical concept of separability.

  28.

    This whole approach can be integrated into the framework of “conceptual space models” advanced by P. Gärdenfors and collaborators (Gärdenfors 2000; Bueno 2015; Zenker and Gärdenfors 2015).

  29.

    A single-layer network is composed of one layer of input units and one layer of output units, such that all output units are directly connected to the input units. A multi-layer network has hidden layers between the input units and the output units, such that output units are not directly connected to input units, but only to hidden units. (A minimal sketch follows below.)
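
    The following minimal sketch (in Python with NumPy, not the chapter’s own MATLAB implementation) illustrates the contrast drawn in this note: in the single-layer case the output units read the inputs directly, while in the multi-layer case a hidden layer mediates between them. All dimensions and weight values are illustrative assumptions.

```python
# Illustrative sketch only: a single-layer vs. a multi-layer feed-forward network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def single_layer(x, w_out):
    # Output units are directly connected to the input units.
    return sigmoid(w_out @ x)

def multi_layer(x, w_hidden, w_out):
    # Output units see only the hidden units, which interface the inputs.
    hidden = sigmoid(w_hidden @ x)
    return sigmoid(w_out @ hidden)

rng = np.random.default_rng(0)
x = rng.random(4)                                    # a hypothetical 4-feature input
print(single_layer(x, rng.standard_normal((2, 4))))  # 2 outputs, no hidden layer
print(multi_layer(x, rng.standard_normal((3, 4)),    # 3 hidden units
                  rng.standard_normal((2, 3))))      # 2 outputs
```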

  30.

    The approach that partially uses H-2 and explicitly uses H-3 is (Guarini 2006). Guarini trained a number of artificial neural networks on a set of problems of the form “X killed Y in such and such circumstances” and managed to infer (predict) moral behaviors for another set of test cases. Guarini’s conclusion runs somewhat against moral particularism, because some type of moral principle is needed (including some “contributory principles”), but it also shows that particularism is stronger than it seems. The simple recurrent artificial neural network makes decisions without exceptionless moral principles, although it needs some “contributory” principles, which play a central role, Guarini argues, in moral reclassification.

  31.

    Another supportive schema is to take EC as fundamental and let an NN generate the initial population of the EC (the EC + NN scheme). The “collaborative combination” is when EC and NN work at the same time to solve a problem (see the sketch below).
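
    As a toy illustration of such a collaborative NN + EC combination (the “soft computing” route mentioned in the abstract), the sketch below evolves the weights of a small fixed-topology network with a rudimentary mutation-and-selection loop. The XOR task, the population size, and the mutation rate are illustrative assumptions, not the chapter’s actual setup.

```python
# Illustrative sketch only: an evolutionary loop searching the weight space of a
# small two-layer network (a stand-in for the chapter's NN + EC combination).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])                   # XOR as a toy target

N_HIDDEN, POP, GENERATIONS, SIGMA = 3, 40, 300, 0.3
N_GENES = N_HIDDEN * 2 + N_HIDDEN                    # hidden weights + output weights

def forward(genome, x):
    w_hidden = genome[: N_HIDDEN * 2].reshape(N_HIDDEN, 2)
    w_out = genome[N_HIDDEN * 2 :]
    hidden = np.tanh(w_hidden @ x)
    return 1.0 / (1.0 + np.exp(-(w_out @ hidden)))

def fitness(genome):
    preds = np.array([forward(genome, x) for x in X])
    return -np.mean((preds - y) ** 2)                # higher (less error) is better

population = rng.standard_normal((POP, N_GENES))
for _ in range(GENERATIONS):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-POP // 2:]]             # selection
    children = parents + SIGMA * rng.standard_normal(parents.shape)  # mutation
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print([round(float(forward(best, x)), 2) for x in X])
```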

  32.

    Some constraints, such as the number of hidden layers, simplicity, speed, etc., can be added as conditions on the evolutionary process (Yao 1999).

  33.

    A rule-based system can include the constraints (equalities or inequalities) among the components of the input vector.

  34.

    The intuition here is that factually impossible situations leave the agent with no options to choose from.

  35.

    In a more evolved model, the vector Ψ can encode frames or mental states of the agent. In the lifeboat example, it is used to flag the situations in which the intention is bribery, personal interest in saving a specific person, obtaining the material possessions of the passengers while disregarding their lives, etc. (An illustrative encoding is sketched below.)
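
    A purely illustrative encoding (the field names and flags below are hypothetical, not taken from the chapter) of how such a Ψ-like component might accompany a lifeboat case and filter out data produced under disqualifying intentions:

```python
# Illustrative sketch only: a Psi-like set of intention flags attached to a case.
from dataclasses import dataclass, field

@dataclass
class LifeboatCase:
    passengers: int
    capacity: int
    actor_saves: int                         # how many people the human actor H saves
    psi: dict = field(default_factory=dict)  # hypothetical intention flags

def admissible(case: LifeboatCase) -> bool:
    """Reject cases whose recorded intentions disqualify them as moral training data."""
    disqualifying = ("bribery", "personal_interest", "material_gain")
    return not any(case.psi.get(flag, False) for flag in disqualifying)

cases = [
    LifeboatCase(passengers=30, capacity=10, actor_saves=10),
    LifeboatCase(passengers=30, capacity=10, actor_saves=3,
                 psi={"bribery": True}),     # flagged: the intention is bribery
]
print([admissible(c) for c in cases])        # -> [True, False]
```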

  36.

    The intentions of H, as an actor in the model, are merely input data: they are not mapped onto any internal functions of the AAMA and have no special status.

  37.

    For more information about NEAT and its versions developed at the University of Texas at Austin, see: http://nn.cs.utexas.edu/?neat.

References

  • Abney, K., Lin, P., & Bekey, G. (Eds.). (2011). Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.

  • Adeli, H., & Hung, S.-L. (1994). Machine learning: Neural networks, genetic algorithms, and fuzzy systems (1st ed.). New York: Wiley.

  • Adeli, H., & Siddique, N. (2013). Computational intelligence: Synergies of fuzzy logic, neural networks intelligent systems and applications. Somerset: Wiley.

  • Affenzeller, M. (2009). Genetic algorithms and genetic programming: Modern concepts and practical applications. Numerical Insights v. 6. Boca Raton: CRC Press.

  • Allen, C., & Wallach, W. (2009). Moral machines: Teaching robots right from wrong. Oxford/New York: Oxford University Press.

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251–261. doi:10.1080/09528130050111428.

  • Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155. doi:10.1007/s10676-006-0004-4.

  • Allhoff, F. (2014). Risk, precaution, and nanotechnology. In B. Gordijn & A. Mark Cutter (Eds.), Pursuit of nanoethics (The international library of ethics, law and technology, Vol. 10, pp. 107–130). Dordrecht: Springer.

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge University Press.

  • Annas, J. (2011). Intelligent virtue. Oxford: Oxford University Press.

  • Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.

  • Arkin, R. (2013). Lethal autonomous systems and the plight of the non-combatant. AISB Quarterly.

  • Asimov, I. (1942). Runaround. Astounding Science Fiction, 29, 94–103.

  • Bello, P., & Bringsjord, S. (2013). On how to build a moral machine. Topoi, 32, 251–266. doi:10.1007/s11245-012-9129-8.

  • Bishop, C. M. (2007). Pattern recognition and machine learning. New York: Springer.

  • Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22, 71–85. doi:10.1007/s11023-012-9281-3.

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

  • Brenner, T. (1998). Can evolutionary algorithms describe learning processes? Journal of Evolutionary Economics, 8, 271–283. doi:10.1007/s001910050064.

  • Brink, D. (1989). Moral realism and the foundations of ethics. Cambridge/New York: Cambridge University Press.

  • Bueno, O. (2015). Belief systems and partial spaces. Foundations of Science, 21, 225–236. doi:10.1007/s10699-015-9416-0.

  • Bueno, O., French, S., & Ladyman, J. (2002). On representing the relationship between the mathematical and the empirical. Philosophy of Science, 69, 497–518.

  • Calude, C. S., & Longo, G. (2016). The deluge of spurious correlations in big data. Foundations of Science, 1–18. doi:10.1007/s10699-016-9489-4.

  • Churchland, P. (1996). The neural representation of the social world. In L. May, M. Friedman, & A. Clark (Eds.), Minds and morals (pp. 91–108). Cambridge, MA: MIT Press.

  • Clark, A. (2000). Making moral space: A reply to Churchland. Canadian Journal of Philosophy, 30, 307–312.

  • Clark, A. (2001). Mindware: An introduction to the philosophy of cognitive science. New York: Oxford University Press.

  • Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology, 7, 121–126. doi:10.1007/s10676-006-0007-1.

  • Coleman, K. G. (2001). Android arete: Toward a virtue ethic for computational agents. Ethics and Information Technology, 3, 247–265. doi:10.1023/A:1013805017161.

  • Colombo, M. (2013). Moving forward (and beyond) the modularity debate: A network perspective. Philosophy of Science, 80, 356–377.

  • Crisp, R., & Slote, M. A. (1997). Virtue ethics (Oxford readings in philosophy). Oxford/New York: Oxford University Press.

  • Dancy, J. (2006). Ethics without Principles. Oxford/New York: Oxford University Press.

  • Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London/New York: Routledge.

  • Danielson, P. (Ed.). (1998a). Modeling rationality, morality, and evolution. New York: Oxford University Press.

  • Danielson, P. (1998b). Evolutionary models of co-operative mechanisms: Artificial morality and genetic programming. In P. Danielson (Ed.), Modeling rationality, morality, and evolution. New York: Oxford University Press.

  • Dawid, H. (2012). Adaptive learning by genetic algorithms: Analytical results and applications to economical models. Berlin: Springer.

  • De Jong, K. A. (2006). Evolutionary computation. Cambridge: MIT Press: A Bradford Book.

  • DeMoss, D. (1998). Aristotle, connectionism, and the morally excellent brain. In 20th WCP proceedings. Boston: Paideia Online Project.

  • Dewey, D. (2011). Learning what to value. In J. Schmidhuber & K. Thórisson (Eds.), Artificial General Intelligence. 4th International Conference, AGI 2011 Mountain view, CA, USA, August 3–6, 2011 Proceedings (pp. 309–314). Springer Berlin Heidelberg.

  • Doris, J. M. (1998). Persons, situations, and virtue ethics. Noûs, 32, 504–530. doi:10.1111/0029-4624.00136.

  • Enemark, C. (2014). Armed drones and the ethics of war: Military virtue in a post-heroic age (War, conduct and ethics). London: Routledge.

  • Evins, R., Vaidyanathan, R., & Burgess, S. (2014). Multi-material compositional pattern-producing networks for form optimisation. In A. I. Esparcia-Alcázar & A. M. Mora (Eds.), Applications of evolutionary computation. Berlin/Heidelberg: Springer.

  • Flanagan, O. J. (2007). The really hard problem: Meaning in a material world. Cambridge: MIT Press.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds & Machines, 14, 349–379.

  • Franklin, S., & Graesser, A. (1997). Is it an agent, or just a program? a taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, & N. R. Jennings (Eds.), Intelligent agents III agent theories, architectures, and languages (pp. 21–35). Springer Berlin Heidelberg.

  • Galliott, J. (2015). Military robots: Mapping the moral landscape. Surrey: Ashgate Publishing Ltd.

  • Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought (A Bradford Book). Cambridge: MIT Press.

  • Gauthier, D. (1987). Morals by agreement. Oxford: Oxford University Press.

  • Gips, J. (1995). Towards the ethical robot. In K. M. Ford, C. N. Glymour, & P. J. Hayes (Eds.), Android epistemology. Menlo Park: AAAI Press/MIT Press.

  • Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Berlin: Springer.

  • Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101–124. doi:10.1080/1047840X.2012.651387.

  • Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems, 21, 22–28. doi:10.1109/MIS.2006.76.

  • Guarini, M. (2011). Computational neural modeling and the philosophy of ethics reflections on the particularism-generalism debate. In M. Anderson & S. L. Anderson (Eds.), Machine Ethics. Cambridge: Cambridge University Press.

  • Guarini, M. (2012). Conative dimensions of machine ethics: A defense of duty. IEEE Transactions on Affective Computing, 3, 434–442. doi:10.1109/T-AFFC.2012.27.

  • Guarini, M. (2013a). Case classification, similarities, spaces of reasons, and coherences. In M. Araszkiewicz & J. Šavelka (Eds.), Coherence: Insights from philosophy, jurisprudence and artificial intelligence (pp. 187–201). Springer.

  • Guarini, M. (2013b). Moral case classification and the nonlocality of reasons. Topoi, 32, 267–289. doi:10.1007/s11245-012-9130-2

  • Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.

  • Hardin, G. J. (1974). Lifeboat ethics. Bioscience, 24, 361–368.

  • Hoffman, M. (1991). Empathy, social cognition, and moral action. In W. M. Kurtines, J. Gewirtz, & J. L. Lamb (Eds.), Handbook of moral behavior and development: Volume 1: Theory. Hoboken: Psychology Press.

  • Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. (2nd ed.). Bradford Books, 1992. University of Michigan Press.

  • Horgan, T., & Timmons, M. (2009). Analytical moral functionalism meets moral twin earth. In I. Ravenscroft (Ed.), Minds, ethics, and conditionals. Oxford: Oxford University Press.

  • Howard, D., & Muntean, I. (2014). Artificial moral agents: Creative, autonomous, social. An approach based on evolutionary computation. In Proceedings of Robo-Philosophy. Frontiers of AI and applications. Amsterdam: IOS Press.

  • Howard, D., & Muntean, I. (2016). A minimalist model of the artificial autonomous moral agent (AAMA). The 2016 AAAI spring symposium series SS-16-04: Ethical and moral considerations in non-human agents. The Association for the Advancement of Artificial Intelligence (pp. 217–225).

  • Human Rights Watch. (2013). US: Ban fully autonomous weapons. Human Rights Watch.

  • Hursthouse, R. (1999). On virtue ethics. Oxford/New York: Oxford University Press.

  • Jackson, F. (1998). From metaphysics to ethics: A defence of conceptual analysis. Oxford/New York: Clarendon Press.

  • Jackson, F., & Pettit, P. (1995). Moral functionalism and moral motivation. The Philosophical Quarterly, 45, 20–40. doi:10.2307/2219846.

  • Johnson, M. (2012). There is no moral faculty. Philosophical Psychology, 25, 409–432.

  • Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

  • Kitcher, P. (2011). The ethical project. Cambridge: Harvard University Press.

  • Koza, J. R. (1992). Genetic programming: On the programming of computers by means of natural selection. Cambridge: MIT Press.

  • Kuorikoski, J., & Pöyhönen, S. (2013). Understanding nonmodular functionality: Lessons from genetic algorithms. Philosophy of Science, 80, 637–649. doi:10.1086/673866.

  • Ladyman, J., Lambert, J., & Wiesner, K. (2012). What is a complex system? European Journal for Philosophy of Science, 3, 33–67. doi:10.1007/s13194-012-0056-8.

  • Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science, 17, 391–444.

  • Lewis, D. K. (1983). Philosophical papers. New York: Oxford University Press.

  • Litt, A., Eliasmith, C., & Thagard, P. (2008). Neural affective decision theory: Choices, brains, and emotions. Cognitive Systems Research, 9, 252–273. doi:10.1016/j.cogsys.2007.11.001.

  • Liu, H., Gegov, A., & Cocea, M. (2016). Rule based systems for big data (Studies in big data. Vol. 13). Cham: Springer International Publishing.

  • McDowell, J. (1979). Virtue and reason. The Monist, 62, 331–350.

  • Michael, J. (2014). Towards a consensus about the role of empathy in interpersonal understanding. Topoi, 33, 157–172. doi:10.1007/s11245-013-9204-9.

  • Mitchell, T. M. (1997). Machine learning (1st ed.). New York: McGraw-Hill Science/Engineering/Math.

  • Mitchell, S. D. (2012). Unsimple truths: Science, complexity, and policy (Reprint ed.). Chicago: University of Chicago Press.

  • Mitra, S., Das, R., & Hayashi, Y. (2011). Genetic networks and soft computing. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 8, 94–107. doi:10.1109/TCBB.2009.39.

  • Monsó, S. (2015). Empathy and morality in behaviour readers. Biology and Philosophy, 30, 671–690. doi:10.1007/s10539-015-9495-x.

  • Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. Intelligent Systems, IEEE, 21, 18–21.

  • Muntean, I. (2014). Computation and scientific discovery? a bio-inspired approach. In H. Sayama, J. Reiffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Artificial Life 14. Proceedings of the fourteenth international conference on the synthesis and simulation of living systems. New York: The MIT Press.

  • Murphy, K. P. (2012). Machine learning a probabilistic perspective. Cambridge: MIT Press.

  • Nichols, S. (2001). Mindreading and the cognitive architecture underlying altruistic motivation. Mind & Language, 16, 425–455. doi:10.1111/1468-0017.00178.

  • Nickles, T. (2009). The strange story of scientific method. In J. Meheus & T. Nickles (Eds.), Models of discovery and creativity (1st ed., pp. 167–208). Springer.

  • Nussbaum, M. C. (1986). The fragility of goodness: luck and ethics in Greek tragedy and philosophy. Cambridge: Cambridge University Press.

  • Pereira, L. M., & Saptawijaya, A. (2016). Programming machine ethics. Springer International Publishing.

  • Railton, P. (1986). Moral realism. The Philosophical Review, 95, 163–207. doi:10.2307/2185589.

  • Rawls, J. (1958). Justice as fairness. The Philosophical Review, 67, 164–194. doi:10.2307/2182612.

  • Regan, T. (1983). The case for animal rights. Berkeley: University of California Press.

  • Richards, D. (2014). Evolving morphologies with CPPN-NEAT and a dynamic substrate. In ALIFE 14: Proceedings of the fourteenth international conference on the synthesis and simulation of living systems (pp. 255–262). New York: The MIT Press. doi:10.7551/978-0-262-32621-6-ch042.

  • Robinson, Z., Maley, C. J., & Piccinini, G. (2015). Is consciousness a spandrel? Journal of the American Philosophical Association, 1, 365–383. doi:10.1017/apa.2014.10.

  • Russell, D. C., & Miller, C. B. (2015). How are virtues acquired? In M. Alfano (Ed.), Current controversies in virtue theory (1st ed.). New York: Routledge.

  • Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence: Moral AI? In J. Romportl, E. Zackova & J. Kelemen (Eds.), Beyond artificial intelligence. Springer International Publishing.

  • Schmidt, M., & Lipson, H. (2009). Distilling free-form natural laws from experimental data. Science, 324, 81–85. doi:10.1126/science.1165893.

  • Shalizi, C. R., & Crutchfield, J. P. (2001). Computational mechanics: Pattern and prediction, structure and simplicity. Journal of Statistical Physics, 104, 817–879.

  • Sharkey, N. E. (2012). The evitability of autonomous robot warfare. International Review of the Red Cross, 94, 787–799. doi:10.1017/S1816383112000732.

  • Sidgwick, H. (1930). The methods of ethics. London: Macmillan and Co, Ltd.

  • Sørensen, M. H. (2004). The genealogy of biomimetics: Half a century’s quest for dynamic IT. In A. J. Ijspeert, M. Murata & N. Wakamiya (eds.) Biologically inspired approaches to advanced information technology. Lausanne/Berlin/New York: Springer.

  • Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, 10, 99–127. doi:10.1162/106365602320169811.

  • Stanley, K. O., D’Ambrosio, D. B., & Gauci, J. (2009). A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15, 185–212.

  • Strawser, B. J. (2013). Guest editor’s introduction: The ethical debate over cyberwar. Journal of Military Ethics, 12, 1–3. doi:10.1080/15027570.2013.782639.

  • Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge: Harvard University Press.

  • Suárez, M., & Cartwright, N. (2008). Theories: Tools versus models. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 39, 62–81.

  • Sun, R. (2013). Moral judgment, human motivation, and neural networks. Cognitive Computation, 5, 566–579. doi:10.1007/s12559-012-9181-0.

  • Suthaharan, S. (2016). Machine learning models and algorithms for big data classification. Boston: Springer US.

  • Swanton, C. (2003). Virtue ethics. Oxford: Oxford University Press.

  • Tomassini, M. (1995). A survey of genetic algorithms. Annual Reviews of Computational Physics, 3, 87–118.

  • Tonkens, R. (2009). A challenge for machine ethics. Minds & Machines, 19, 421–438. doi:10.1007/s11023-009-9159-1.

  • Tonkens, R. (2012). Out of character: On the creation of virtuous machines. Ethics and Information Technology, 14, 137–149. doi:10.1007/s10676-012-9290-1.

  • Trappl, R. (Ed.). (2013). Your virtual butler. Berlin/Heidelberg: Springer.

  • Trappl, R. (2015). A construction manual for Robots’ ethical systems: Requirements, methods, implementations. Cham: Springer.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Turing, A. (1992). Mechanical Intelligence. In D. C. Ince (Ed.), The collected works of A. M. Turing: Mechanical intelligence. Amsterdam/New York: North-Holland.

  • Wallach, W. (2014). Ethics, law, and governance in the development of robots. In R. Sandler (Ed.), Ethics and emerging technologies (pp. 363–379). New York: Palgrave Macmillan.

  • Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our control. New York: Basic Books.

  • Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science, 2, 454–485. doi:10.1111/j.1756-8765.2010.01095.x.

  • Watson, R. A., & Szathmáry, E. (2016). How can evolution learn? Trends in Ecology & Evolution. doi:10.1016/j.tree.2015.11.009.

  • Wingspread participants. (1998). Wingspread statement on the precautionary principle.

  • Yao, X. (1999). Evolving artificial neural networks. Proceedings of the IEEE, 87, 1423–1447. doi:10.1109/5.784219.

  • Zadeh, L. A. (1994). Fuzzy logic, neural networks, and soft computing. Communications of the ACM, 37, 77–84. doi:10.1145/175247.175255.

  • Zangwill, N. (2000). Against analytic moral functionalism. Ratio: An International Journal of Analytic Philosophy, 13, 275–286.

  • Zenker, F., & Gärdenfors, P. (Eds.). (2015). Applications of conceptual spaces. Cham: Springer.

Acknowledgments

We received constructive comments, helpful feedback and encouraging thoughts at different stages of this project. We want to thank Colin Allen, Anjan Chakravartty, Patrick Foo, Ben Jantzen, Dan Lapsley, Michael Neelon, Tom Powers, Tristram McPherson, Abraham Schwab, Emil Socaciu, Wendell Wallach, an anonymous reviewer for this journal, colleagues from the University of Notre Dame, Indiana Purdue University in Fort Wayne, University of North Carolina, Asheville, and the University of Bucharest, Romania, who supported this project. We want also to express our gratitude to the organizers, referees, and the audience of the following events: “Robo-philosophy 1” conference (Aarhus, Denmark, June 2014); APA Eastern Division meeting (December 2014); APA Central Division meeting (February 2015); the Department of Philosophy at Virginia Tech; the CCEA/ICUB Workshop on artificial morality at the University of Bucharest (June 2015); the IACAP-CEPE-INSEIT joint conference at the University of Delaware (June 2015) and the IACAP conference in Ferrara, Italy (June 2016); the Department of Philosophy at Purdue University; and the “The Ethical and Moral Considerations in Non-Human Agents” (EMCAI) symposium of the AAAI held at Stanford University (March 2016). This paper is complemented by two papers: (Howard and Muntean 2014; Howard and Muntean 2016). The MATLAB code and the data used are available upon request: please contact the corresponding author.

Author information

Correspondence to Ioan Muntean.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Howard, D., Muntean, I. (2017). Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. In: Powers, T. (eds) Philosophy and Computing. Philosophical Studies Series, vol 128. Springer, Cham. https://doi.org/10.1007/978-3-319-61043-6_7
