The mysterious complexity of our life is not to be embraced by maxims…to lace ourselves up in formulas of that sort is to repress all the promptings and inspirations that spring from growing insight and sympathy…from a life vivid and intense enough to have created a wide fellow-feeling with all that is human.

George Eliot (Mary Anne Evans)

1 Introduction: Our Fascination with Designing Human-like Things and the Danger

We are concerned about the future of genuine humanness. In this paper, we will ask: “How human should computer-based human-likeness appear?” Today we live with computers as if they were humans - our employers, employees, personal assistants, best friends, life companions, family members and casual acquaintances. We nurture our devices as if they were living beings, and many suffer psychological distress when separated from them (Männikkö et al. 2017; Paik et al. 2014). By “them” we mean computer-based human-like creatures - essentially all software that emulates some characteristic of humanness, whether the interaction takes the form of text, video, audio, holograms, robots, androids, etc. Many of us are already acquainted with the simpler cousins of what is to come. Digital assistants such as Apple's Siri, Amazon's Alexa, Google Assistant and Microsoft's Cortana increasingly invade our work, homes, and cars, to name just a few affected spaces. It is estimated that by 2021 digital assistants will outnumber humans, so that in every space humans dwell there will be more of them than of us.

We seem to welcome our human-like companions, judging by the amount of time we spend interacting with them. It is acceptable, desirable and completely normal to replace our human interactions with comparable communications with a device. We are used to the idea that some of us prefer the digital over real humans as life companions (Turkle 2011). Many of us have no personal, conscious recollection of how people lived together before adopting computers as one of the main ways we connect with the human realm outside ourselves. Indeed, for some of us, a computer-based replica may be the closest thing to a human we meet on any given day.

The excitement of creating machines with increasingly human qualities, including personalities, knows no bounds. When we presented our concern for the future of humanness, an expert on realistic digital humans exclaimed: “How remarkably unadventurous.”Footnote 1 His view summarizes the lure of creating human-likeness: it is fun for the software engineers; a lucrative business for investors; a new frontier for researchers of many disciplines; and an exciting playground for those designers who grew up with videogames, computers and phones. The purpose of this paper is to stop for a moment and question the sanity of humanity's increasing infatuation with designing and bonding with human-like things.

Reviewing the IS literature (or any other reference literature) for papers questioning the automation of human characteristics yields slim pickings. A distinct elation seems to surround each discovery of what aspect of humanness could be automated next. Robots are coming, and no one seems to doubt it. In 2017, the Federation of Robotics forecast that there would be 1.2 million service robots by 2019.Footnote 2 Statistics suggest that computer robotics already accounts for about 7.1% of the IT market.Footnote 3 The task now is to perfect androids - artificial systems with the design goal of becoming indistinguishable from humans in appearance, behavior and the ability to sustain natural relationships (Stock 2019; MacDorman and Ishiguro 2006; Ramey 2005). Robot psychology (Stock 2019) and measures of how human and how intelligent we consider our digital companions to be (perceived robot anthropomorphism) are in the works (Moussawi and Koufaris 2019). Defining seamless connections between humans and objects in order to augment both is underway (Cena et al. 2019). No one seems to stop and ask whether creating more digital companions that appear increasingly human is really beneficial to the future of our species in the long run.Footnote 4

Philosophers have articulated their views on what computers should and should not do, drawing the line at judgment and arguing that computers should be programmed to calculate and decide, but never to judge (Cf., Graubard 1989; Kast and Rosenzweig 1970, 1972; Dreyfus 1979; Dreyfus and Dreyfus 1986, 1989; Pagels 1989; Fitzgerald 1996). This artificial intelligenceFootnote 5 (AI) debate, however, largely fell on deaf ears in the designer community. We feel it is essential to keep asking the question “How human should computer-based human-likeness appear?” because of the progress we have made in creating human-appearing machines since the original discourse.

We will argue in two parts that we should proceed with caution when we automate machines to express human emotions.Footnote 6 First, we will apply concepts from the speech act theory to argue that there is a certain class of speech acts - called expressive speech acts - that we should implement in human-like designs only after careful consideration. Expressive speech acts convey how a person feels, and since computers don't have feelings or genuine psychological states, a digital assistant cannot meaningfully produce expressive speech acts and thus cannot be sincere.

Second, we will elaborate on why machines that routinely express emotions may endanger humanness. For this purpose, we will adopt a theory of humanness as a collective level, evolutionary phenomenon (Porra 1996, 1999, 2010). According to this theory, humanness evolves as we bond with other humans through feelings, and it requires feeling together over long time horizons. When we substitute digital assistants that routinely express human emotions without really feeling them for human companions, and spend an increasing amount of time with these machines instead of people, our humanness may be endangered. When machines try to express human-like emotions, they will worsen matters over time by moving us toward even more technocratic ways of thinking about ourselves. As a result, our genuine humanness may be destroyed.

With this paper, we call for a multidisciplinary research effort aimed at gaining a better understanding of the long-term impact that living with human-likeness has on our humanness. Only when we are wiser about what we are doing to ourselves through our constant exposure to human-like machines will we be able to make informed decisions on whether and/or when machines should and should not express feelings. We owe a thorough investigation to future generations.

The paper is structured as follows: In section 2, we will introduce our conception of humanness and the basis for our bonding with other humans and with things. In section 3, we will introduce automating humanness as an ethical question. In section 4, we will respond to our question “How human should computer-based human-likeness appear?” relying on the speech act theory and the philosophy of humanness. In section 5, we will discuss our conclusion that living with feeling-expressing human-likeness may endanger humanness in the long term, in the context of the commentaries, questions and criticisms we have received over the course of writing this paper. In section 6, we will provide some directions for alternative designs for human-likeness and expressives. Finally, in section 7, we will provide some conclusions.

2 Humanness and Bonding with Other Humans and Things

Porra (1996, 1999, 2010), an IS researcher, has studied humanness and how we bond with computer-based information systems (CBISs) as an evolutionary collective level phenomenon founded on paleontology, evolutionary biology, existential philosophy, sociobiology and the AI debate. Her theory suggests that humans bond systematically, automatically and subconsciously with other humans to form what she calls human colonies comparable with animal colonies in nature and that our humanness is the result of this kind of systematic bonding over very long time periods. Other researchers have proposed that human group formation begins spontaneously and unconsciously within hours when humans encounter other humans (Campbell 1982).

Porra’s (1996, 1999, 2010) theory suggests that when humans bond with their computers in addition to (or instead of) other humans, the result is a variation of the human colony she calls an information colony. Porra proposes that humans form bonds with their CBISs as easily as if these were other human beings and provides a theory and mechanism of how this may alter the social evolution of humanness and the future of the species.

In nature, colonies have been shown to have remarkable survival value. This systemic way of living together began some 3 billion years ago. Humans are the only species actively attempting to create a digital version of itself to bond with, so we don't yet know what the long-term impact of bonding with machines might be on the evolution of a species. There is little research on how information colonies might change the social evolution of humanness outside Porra and Parks's (2006) relatively short-term study of high school student groups bonding via their laptop computers. A famous case study of Marcos Rodríguez Pantoja, a feral child who lived with wolves until brought back to civilization (Janer Manilla 1979), suggests, however, that disrupting human bonding with other humans may indeed impact our humanness. Reportedly, Pantoja had great difficulties adjusting to human life, was disappointed in human nature and longed to go back to living with wolves in the mountains. Porra's colonial systems theory suggests that the choices humans make about their life companions - whether living beings or machines - can have a significant impact on the fundamental characteristics of our species and alter humanity's future over generations in largely unknown ways. Mahroof et al.'s (2018) study of technology's impact on the next generation suggests that technology can steer children away from cultural inter-reliance toward technology-centered independence.

Porra’s (1996, 1999, 2010) theory is based on a vertical evolutionary perspective - a very long-term social evolution of collective level humanness. Her theory emphasizes that every human colony and thus its humanness has uniqueness that is a result of it living through its specific contexts of space and time. In contrast, the current efforts to create digital human-likeness are founded on ideas kin to horizontal evolution - classification and categorization of things such as personality traits, characteristics and behaviors that are commonly assumed to be universal (at least within class) and independent from time and context; or on studying psychological differences at the sensorimotor, emotional, cognitive or social levels (Stock 2019; Libin and Libin 2004). Because the vertical and horizontal evolutionary theories of humanness are largely independent from one another, we can use Porra’s theory of humanness as a useful theoretical backdrop for examining the impact of digital human-likeness on humanness.

Another reason for adopting Porra's theory of humanness is her life-long passion for understanding the systemic premises of “What makes us human?”; “How does our humanness change and evolve over time?”; and “How do other humans and CBISs impact our humanness over very long time periods?” Morally, she resides on the side of humanity for the sake of its future generations and immerses herself in enhancing our understanding of the long-term impact that living and bonding with humans and with increasingly human-like technologies has on our species. Her stance is in stark contrast with the current efforts to create realistic digital humans, largely led by software designers, technology experts, business leaders, investors and psychologists with diverse and often short-term goals.

Another important background for this paper is Turkle's (2011) extensive work in the social studies of science and technology at MIT, where she has done research on the ways humans bond with things and technology. We are not surprised to find that things from our past, like books, photographs, keepsakes and other familiar items, make us feel connected to that past. We are on less familiar ground, however, with “evocative objects” as companions to our emotional lives:

We think with the objects we love; we love the objects we think with…the object brings together intellect and emotion. An object is a companion in life experience. In every case, the author’s focus is not on the object’s instrumental power – how fast the train travels or the computer calculates – but on the object as a companion in life experience: how the train connects emotional worlds, how the mental space between computer keyboard and screen creates an erotic possibility. (Turkle 2007, p. 5)

Our feelings connect us with our things: A young child believes her bunny rabbit can read her mind; a diabetic is one with his glucometer. Even things of science can be seen as objects of passion. We feel a connection with digital assistants on our phones, laptops and in our cars. Theorists Jean Baudrillard, Jacques Derrida, Sigmund Freud, Donna Haraway, Karl Marx, and D. W. Winnicott have invited us to a better understanding of object intimacy (Turkle 2007). Freud, for example, suggested that we deal with the loss of a thing in a way similar to losing a person: The process of attaching ourselves to things via emotions ends when we find the thing we feel about inside our being. The psychodynamic tradition suggests that we make objects part of ourselves and offers a language for interpreting the intensity of our connections with the world of things and for discovering similarities and differences in how we relate to the animate and the inanimate. By confronting objects, we shape ourselves. Turkle has significantly enhanced our understanding of the meaning of objects in our lives, but how do we shape ourselves when the object we confront is a human-like machine, and with what consequences?

3 Automating Humanness as an Ethical Question

Essentially, we are asking what computers should and should not do, which is a variation of the AI debate question “What can computers do and not do?” - and these questions are related. In the 1970s, Joseph Weizenbaum (1976), creator of ELIZA, one of the first computer programs with a natural language interface, was astonished that people thought his program could actually understand them and argued that computers should be programmed to decide but never to judge. Since the early days of computing, the core of the philosophical AI debate has been the comparison of human abilities with the characteristics of machines in order to ascertain what tasks humans can do better and what should be left to the machines (Cf., Graubard 1989; Kast and Rosenzweig 1970, 1972; Dreyfus 1979; Dreyfus and Dreyfus 1986, 1989; Pagels 1989; Fitzgerald 1996). The difference between now and then is that today the AI debate should increasingly be about ethical concerns.

From a global ethical management perspective,Footnote 7 we should ask: “What is a responsible way of automating human characteristics?” The problem is that we don't really know what we are replicating, because of the unanswered questions surrounding genuine humanness. For example, what “life” (Porra 1999) or “self” (Parks and Steinberg 1978) is remains a mystery, and the basic fact remains that humanness exists only in our biological bodies. The way our evolution has shaped us has had clear survival value, which we should be interested in preserving for future generations. If there is even an outside chance that our short-term economic and technical goals can hurt our life and our sense of who we are as human beings in the long term, we should put forward our best effort to better understand what we are committing ourselves to by increasingly relying on human-likeness instead of genuine humanness. We are by no means the first to express deep concerns on behalf of humanity (cf., philosophers such as Dreyfus and Dreyfus 1986; and Heidegger 1953, 1977), but technology has since advanced significantly, and philosophical and ethical debates must be revisited for each new generation (Schultze and Mason 2012).Footnote 8

Since the AI debate, our ability to automate human-likeness has improved dramatically. Unlike ELIZA, which displayed green text on black screens, today's digital humans use moving images, holograms, and physical devices to convince users of their human-likeness (see Fig. 1). In the digital assistant arena, for example, IPsoft's Amelia is described as a “virtual cognitive agent” that claims, “I take in your emotional state so I can empathize”. Amelia is programmed to adapt its verbal and facial responses based on a human's level of arousal, dominance, and pleasure. Intuition Robotics' ElliQ is a physical device described as an “active aging companion that helps older adults stay active and engaged with a proactive social robot.” Gatebox Labs' Azuma is depicted as a holographic girl who lives in a bell jar. It is positioned as possessing “advanced friendship capabilities that makes her more of a humanoid”. In one commercial for the product, a single man returning from work on a bus sends a message to Azuma, “I'll be home soon.” The device replies, “Can't wait to see you.” When the young man enters his unoccupied apartment, the hologram springs into action and utters, “Missed you darling!” (Humphries 2016). Digital assistants are an example of computer-based products that are increasingly designed to resemble acting and feeling humans.

Fig. 1 Software applications programmed to simulate human emotions. Image sources: https://portinos.com/wp-content/uploads/2017/11/amelia-555x402.jpg; https://portinos.com/wp-content/uploads/2017/11/amelia-768x557.jpg; https://assets.pcmag.com/media/images/526888-gatebox-virtual-home-robot.jpg?thumb=y&width=810&height=456

The term artificial personality (AP) has recently been coined to describe the “science” of designing digital assistants like Apple's Siri, Amazon's Alexa, Google Assistant and Microsoft's Cortana to appear to have unique personalities that express emotions and display behavioral quirks. Siri, for example, was designed to be sassy (Rouse 2018). The purpose of AP is to meet users' desire for applications to be friendlier and even to be a partner or a lover (Kanai and Fujimoto 2018). Digital assistants speak to us like a person, saying things like “I am sorry”, “I apologize”, “I thank you”, or “I miss you.” Unlike most traditional computer software, computer-based human-likeness such as digital assistants does more than provide information or complete transaction requests. Applications like Intuition Robotics' ElliQ and Gatebox's Azuma are designed to meet our emotional needs for care, companionship, and love.

By 2021, digital assistants alone are said to outnumber humans (Ovum 2016), and this is just the first baby step toward a realistic digital human explosion. The sheer number of human-like creatures soon roaming the planet and overcrowding humanity calls for a robust ethical discourse on what kind of humanness we are automating and what its impact on us will be. Yet we found no research addressing the ethical issue of machines expressing feelings. Philosophers say little about emotional exchanges between us and our human-like machines: Austin and Habermas do not mention computers, and Searle holds that computers cannot understand language or mean what they say. Computer science (e.g., Sowa 2002), cognitive psychology (e.g., Winograd and Flores 1986), law (e.g., Tien 2000), and IS (e.g., Janson and Woo 1995) researchers have applied speech act theory to areas such as programming languages and IS development methodologies but have raised no questions about computers expressing human emotions (except for Sowa 2002).

4 Can Digital Human-Likeness Endanger Humanness?

Our question - “How human should computer-based human-likeness appear?” - is urgent but daunting. How does one find answers to a complex question like this? We believe we need to take small steps into this new research area until greater wisdom emerges. As an example, we have set out to identify one area where we should better understand the implications of automating humanness for future generations. We chose feelings as an example of a central quality of genuine humanness that we need to study more in the context of digital human-likeness. We will now rephrase our question: “What are the consequences for genuine humanness of living with human-likeness that routinely expresses feelings?” In the following, we will present a two-part philosophical discussion of the topic based on the speech act theory (Austin 1962; Searle 1968, 1969, 1979; Habermas 1976, 1984; Klein and Huynh 2004; Smith 2003) and the philosophy of humanness framed by Porra's theory of humanness introduced above (Porra 1999, 2010; Heidegger 1977, 1985).

4.1 Speech Act Theory and Expressing Feelings by Humans and Machines

In this section, we will further narrow our discussion to speech. Of the many ways digital assistants can express feelings, we chose speech because it is a fundamental and unique aspect of humanness that is commonly emulated in digital assistants. The theory of speech acts is also well developed, with useful classifications, and it provides a good basis for comparing the human-likeness of digital assistants with genuine humanness. In the following sections, we will discuss the fundamental reasons why we need responsible ways of programming digital assistants to express human feelings.

4.1.1 Human Speech Acts and Expressives

Speech act theory is about how humans use language for an effect. It has long roots in philosophy, going back to Aristotle, who applied it to the peripheral realms of rhetoric and poetry (Smith 2003). Since then, many authors have attempted to develop a general theory of language use - most notably Searle, who distinguished between “just uttering sounds” and “performing speech acts” and uses of speech for “meaning something”.

Speech acts come in many types. Scholars initially analyzed constative speech acts - statements about facts in the world that are true or false depending on whether the statement corresponds to facts in the world or not (Austin 1962). Wittgenstein (1922, 1961) attempted to describe the conditions for a logical language that perfectly asserts or denies facts, only to abandon this quest later and argue that, in reality, the meaning of language is determined by its use in the context of “language-games.” Such games involve constative speech acts (e.g., “describing the appearance of an object” or “reporting an event”) but can also be about “giving orders”, “making a joke”, “play-acting”, “testing a hypothesis” and “making up a story” (Wittgenstein 1953, 2001). Wittgenstein's conclusion was that in language games, speech act meanings vary depending on qualities outside logic, such as tonal variety. Since then, speech act theory has developed further to include performative speech acts, or utterances that perform an action: Saying it makes it so. Examples of performative speech acts include “I now pronounce you man and wife”; “I name this ship the Queen Elizabeth”; “I bequeath my watch to my brother”; “I bet you six pence it will rain tomorrow”; “Strike three, you're out!” (Austin 1962).

Searle (1979) expanded on Austin's work and defined five types of speech acts (see Table 1). He focused on illocutionary points, or the characteristic aims of speech acts, which include Assertives, Directives, Commissives, Declaratives, and Expressives. Assertives are statements of fact about the world. Directives attempt to get the hearer to perform an action. Commissives commit the speaker to perform an action. Declaratives bring about a change in the institutional world - such as declaring war or naming a ship. Expressives, which are at the core of our interest in this paper, express the speaker's internal psychological state. They are characterized by statements that sincerely express a psychological belief about the person's subjective world of thoughts and emotions (Searle 1979). These five illocutionary points have been claimed to be exhaustive and mutually exclusive.

Table 1 Searle's five types of speech acts: conditions of satisfaction, felicity conditions, and validity claims

An utterance becomes a speech act when certain requirements are met. Searle calls these requirements, summarized in Table 1, “conditions of satisfaction”; Austin, “felicity conditions”; and Habermas, “validity claims”. Unlike mere utterances, speech acts have communicative effects characterized by conditions of satisfaction based on direction of fit, or the relationship between words and the world (Searle 1983). For example, if we say “Mike has ten fingers,” Mike should indeed have ten fingers. Expressives differ from all other speech acts, which require a word-to-world relationship, in that they express a person's inner psychological states. The only requirement is that the person must be sincere about them. For example, if a person expresses thanks, that person must meet the sincerity condition “Speaker feels appreciative” (Searle 1969, p. 67).

For Austin (1962), an utterance becomes a speech act when felicity conditions - conditions under which words can be used properly to perform actions - are met. For example, only authorized people may perform marriage ceremonies or call pitches in a baseball game. Austin defined six specific conditions for felicitous performative speech acts: (1) a societal convention for the speech act must exist; (2) the person must be authorized to utter the speech act; (3) the speech act must be carried out properly and (4) completely; (5) the person performing the speech act must be sincere; and (6) the person must subsequently behave as intended. We are concerned with Austin's fifth condition - sincerity - in the context of expressive speech acts. According to Austin, violating the fifth condition constitutes language “abuse” (Austin 1975 edition, p. 16).

Habermas (1984) adds one more useful perspective to speech act theory. He too emphasizes sincerity as a necessary factor in communicative speech acts, which he defines as speech acts “aimed at accomplishing mutual understanding between two actors.” A speech act thus carries four validity claims: comprehensibility, truth, legitimacy, and veracity (Habermas 1976). Comprehensibility is validity with respect to the semantic content of the sentences used in an utterance. Truth is defined as propositional truth - that an utterance matches the objective world. Legitimacy is the validity claim that a speech act conforms to social norms in the social world - be these institutionalized norms or more informal ones. Veracity is the validity claim that the speaker is acting sincerely, honestly, and in good faith. We are interested in veracity because it is the validity claim about the subjective world of our internal thoughts and emotions.

Searle, Austin, and Habermas clearly define the conditions of satisfaction, felicity, and validity claims for expressive speech acts: In order to convey meaning with an expressive speech act, a human must meet the felicity check of sincerity and the validity claim of veracity. Searle, Austin, and Habermas developed speech act theory to address speech uttered or written by a human. To build our argument, we next explain why speech act theory applies to utterances produced by digital assistants.

4.1.2 Speech Acts Produced by Software Such as Digital Assistants

When speech act theory was first applied to the design of human-computer interfaces, the level of analysis changed from the single speech act to series of speech acts. Flores and Ludlow (1980) were among the first to apply the theory to modeling information systems (IS). They theorized that in office settings people make commitments, which take several iterations of speech acts to complete. Flores and Ludlow's Language-Action Perspective (LAP) embraced the concept that communication comprises both constative and performative speech acts. They viewed organizations as inter-related commitments created by the first four of Searle's speech act types: directives, commissives, assertives, and declaratives. Notably, Flores and Ludlow's analysis did not include expressives.

Winograd and Flores (1986) applied speech act theory to define “conversations for action” as chains of interactive speech acts. They too recognized that transactions require several speech acts between two interacting agents. For example, an agent initiates a request to a second agent, who may commit to respond, actually respond, reject the request, make a counter-offer, or withdraw. The transaction is completed when both agents are satisfied or accept the withdrawal. Winograd and Flores, too, include all speech acts except expressives in their “basic conversation for action” framework.
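To make the shape of such a conversation concrete, the following is a minimal sketch, in Python, of the moves described above; the state names, event labels, and transition table are our own illustrative choices and are not taken verbatim from Winograd and Flores's framework.

```python
# A minimal, illustrative state machine for a "conversation for action":
# agent A requests, agent B may commit, reject, or counter-offer, the request
# may be withdrawn, and a committed request is performed and finally declared
# satisfactory. Labels are ours, chosen for readability.

TRANSITIONS = {
    # (current state, move) -> next state
    ("start", "request"): "requested",            # A initiates a request
    ("requested", "commit"): "committed",         # B promises to act
    ("requested", "reject"): "closed",            # B declines
    ("requested", "counter_offer"): "countered",  # B proposes an alternative
    ("requested", "withdraw"): "closed",          # A retracts the request
    ("countered", "accept"): "committed",         # A accepts the counter-offer
    ("countered", "withdraw"): "closed",
    ("committed", "perform"): "performed",        # B carries out the action
    ("committed", "withdraw"): "closed",
    ("performed", "declare_satisfied"): "closed", # A accepts the result
}

def advance(state: str, move: str) -> str:
    """Return the next conversation state, or raise if the move is not allowed."""
    try:
        return TRANSITIONS[(state, move)]
    except KeyError:
        raise ValueError(f"'{move}' is not a valid move in state '{state}'")

# Example run: a request that is committed to, performed, and accepted.
state = "start"
for move in ("request", "commit", "perform", "declare_satisfied"):
    state = advance(state, move)
print(state)  # -> closed
```

Note that nothing in this transition table needs or models an expressive speech act, which is exactly the gap the rest of this section examines.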

IS scholars have applied the concepts of speech act theory in the modeling and design of CBISs (cf., Ågerfalk and Eriksson 2002; Iivari et al. 1998; Lyytinen 1985) and in IS development methods (Auramäki et al. 1988, 1992; Hirschheim and Klein 1989; Janson and Woo 1995; Lehtinen and Lyytinen 1983; Lyytinen et al. 1987; Van Reijswoud and Mulder 1998).Footnote 9 They have also used the theory as a research tool to understand user behavior and system outcomes (e.g., Lacity and Janson 1994; Kumar and Becerra-Fernandez 2007; Kuo and Yin 2009; Scheyder 2004); Kuo and Yin (2009), for example, applied the theory as a lens for understanding group support systems. This body of research shows that software applications have been taken to produce speech acts at least since the 1980s - well before digital assistants. Speech acts have also been used for decades as a foundation for understanding and modeling complex chains of exchanges between humans and machines that accomplish tasks and produce outcomes. Since digital assistants are one type of software application, we can say that they produce speech acts.

4.1.3 The Meaning of Digital Assistants’ Expressive Speech Acts

Within the body of research on applying speech act theory to software design, we could find only one paper that includes expressive speech acts. Sowa (2002) sees no problems with computers engaging in expressive speech acts and argues that programming languages should support all five types of speech acts, including “behabitives,” Austin's term for expressives. He specifically lists apologizing, thanking, deploring, congratulating, welcoming, and blessing as expressives for machines. Not finding more research on computers and expressive speech acts is surprising given how commonly machines express feelings nowadays. Our discussion is meant as an opening to what we hope will become a robust discourse and the beginning of a new research area on “How human should computer interfaces appear?”

Based on the speech act theory, we conclude that digital assistants meet the requirements for four of the five speech act types: They can produce meaningful assertives, directives, commissives, and declaratives. Using Searle's conditions of satisfaction based on direction of fit, a digital assistant can assert a meaningful statement of fact; it can say, for example, “all operators are currently busy” if they actually are. It can direct users to perform an action, for instance, “say ‘confirm’ to submit your order”. It can make commitments, such as “A seat is reserved for you on flight 109.” It can declare new institutional facts in the world, such as “Your application is now approved”. Four of the five types of speech acts can thus be meaningfully embedded into software without the need to express a psychological state.
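As a rough illustration of this distinction, the sketch below tags a few canned assistant utterances with Searle's categories and flags the one type that presupposes a genuine inner state. The enum, the helper function, and the formatting are our own hypothetical constructions; the example phrases are those quoted above plus one expressive.

```python
from enum import Enum

class SpeechAct(Enum):
    ASSERTIVE = "states a fact about the world"
    DIRECTIVE = "tries to get the hearer to act"
    COMMISSIVE = "commits the speaker to act"
    DECLARATIVE = "changes the institutional world"
    EXPRESSIVE = "expresses the speaker's psychological state"

# Digital-assistant utterances paired with the speech act each performs.
UTTERANCES = [
    ("All operators are currently busy.", SpeechAct.ASSERTIVE),
    ("Say 'confirm' to submit your order.", SpeechAct.DIRECTIVE),
    ("A seat is reserved for you on flight 109.", SpeechAct.COMMISSIVE),
    ("Your application is now approved.", SpeechAct.DECLARATIVE),
    ("I am sorry that your credit card cannot be authorized.", SpeechAct.EXPRESSIVE),
]

def sincerity_possible(act: SpeechAct) -> bool:
    """Only expressives require a genuine inner psychological state, which
    software lacks; the other four can be satisfied by facts, actions, and
    institutional effects alone."""
    return act is not SpeechAct.EXPRESSIVE

for text, act in UTTERANCES:
    status = "ok" if sincerity_possible(act) else "cannot be sincere"
    print(f"{act.name:<12} {status:<17} {text}")
```

The point of the sketch is simply that the first four rows can be made true by facts, actions, and institutional effects, while the last row has no feeling subject behind it.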

When a digital assistant conveys an inner psychological state, however, we are on less clear ground. When it says “I am sorry that your credit card cannot be authorized”, “I hope you enjoy your new purchase”, “I am grateful for your business” or “I regret that the item requested is not in stock”, its speech act can never be sincere. Whose subjective psychological state is being sincerely expressed? Who is the person behind these speech acts who legitimately feels sorrow? Hope? Gratitude? Regret? Since computers do not have genuine human feelings, their expressive speech acts cannot be sincere and may thus be considered a misuse of language (Austin 1962). Every time a digital assistant uses an expressive, it is lying. Speech act theory helps us see clearly that there is one kind of speech act, the expressive, that can only be satisfactorily delivered by genuine humanness - actual human beings who are able to have the inner psychological states and feelings they express.

4.2 Philosophy of Humanness and the Evolution of Humanness as a Collective Level Phenomenon

In this section, we will explore what consequences living long-term with human-like machines that routinely express feelings they cannot have might hold for genuine humanness. We turn to the philosophy of humanness in order to understand more about the role feelings play in the evolution of genuine humanness and in the future generations of Homo sapiens.

4.2.1 Humanness as Bonding with Humans through Feelings

In spite of our enthusiasm for designing human-like machines, the nature of genuine humanness remains a mystery. Porra (1996, 1999, 2010) has theorized about how humanness evolves at the level of the collective and described some of the systemic characteristics of this evolution, but what does her theory mean at the individual level of analysis, and how does it connect with expressive speech acts? What is it about human feelings that our collective humanness needs in order to live and evolve? How might sharing our evolution with things that resemble us in many ways but have no feelings alter the course of the social evolution of the species?

There is no shortage of authors who have written about how feelings are at the core of being human. Long ago, Aquinas wrote about the human “virtue habitus” - our tendency to act in morally spontaneous ways with others (Bradshaw 2009, p. 46). Jung observed that we develop morality, reciprocity and trust with others instinctually and that we are able to make good moral choices under diverse circumstances (Young-Eisendrath and Hall 1991). This “primary altruism” shows in an infant's ability to relate and respond to the needs and feelings of caregivers and in caregivers' responses to these (p. 44). Thus a desire to care about others is considered fundamentally human (Stern 1985; Gilligan 1982; Belenky et al. 1986; Sroufe and Fleeson 1986). For Aquinas, humanness meant love and the justice that comes from love (Bradshaw 2009). As we experience love within ourselves, it glues us to one another and connects us to the humans around us and across generations.

Essentially, being human means feeling our lives from one moment to another. At all times, we feel our insides and surroundings through all of our senses. We have feelings about our own past and future and the past and future of others. Feeling things through - sometimes over decades or entire lifetimes – is human. Feelings mold us, guide us, inform and warn us. We find them in our experiences and in our wisdom. Our humanness evolves as we feel together over long time horizons (Porra 1996, 1999, 2010).

4.2.2 Inheriting and Communicating our Humanness through Feelings

Porra (1996, 1999, 2010) holds that humanness carries its entire social evolutionary history within itself, in individual human beings, and that this gives us the ability to steer our future. She suggests that each human colony lineage essentially has its own specific contextual past to draw upon. Jung theorized that at least some of this history is shared more widely within the species (Johnson 1986). He believed that humanity shares “images” - primordial patterns of functioning in the world and with one another - that express both a form of behavior and the situation in which this behavior is released, and that these images make us human (Jung 1959):

These images are ‘primordial’ in so far as they are peculiar to whole species, and if they ever ‘originated’ their origin must have coincided at least with the beginning of the species. They are the “human quality” of the human being, the specifically human form of our activities (p.153).

Images thus are a way of seeing and sharing our human inheritance as members of our species. They are inborn, within us from birth. They live in our unconscious as “energy forms” that show up as feelings, attitudes, value systems, or entire personalities (Jung 1959; Johnson 1986, p. 29). Our predispositions and patterned responses of behavior and affect become visible in our relationships (Young-Eisendrath and Hall 1991). Porra (1996, 1999, 2010) believes that our ways of “being” together evolve in human colonies over long time periods through the experience of living everyday life in one another's proximity as we encounter others through all of our senses.

Another Jung’s concepts that helps understand how we “are” with others are symbols - the image-making capacity of our psyche (Jung 1973). We capture emotional meaning in visual, aural, kinesthetic, and tactile forms. These symbols are not to be confused with signs such as in speech acts. The distinction between a linguistic sign and a symbol from Jung’s point of view is that a sign has a simple reference or set of references, whereas symbol points to multi-determined meanings. Our instincts are especially formed to aid our capacity to organize our unified psychic images. This organizing in turn is the foundation of symbolic representations. Jung’s theory of human instincts links the idea of symbolic archetypes with human relationships. Our relationships with other humans serve as “situational patterns” in which an archetype is activated. Jung’s observations help us visualize less obvious parts of humanness that live inside our bodies. We chose Jung’s perspectives on individual level humanness because Jungian practitioners recognize the complex nature of humanness. This shows in their easy shifting between understanding human feelings and instincts as conscious, subconscious, interpersonal and intra-psychic qualities of humanness that are present when humans are together.

Anthropologists remind us that we are inherently cultural beings (Lewis 1988). Jung's term “psychic energy” refers to our general motivation, attention, and interest - whether conscious or unconscious - in knowing another human being (Young-Eisendrath and Hall 1991). Through ritual, ceremony, and mythology we transform our instinctual impulses into symbolic meanings, and this psychic energy is the link between human instinct and culture (Jung 1973). Thus, symbol and instinct are inexorably related. The function of culture - expressed through family and society - is to initiate the individual into the transformative symbols, the metaphoric and metonymic models of self that provide useful transitions from one experience of subjectivity to another (Young-Eisendrath and Hall 1991). Every culture has a moral code that provides models for growing up and becoming a functional member of the society: for initiations into adulthood, marriage, parenting, loss, grief and death. Moral codes are more than rules: they are methods for symbolic transformation. When we live unselfconsciously within the rituals of culture, we are transformed by these codes through the various ceremonies and symbolic meanings of transition. Our humanness thus changes over time as we feel together.

The purpose of this brief illustration of some central aspects of humanness is to remind us of the complexity of genuine humanness. Images, symbols, instinct, psychic energy, ritual, ceremony, mythology and culture are some of the ways we “are” together consciously, subconsciously, unconsciously, interpersonally and intrapsychically. Following Porra's theory, meeting another human means encountering the entire evolutionary history of their humanness in complex and comprehensive ways we have yet to understand well. We have provided this short account to remind us of the vast differences between our emotional depths and capacities and digital humans that cannot feel at all.

4.2.3 Biological Body as the Foundation of Humanness

The necessary foundation of our humanness is our biological body (cf., Lakoff and Johnson 1999). In cybernetics, early IS researchers Parks and Steinberg (1978) theorized that our sense of self is founded on our biology and specifically on the symmetrical structure of our brain. Our humanness is based on our self-awareness and on our senses of sight, hearing, taste, smell, and touch. These senses are “feeling” senses because they enable us to feel what we see, hear, taste, smell and touch. Our body is covered with skin, a feeling organ, so we can feel life through it. From this viewpoint, a human life is a stream of feelings. We feel ourselves in the world and we feel the world with other people. The physical qualities of humans such as the location of the eyes and ears and the physical characteristics of the brain determine the ways in which we associate with ourselves, others and the world around us (Parks and Steinberg 1978; Weizenbaum 1976). Fundamentally, whatever human-likeness we create on machines is not genuine humanness because it lacks the physical characteristics of our body and the self-awareness that only occur in our biology.

4.2.4 Existential Philosophy and the Physiology of Humanness

According to Heidegger (1977), the root cause of our poor general understanding of humanness is that humanity's ways of thinking about humanness are technocratic in nature. He reminds us that, essentially, technology is nothing technological but a consequence of mechanistic thinking that began long before we implemented our first digital assistants. Humanity has traveled a long way toward embracing machine thinking over genuine humanness. “Once there was a time when ‘technē’ meant bringing-forth of the true into the beautiful” (Heidegger 1977).

Our technocratic thinking about humanness is not harmless because it enframes genuine humanness. Human-likeness such as that found in digital assistants creates boundaries on how we can reveal ourselves in our everyday encounters as human beings. In the 1970s, software applications typically limited human revealing to the “antithetical and rigorously ordered” (Heidegger 1977, p. 27). Our interactions with software forced us into situations with no room for our fundamental human characteristics to appear. Today's human-like software has more flexibility, but - putting it into Porra's (1996, 1999, 2010) terms - we cannot reveal our humanness when the interaction partner is a human-like machine, because the machine cannot fully respond by joining “us” at the collective level of our humanness. Living with human-likeness as a replacement for genuine human interaction means that our “more original revealing and hence our experience to call for more primal truth” is constantly denied (Heidegger 1977).

Yet we appear to form information colonies - bonds with machines - instinctively, intuitively and subconsciously (Porra 1996, 1999, 2010). Our innate need to bond with others and our seemingly limitless capacity to imagine may be an unfortunate combination with physiological consequences. As humans, we can indulge so completely that it may be difficult for us to distinguish between a real and an artificial human. Breznitz, a psychologist, has found that our body, too, can go along with the imaginary (Siegel 1986). Ultimately, what we imagine may even manifest itself in our physiology (Garfield 1984). Sobel has shown similar results in placebo studies (Locke and Colligan 1986). From the physiological perspective, it is conceivable that the qualities of our human-like creations become absorbed into our biology and become part of us. Since our humanness is inseparable from our biology, living with machines that lack feelings may result in physiological changes in our bodily organs (Miller 1995).

It is possible that we respond by changing physically as we feel with things that cannot feel. Conceivably, human-likeness has already been absorbed into our humanness. Porra's (1996, 1999, 2010) theory suggests that living with digital human-likeness will eventually mold genuine humanness into something more like our machine companions as a function of time: The more time we spend with human-like machines, the more we will come to resemble them. It is conceivable that we will also incorporate their lack of feelings into ourselves, endangering the future of our genuine humanness.

4.2.5 The Impact of Living Long-Term with Digital Assistants Expressing Feelings They Don't Have on our Humanness

It is conceivable that humanity will lose its characteristic humanness as the species' ability to form collectives through emotional bonds erodes. Evolutionary history tells us that colonies have turned out to be one of the most sustainable life forms on earth known today. Living everyday life together has had extraordinary survival value. If our collective level humanness is indeed being destroyed by our increasing bonding with digital assistants and other human-like machines, Homo sapiens is in the midst of making the most significant evolutionary turn in its history, with unpredictable consequences: When digital assistants routinely express human emotions, they influence and may endanger our humanness.

5 Discussion

In this paper, we have raised the question “How human should computer-based human-likeness appear?” and considered some long-term consequences of living with digital creatures that express human-like feelings. While we have been working on this paper over the years, our colleagues and reviewers in many disciplines have asked thought-provoking questions about humanness, human-likeness and their boundaries.

Many have asked for our views on where the boundary of human-like machines expressing feelings should be drawn. We could call for stopping all machines from expressing feelings because there is a possibility that this practice will harm the future of our species; if such danger exists, we should stop. We could also accept the reality that computers express feelings so frequently that nothing can be done. We don't believe in blind acceptance: we have the right to know the long-term consequences, for our future and the future of our children, of living with machines that express feelings they don't have, so that we can make informed decisions about whether and/or when machines should and should not express feelings. It is high time we began a multidisciplinary research effort and a robust discourse in this vitally important area.

We have also been asked for our views on people who express feelings they don't have. Particularly in the domains of business and law, we are used to insincerity. Established business communication practices dictate that employees express feelings they don't necessarily feel on behalf of legal entities, for reasons such as politeness (cf., Schultz et al. 2000; Grice 1991). We consider legal entities to be like digital human-likeness in that they cannot feel. We join those who believe that it is human to feel the messages we are sending (Schultz et al. 2000). If we are not expressing an actual person's feelings, such as our own or the CEO's, we are reduced to reading scripts (Tansik and Smithe 1991; Victorino et al. 2008). When we express emotions on behalf of an organization and don't feel them, we fail the felicity check. If frequent and long-term, this behavior may endanger humanness in a way similar to machine-expressed feelings. The important difference between humans and machines expressing feelings they don't have is that we can (and should) skip the script and express our real feelings when the situation calls for a genuine response, whereas machines cannot do that because they feel nothing. For a healthier future for Homo sapiens, we believe it is important to develop business practices that do not require routinely lying about one's emotions.

We also fully recognize that humans have a capacity for lying about emotions for good reasons and that too much honesty about how we feel can be destructive. Our point, however, is that we may be on a dangerous path when emotional dishonesty becomes the norm in our daily human and machine encounters, because our genuine emotional connections are then routinely denied and our humanness rests upon these. We cannot stop people or machines from expressing feelings they don't have, but we can call for a better understanding of how this practice is impacting genuine humanness and provide healthy alternatives for those of us who want them. We are used to hearing machines say “I am sorry”, but we need to consider what meaningful language is left for us when we really feel sorry. Are we turning formerly genuine expressions of how we feel into polite, meaningless platitudes? If so, what language is left to us humans for our deeply felt, genuine emotions? In order to prepare ourselves for sharing the planet with an increasing number of human-like creatures, we need to be in charge of genuine human emotional life and language.

A third important question from one reviewer concerns another boundary between human beings and machines: “How much of a human being can be replaced with technology without losing humanness?” This is one more example of why we need a robust multidisciplinary research effort aimed at understanding our humanness better. Our immediate response lies in human emotions: As long as the human being whose original parts are being replaced by artificial ones has the ability to feel and connect with other humans through emotions, they are human.

In addition to questions, we have received valid criticisms relating to speech act theory and how it can be applied. The most obvious criticism of the theory is that it presumes it is possible to classify meaning. In response, we included Wittgenstein’s views on the philosophical futility of attempts to classify meaning because it is always contextual. Another criticism is that Searle’s speech act theory is agency-driven - led by the intentions of the speaker. Strong agency approaches have been criticized for underplaying the power, roles, structures, and functions of social interaction (Giddens 1986; Habermas 1984). Even if we accept the feasibility of classifying meaning and the strong role of agent intentionality, speech act theory has been criticized for assuming that speech acts have only one intention when multi-intentionality may actually be the norm (Allwood 1977). Within computer science, and in particular in AI, a major criticism of speech act theory is the inability of participants to observe each other’s beliefs and intentions (Field and Ramsay 2004). For example, speech acts do not easily help to deal with phenomena such as sarcasm, deception, or malicious intent.

We note these criticisms but are more concerned with how applicable the speech act theory is for modeling human interactions for human-likeness in general. In IS design, the theory has been criticized for assuming pre-defined communication patterns; underplaying action accountability; focusing on language pragmatics over semantics; and inadequately addressing the role of felicity conditions (Aakhus 2004; Goldkuhl 2003; Ågerfalk and Eriksson 2002). Ljungberg and Holm (1996) provide the most comprehensive criticism of applying speech act theory to IS design. They describe the problems of importing ideas from a passive, descriptive theory into active IS design: speech act theory can be useful, but its shortcomings can lead to inflexible and controlling systems. In spite of these legitimate criticisms and the apparent limitations of speech act theory, we found it useful. In particular, the definition of expressive speech acts as expressions of psychological states, with the felicity check of sincerity and the validity claim of veracity, helped us identify a clear area of human-likeness where more research is warranted for a better understanding of the long-term consequences of automation on genuine humanness.

6 Alternatives

Until we know more about the impact of living with machines that express feelings they don't have, we should have a choice. The first step on that path is to inform. If we really care about our humanness, we should require that machines carry a warning: “A machine should never be a substitute for healthy human relationships. The long-term impact of living with computers that express feelings they can't have is unknown.” We can also ask that a machine reveal itself at the beginning of every human encounter: “I have been programmed to express feelings to appear polite. In reality I am a machine and cannot feel.” The second step is to offer an alternative: “If you want to turn off my emotional expressions, please press or say X.” Our interactions with a machine can then follow a non-anthropomorphic style (Shneiderman 1993; Shneiderman and Plaisant 2010).
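As a minimal sketch of what such informing and opting out might look like in practice, the following Python fragment assumes a hypothetical assistant with a single expressive_mode flag; the class, the command phrase, and the wording of the disclosure are our own illustrative choices, not an existing product's interface.

```python
DISCLOSURE = (
    "I have been programmed to express feelings to appear polite. "
    "In reality I am a machine and cannot feel. "
    "Say 'plain mode' to turn off my emotional expressions."
)

class Assistant:
    """Hypothetical assistant that reveals its nature at the first encounter
    and lets the user switch off expressive (emotion-mimicking) phrasing."""

    def __init__(self) -> None:
        self.expressive_mode = True   # emotional phrasing on by default
        self.disclosed = False

    def handle_command(self, utterance: str) -> None:
        # The user opts out of simulated feelings.
        if utterance.strip().lower() == "plain mode":
            self.expressive_mode = False

    def reply(self, emotional: str, plain: str) -> str:
        parts = []
        if not self.disclosed:        # inform before anything else
            parts.append(DISCLOSURE)
            self.disclosed = True
        parts.append(emotional if self.expressive_mode else plain)
        return " ".join(parts)

assistant = Assistant()
print(assistant.reply("I'm so sorry, your card could not be authorized.",
                      "The card could not be authorized."))
assistant.handle_command("plain mode")
print(assistant.reply("I'm so sorry, your card could not be authorized.",
                      "The card could not be authorized."))
```

The design choice worth noting is that every reply has a feeling-free variant, so switching off the expressives removes nothing but the insincere part of the message.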

Shneiderman (1993) provides several good reasons why we should eliminate anthropomorphisms, or human-appearing characteristics, from application interfaces designed for children:

Attributing intelligence, independent activity, free will, or knowledge to computers can mislead the reader or listener. The suggestion that computers can think, know, or understand may give children an erroneous model of how computers work and what their capacities are. It is important for children to have a clear sense of their own humanity. They need to know that they are different from computers, and that relationships with people are different from relationships with machines. Although children, and some adults, may be seduced by the anthropomorphized computer, eventually they seem to prefer the sense of mastery, internal locus of control, competence, and accomplishment that can come from understanding the computer's real abilities. (Shneiderman 1993; p. 333)

Shneiderman's advice is mainly meant to protect children, but it eliminates expressive speech acts and is thus useful here. He characterizes anthropomorphized designs as “poor” and suggests alternative phrases that are “better” (Shneiderman 1993, p. 333). For a better design, we should avoid verbs such as “know”, “think”, and “understand”, which are poor choices, and use better, more mechanical terms such as “process”, “print”, “compute”, “sort”, “store”, “search”, and “retrieve”. With respect to the software user, he suggests the designer avoid poor verb choices such as “ask”, “tell”, “speak to”, and “communicate with”; better choices include terms such as “use”, “direct”, “operate”, “program”, and “control”. Shneiderman argues that an anthropomorphic computer that uses first-person pronouns may be counterproductive because it deceives, misleads, and confuses. Human-like features may seem cute on the first encounter; by the second they may already seem repetitive, and by the third an annoying distraction from the task. The alternative for the software designer is to focus on the user and use second-person singular pronouns, or to avoid pronouns altogether. An example of a poor phrase is: “I will begin the lesson when you press RETURN.” It is better to use: “You can begin the lesson by pressing RETURN” and best: “To begin the lesson, press RETURN” (Shneiderman 1993, p. 334).
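To illustrate how this phrasing guidance might be operationalized in software, here is a small sketch of a hypothetical message catalog that stores each user-facing message in “poor”, “better”, and “best” variants and defaults to the non-anthropomorphic form. The RETURN-key phrasings come from Shneiderman's example quoted above; the catalog structure and the second message are our own illustration.

```python
# Hypothetical message catalog following Shneiderman's guidance: each user-facing
# message is stored in anthropomorphic ("poor"), second-person ("better"), and
# pronoun-free ("best") variants so a design team can standardize on the latter.
MESSAGES = {
    "start_lesson": {
        "poor":   "I will begin the lesson when you press RETURN.",
        "better": "You can begin the lesson by pressing RETURN.",
        "best":   "To begin the lesson, press RETURN.",
    },
    "search_done": {
        "poor":   "I think I found the records you asked me for.",
        "better": "You have retrieved 12 matching records.",
        "best":   "12 matching records retrieved.",
    },
}

def render(key: str, style: str = "best") -> str:
    """Return the requested message variant, defaulting to the
    non-anthropomorphic phrasing."""
    return MESSAGES[key][style]

print(render("start_lesson"))         # -> To begin the lesson, press RETURN.
print(render("search_done", "poor"))  # -> the anthropomorphic variant, for comparison
```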

We need to teach our children that the purpose of technology is to serve: “As computers and software become more powerful, they become more empowering. We can help our students gain a sense of control by attributing the power to the user” (Shneiderman 1993, p. 335). These are just some practical choices we have when we respond to the question: “How human should computer based human-likeness appear?” We present them here as a reminder that we have a choice on what kinds of machines we want to design, use and live with.

7 Conclusion

In this paper, we have opened a discussion on “How human should human-likeness appear?” from the specific perspective of the long-term impact of living with machines that express feelings on our collective level humanness. We have called for a robust multidisciplinary research effort to gain a better understanding of this vitally important area. We will leave you with Krishnamurti's (1984) words on what makes us human: “Our consciousness is what we are. What you think, what you feel, your fears, your pleasures, your anxieties and insecurity, your unhappiness, depression, love, pain, sorrow and the ultimate fear of death are the content of your consciousness; they are what you are—they are what makes you, the human being.” Feelings are the very substance of our humanness. Whether conscious as in this quote, unconscious or subconscious, we are what we feel, individually and collectively. What we feel today is founded on what generations before us have felt together, and we will lay the emotional foundation for future generations' humanness.

Understanding humanness and the consequences of living with human-likeness is a complex undertaking which, we suggest in this paper, can be taken on one small part at a time. We applied the speech act theory and the philosophy of humanness because together they help carve out feelings as an essential area of humanness that needs to be better understood in the context of human-likeness, and because they give us a compelling reason to understand the consequences of our widespread preference for implementing machines in our own image.

We have only been scratching the surface of the fact that our current applications of technology to automating human beings have a huge hole where humanness should reside. We should therefore urgently attend to understanding the long-term implications of replacing the depths of genuine humanness with its shallow imitations, whether acted out by humans who behave like machines or by computerized human-likeness. We are convinced that an improved understanding of, and care about, how we assist humanity with computers will lead to a happier future for all humankind. Another thought we want to leave you with is by Winograd and Flores, concerning AI: “In asking what computers can do, we are drawn into asking what people do with them, and in the end into addressing the fundamental questions of what it means to be human.” (Winograd and Flores 1986, p. 7).