Robotic Faciality: The Philosophy, Science and Art of Robot Faces


A key feature of the humanoid social robot is its face. A robot face is not simply a technical choice, as faces communicate identity, affect and interpersonal spatial relations, and can be key to perceptions about the virtuousness of the robot. To address the significance of the robot face we develop a transdisciplinary reading of faces that compares how science, art and philosophy offer critical knowledges to inform the design of social robotics. Science understands face perception as a physiological, neurological and psychological process that perceives identity, emotion and spatial relations. Art provides a diverse repertoire of stylised faces in visual culture that reiterates the role of likeness, affect and social space. Art presents faces as ethically loaded, such as the war face, the blessed face and the abstracted face. Philosophy proposes the influence of a machine of faciality that abstracts the face as black holes on a white wall, invoking subjectivity and significance. In the second half of the paper we use qualitative visual analysis to develop a classification of robot faces: realistic; symbolic; blank; tech; and screen. We argue that design choice has philosophical, aesthetic and ethical consequences, as people are highly sensitive to the appearance, behaviour and social space of people, robots and their faces.


Engineers and designers creating a social robot must inevitably face the challenge of building the robot’s face, as it is largely by reading the face that interactants will perceive the robot’s identity, character, style, communicative intent, emotion, and, indeed, its apparent virtuousness. In this paper we argue that the dominant epistemology that underpins human–robot interaction (HRI) research has limitations in its critical understandings of the role of faces in social robots. In spite of calls for more interdisciplinarity [1], the predominant methods in HRI remain empiricist and quantitative, with the expectation that research can and should create objective and reproducible scientific knowledge [2]. If robots are to move into the social realm, current approaches to HRI will benefit from openness to more diverse and transdisciplinary forms of knowledge that are sensitive to social and cultural formations. The related field of Human–Computer Interaction has, despite some controversy, benefited from such interdisciplinarity [3]. Opening up social robotics and human–robot interaction further to the arts, humanities and social sciences may lose the apparent certainty of positivism, instrumentalism and disciplinary consensus, but these fields will gain a richer understanding of robots and the social worlds they inhabit. In this spirit, this article models such an approach with our analysis of robot faces.

Creating the face of a robot is a significant challenge because the human face is the most expressive part of the body, deeply involved in human social interaction. It has many aspects—physiological, neurological, psychological, cultural, conceptual, affective, aesthetic and ethical—so we can consider it as an assemblage [4]. That is, it is defined by the range of relations it establishes, and the transductions that occur between these components. The face of a social robot must establish itself as an assemblage that is capable of expression.

The face is conceptualised differently in different disciplines. In this article we will identify the contributions of different knowledge communities to a transdisciplinary understanding of the face. This approach allows us to draw on transverse connections between disciplines to show the way the face has three distinct modes of communication: (1) defining individual and group identity; (2) expressing emotions and affect; and (3) regulating interpersonal spatial relations. Each of these involves interwoven influences of biological and cultural factors. We will therefore argue that the appearance of the robot’s face can be key to perceptions of its virtuousness.

In the second half of this paper we analyse a convenience sample of robot faces using qualitative visual analysis to develop a typology of robot faces: realistic, symbolic, blank, tech and screen. Each of these styles has aesthetic, practical and ethical values and trade-offs.

Faces: Universal or Socially Constructed?

While many disciplines recognise the central role of the face in social interaction, there is disagreement on the extent to which this capacity is universal, reducible to biology and therefore explainable by positivist and empiricist research. There seems to be some basis for seeing faces as a universal phenomenon. Evolutionary biologists have argued that the human capacity of face perception coevolved with the formation of the face itself from even before the emergence of hominids [5]. Developmental psychologists have found that from infancy people learn to use faces to ascertain gender, familiarity, attractiveness and age [6]. Reading faces seems to be innate inasmuch as it engages specialised brain regions. Psychologist Paul Ekman also makes a claim to universality in his analysis of facial expressions that he claims reveal six basic emotions: happiness, sadness, anger, surprise, fear, and disgust [7]. If these are all stable universals, it would seem that these innate human features could be simply and transparently exploited for instrumental ends. The success of such innovations could be confirmed objectively by scientific experimentation. If a robot could be built to convincingly replicate the human, this would suggest that human ethics could simply be applied to the robot in turn. A virtuous robot could be just like a virtuous human, only mechanical.

However, while face perception clearly has a biological basis, even these scientific approaches acknowledge that it is also a learnt skill, subject to neuroplasticity, individual experience and cultural variation [6]. Anthropological research has documented how faces and face perception are culturally and historically particular [8]. Faces take on different meanings according to style, culture, social power, gender, race, sexuality and class. Collective knowledges about faces have not converged upon a scientific consensus, but have proliferated in a huge and heterogeneous literature in disciplines as diverse as social psychology [9], anatomy [10], visual arts [11], social robotics [12,13,14], affective computing [15], comic book theory [16], philosophy [4] and cultural theory [17,18,19]. Some of these knowledges of faces have emerged recently, such as computer face recognition, while others have been discredited, such as physiognomy—the practice of reading moral character from facial features—which was widely accepted in the West in the 19th century [20]. Those with power have often mobilised faces as part of an explanation of social differences such as poverty, poor character, inferior race and criminality. Faces can be the basis for exclusion as much as they are points of connection. Practices of judging people by the racial characteristics of their faces became unacceptable after the Second World War, driven by commitments to transparency in the rule of law, universal human rights and anti-racism. However, prejudice based on facial appearance continues to operate in everyday racist judgements. The emerging practice of automated facial recognition risks supporting unjust social sorting and automated decision-making to exclude or isolate individuals and groups, such as when deep learning models are not trained on sufficiently diverse data sets.

Empiricist Experimental Psychologists (EEP) and Interpretivist Cultural Anthropologists (ICA) [8] have fundamentally different understandings of faces and emotions. If emotions are universal, as the EEP argue, then it is possible for positivist science to bring to bear experimentation and deduction to reduce emotionality to a universal biologically-based phenomenon. In this case emotion would be a common human experience, irrespective of life situation or social position. If emotions and communications with faces are influenced by culture and social context, as argued by the ICA group, then more complex and contingent political, social and cultural questions complicate practice. This division has implications in social robotics, as EEP approaches are more common in engineering, while ICA approaches are more compatible with design, arts and humanities. Positivist results from EEPs may seem more immediately useful for researchers designing and evaluating robotic faces, such as choosing where sensors and actuators should be located. However, these universalising or ‘neutral’ approaches necessarily overlook cultural difference and efface questions such as the production of subjectivity, gender, ethnicity, class and social power that have implications for how robots might serve different groups in society [20].

Philosophy, Science and Art

This article analyses a selection of heterogeneous approaches to faces in three broad traditions theorised by Deleuze and Guattari: philosophy, science, and art [21]. They argue that each of these traditions produces knowledge, but of a different kind. Science creates scientific discourse and logical propositions based on planes of reference. Scientists in disciplines such as psychology, physiology, neurology and practices such as brain surgery create facts about face perception through social practices such as observation, use of technological apparatuses, measurement, logical induction, hypothesis testing and experimentation. As science and technology scholar Bruno Latour has argued, though, making facts is always itself socially situated and involves recruiting non-human actors such as instruments and other technologies [22]. Art extracts faces from the everyday and returns them as stylised images, objects and performances—generating percepts and affects—virtualities that foster intense sensations, feelings, affects and emotions. The face is arguably the most powerful subject matter across the visual arts in creating likeness, iconicity and abstraction [16], and conveying meanings, and generating ethical and moral resources. Finally, Deleuze and Guattari see philosophy as a discipline that creates and evaluates concepts that exist in relation to a plane of immanence and to related constellations of other concepts [21]. One of their concepts is that the face is an assemblage produced by a machine of faciality.

The Philosophical Concept of Faciality

Deleuze and Guattari’s concept of faciality identifies a culturally specific way that faces operate in Western culture [4]. Their conceptualisation of the face begins with the observation that faces are characterised by patterns in the regions of the face that reflect and absorb light. The dark areas—eyes, mouths, folded surfaces—serve to suggest the unfathomable depths of subjective experience, and the interior life of the person with the face. The lighter regions communicate intentions, affections, offers and demands. The interrelationship between the two constitutes the distinctiveness of faces or images of faces. The face is the avatar through which identity and communication are mobilised, and subjectivity is constituted in relation to an historical milieu, social situation and power relations. Faciality is an ‘abstract machine’ that comes into play whenever anyone or anything encounters something that functions as a face, creating both meaning and identity, emotion and subjectivity, surface and depth, signifiance and subjectification. The face is a multiplicity rather than a unity, defined by its relations of attachment with other subjects and part subjects rather than having an essentialised and stable identity. Deleuze and Guattari argue that faciality works through two different but closely interrelated systems: the white wall and the black hole.

Signifiance is never without a white wall upon which it inscribes its signs and redundancies. Subjectification is never without a black hole in which it lodges its consciousness, passion, and redundancies. [4: 167]

The face is an absolute deterritorialisation from the body, so it is experienced in completely different ways from the head. The head is part of the body, constituting volumes and cavities, and so expresses itself in a very different manner through gestures and comportment. The face is a surface of a certain shape—square, triangular—etched by lines, imprinted with holes, or constituted in dark regions upon which light surfaces intrude. We argue that many social robots are informed by the abstract machine of faciality, inviting interactants to garner meanings, identity and social relations from their appearance.

Social robotics deterritorialises patterns of human face-to-face social interactions and reterritorialises them in human–robot interactions [4]. A social robot such as Pepper has sleek and pleasant facial features (androgynous and gloss white), and abstracted facial expressions created simply by moving its head and changing lights in its eyes. This is supplemented by body movements, sound, voice and a screen to interact with people. Its movements can be scripted, or it can use forms of artificial intelligence for autonomous action and interaction. In interactive mode it employs sensors to capture the voices and faces of human interactants (such as establishing eye contact), feeding them back into its own behaviour. Because these translations have some efficacy, robots like Pepper have come to be judged as quasi-social agents—against standards that are different from those applied to other technological artefacts [23]. Gonsior et al. claim that a robotic face can foster genuinely empathic relationships with humans [24], and Malle et al. argue that a robot’s facial features can influence people’s evaluations of its ‘moral’ decisions, but not in the same way people judge human decisions [25]. These aesthetic and ethical assessments focus not on a robot’s essential character, but on how and what it can perform.

Understandings of Face Perception in the Sciences

Like the white wall/black hole faciality machine, scientific accounts of face perception have pointed to the independence of perception of identity, emotion and spatiality. This section provides a brief cultural history of the development of these scientific understandings of face perception [26], stressing the often accidental, socially situated and technologically mediated production of this knowledge. Some of the first insights into face perception came from doctors observing people with brain injuries or ‘abnormalities’ to gain insight into ‘normal’ functioning. For example, people with ‘face blindness’, or prosopagnosia, find it very hard to discriminate between faces and cannot reliably recognise even people they know well. On the other hand, many people with autism, psychopathy or borderline personality disorder [27] have difficulties in recognising facial expressions and empathising with the emotional states of others.

The modern understanding of facial expressions and the emotions associated with them began not only in anatomy and neurology, but also in experiments with photography and electricity. Guillaume-Benjamin Duchenne, a 19th century French neurologist, conducted electrophysiological experiments by probing subjects’ faces with electrical wires to stimulate and animate their facial muscles [28]. He would photographically capture facial expressions that are otherwise difficult for people to simulate. He explained that his approach could capture involuntary and fleeting movements that otherwise were motivated only by the soul. He claimed to pinpoint the facial muscles associated with attention, aggression, pain, lasciviousness, sadness, weeping, whimpering, fright and terror. Duchenne’s work influenced Charles Darwin’s [29] The Expression of the Emotions in Man and Animals, and has been rediscovered since the 1980s [18]. Interestingly, Duchenne saw his work as valuable not only for scientific understanding, but also for use in painting and sculpture, which would benefit from his ‘orthography of facial expression in movement’ [28].

Another way scientists have approached faces has been to measure and even stimulate electrical activity in human and animal brains by placing electrodes on the head, or even directly on the brain. Neurosurgeons performing brain surgery for treating epilepsy, for example, have measured the activity in different brain regions to identify the nerves activated when the patient recognises a face [26]. A Stanford neurologist even tried directly stimulating that brain region, whereupon the conscious patient reported seeing a dramatic transformation in the appearance of the doctor’s face. The patient reported: ‘Your nose got saggy, went to the left. You almost looked like somebody I’d seen before, somebody different.’ [30] There is something quite material and clinical about this understanding of face perception. It calls attention to the contingency and fallibility of human perception. In a different field altogether it is apparent that face perception has a neurochemical aspect, with oxytocin being implicated in face processing [31]. The combination of these observations suggests that the capacity to recognise people’s identity and the ability to sense what they are feeling are associated with different regions and operations of the brain.

The most detailed knowledge about face perception, though, has come from functional brain imaging—using fMRI to measure blood flows associated with brain activities while people are observing faces. This confirms that face perception involves activity across multiple regions of the brain [26]. This approach shows that the neural pathways involved in recognising individuals are quite distinct from those that recognise facial expressions. This seems to echo Deleuze and Guattari’s [4] black hole/white wall system, as well as correlating with the findings from direct brain stimulation. fMRI shows that recognising individuals proceeds through a stage of initial recognition, followed by establishing spatiotemporal associations and personal biography, and finally, remembering that person’s name. There seems to be an almost unlimited capacity to recognise faces, only slightly affected by facial expression, lighting or angle of view. However, personally familiar faces are processed in a more robust manner than the faces of strangers—a brain phenomenon that might be traced to social relations such as xenophobia and celebrity [26].

Recognising facial expressions (rather than identifying faces), on the other hand, is independent of the neural mechanisms of recognising identity. It exhibits brain activity more closely associated with emotion [26]. Observing the facial expressions of another person often involves identifying or empathising with them by recognising what their face reveals about their internal life. If you see the facial expression of somebody grimacing in pain, or expressing delighted surprise, you are likely to respond to these expressions with your own feelings, particularly in the context of other social cues. Another region of the brain, more associated with spatial awareness, comes into play in social contexts for understanding other people’s direction of attention.

In psychology, among the most influential recent advocates of the biological basis of facial expressions and emotions is Ekman [7], who identified six basic emotions:

anger, fear, sadness, enjoyment, disgust, and surprise. I will also raise the possibility that contempt, shame, guilt, embarrassment, and awe may also be found…[7: 170]

Many social robotics groups have developed facial robots that can perform these facial expressions and found that participants can reliably recognise the associated emotions. For example, Bennett and Šabanović [32] use Ekman’s typology of six basic emotions in their design of an expressive robotic face. They sought to ‘create culture-neutral models of robots and affective interaction’ (p. 272). Reyes et al. [33] developed a minimalist face they called GolemX-1, and showed that the most recognisable emotion is anger. One might ask whether the experience of empathy in facing an angry face is to share that anger, or, depending on context, to feel fear, camaraderie or even amusement. In another experiment, Mirnig et al. [14] conducted an online survey in which participants watched short movie clips of expressive robotic heads and were asked to identify the expressions shown, with clips of humans displaying the same four emotions as a control. They found that the expression of ‘surprise’ was interpreted in both negative and positive ways. This study acknowledges the importance of situational context in interpreting emotions. Emotional expressions are therefore likely to be more effectively communicated when situated within a context, narrative, staging and other metacommunicative framing [34].
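The translation of an Ekman-style typology into the pose of a symbolic robot face can be sketched as a simple lookup from emotion label to actuator targets. The parameter names and values below are entirely hypothetical, offered only to illustrate the design pattern; they are not drawn from any of the systems cited above.

```python
# Hypothetical mapping from Ekman's six basic emotions to abstract
# facial-feature parameters for a symbolic robot face. Parameter names
# (brow_angle, mouth_curve, eye_open) and all values are illustrative.
EXPRESSIONS = {
    "happiness": {"brow_angle": 0.1,  "mouth_curve": 0.9,  "eye_open": 0.7},
    "sadness":   {"brow_angle": 0.4,  "mouth_curve": -0.6, "eye_open": 0.4},
    "anger":     {"brow_angle": -0.8, "mouth_curve": -0.4, "eye_open": 0.9},
    "surprise":  {"brow_angle": 0.7,  "mouth_curve": 0.2,  "eye_open": 1.0},
    "fear":      {"brow_angle": 0.6,  "mouth_curve": -0.3, "eye_open": 1.0},
    "disgust":   {"brow_angle": -0.3, "mouth_curve": -0.7, "eye_open": 0.5},
}

def set_expression(emotion: str) -> dict:
    """Return actuator targets for a named basic emotion."""
    try:
        return EXPRESSIONS[emotion]
    except KeyError:
        raise ValueError(f"unknown emotion: {emotion}")
```

A fixed table of this kind embodies exactly the ‘culture-neutral’ assumption discussed above: it encodes no situational context, narrative or framing.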

On the other side of the interaction, roboticists design robots themselves to sense and interpret human faces. Drawing on work in biometric facial recognition, roboticists have built robots that can recognise individuals and detect emotions. For example, Hanheide et al. designed a robot for a particular social situation in which it would be introduced to guests at a party [35]. The robot first detected the face, performed feature extraction, and then applied classifications and stored a record for future reference. The goal was to distinguish between the guests and to behave differently towards each. In designing this situation, though, the facial recognition problem was inflected through the need to follow social codes and etiquette.
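The detect–extract–classify–store loop described above can be sketched schematically. The feature vectors, distance threshold and naming scheme below are stand-ins (a real system would use learned face embeddings); this is an illustration of the enrol-or-recognise logic, not of Hanheide et al.’s implementation.

```python
import math

# Schematic sketch of an enrol-or-recognise loop. A feature vector here
# stands in for a real face embedding; threshold and data are illustrative.
known_guests = {}  # name -> stored feature vector

def match_or_enrol(features, name_hint=None, threshold=0.6):
    """Return the name of the closest known guest, or enrol a new one."""
    best_name, best_dist = None, float("inf")
    for name, stored in known_guests.items():
        dist = math.dist(features, stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_name is not None and best_dist < threshold:
        return best_name                      # seen before: greet by name
    new_name = name_hint or f"guest_{len(known_guests) + 1}"
    known_guests[new_name] = features         # first meeting: store a record
    return new_name
```

Even in this toy form, the social stakes are visible: the threshold determines who the robot treats as a stranger, which is precisely where etiquette and social codes enter the engineering problem.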

Face recognition and emotion detection allow computers to create statistical measures of perceived identity and affect. Microsoft Azure offers ‘cognitive services’ that it claims can discriminate face attributes including ‘age, emotion, gender, pose, smile and facial hair, along with 27 landmarks for each face in the image’ [36]. Expressed in confidence measures, the algorithms look for similarities among multiple images to identify which are most likely to be the same person. Microsoft also claims to be able to identify emotions ‘such as anger, contempt, disgust, fear, happiness, neutral, sadness and surprise’ [36]. However, what is not clear is how these data should be mobilised in actual interactions between people and machines. Similar questions might be asked about affect generation, such as when a robot changes facial expression to simulate an interior life [37]. Operating primarily with reference to algorithmic functions, these approaches are not necessarily sensitive to the social, aesthetic and ethical implications of these technical functions.
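Services of this kind typically return a set of per-emotion confidence scores for each detected face. The scores below are invented for illustration (not real API output), and the helper shows one naive way such output might be consumed; what a robot should then do with the resulting label is exactly the open design question raised above.

```python
# Invented example of the per-face confidence scores such services return.
scores = {"anger": 0.01, "contempt": 0.02, "disgust": 0.01,
          "fear": 0.03, "happiness": 0.72, "neutral": 0.15,
          "sadness": 0.02, "surprise": 0.04}

def dominant_emotion(scores, min_confidence=0.5):
    """Pick the top-scoring emotion label, or None if nothing is confident.

    Collapsing a distribution to a single label discards exactly the
    ambiguity (e.g. 'surprise' read as positive or negative) that the
    studies above found to matter."""
    label, value = max(scores.items(), key=lambda kv: kv[1])
    return label if value >= min_confidence else None
```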

Faces in Art

If scientific approaches can explain some of the biological bases for facial expression and perception, they tend to lack attention to the phenomenological, aesthetic, axiological and cultural dimensions of facial identity and expression. It is in these that perceptions of qualities of virtue are mobilised. Therefore, we argue that the work in the arts, humanities and social sciences becomes critical for understanding how faces work socially. For Deleuze and Guattari [21], art creates knowledge about faces by creating ‘blocs of sensation’ that generate percepts and affects that invoke the human face. That is, art stimulates the senses in a way that affects those who witness it—in this case, the sensations and feelings that provide the effect of seeing faces. The face is among the most common subjects of visual arts—portraits, busts, cinematic close-ups and so on—that evoke recognition, expression and emotional connection. Where biological sciences tend to seek universal knowledges about faces, art practice and history explore stylisations of faces in cultural and geographical locations, historical moments, genres, art movements and individual artists. Beyond the formal art world there are also many vernacular and professional arts of faces in waxworks, caricatures, comics, advertising, selfies, celebrity and so on. Some practices put faces into movement: puppetry, ventriloquism, automata, cinema, hand-drawn animation, computer animation and computer games. Each of these practices is marked by the properties of its materials, the processes of its production and exhibition, and its institutional locations. Each is informed by the abstract machine of faciality, whether by subtractive operations of carving holes and lines in marble, additive layering of paint on canvas, manipulated impressions in clay, or photographically captured or computer-generated images displayed on screens.

Artefactual faces emerge from particular cultural and historical milieux and are often associated with ethical judgements. Mid-twentieth century anthropologists of art documented the decorated helmets of the indigenous people of the Pacific Northwest of North America that featured grotesque faces—angry, battle-scarred and masculine—to intimidate enemies in war, or to embolden the aggressor [38]. The war face is also mobilised in the paintings on the nose cones of bombers in the Second World War, again with sharp, exposed teeth and aggressively open alert eyes. These war faces might actually signify the virtues of heroism and commitment to the community, but the war faces in Picasso’s ‘Guernica’ invert the meaning to present the contorted and horrified faces of humans and animals in battle, the virtuous innocent victims of atrocity.

In another context, mediaeval sculptors portrayed the blessed face—a face with little identity or expression—giving priority to performances of religious virtue. The face lacks the clear likeness of any individual, lacks any emotional expression that would signify passion or sin, and lacks a sense that it is paying attention to anything worldly. The blessed face in these scenes is contrasted with the writhing bodies and contorted faces of sinners, and the laughing face of the devil.

In the Renaissance, with a co-emergence of early modern science and art, attention to anatomical detail as well as refined artistic technique framed the face as biologically, aesthetically and spiritually sublime. Da Vinci’s artistic and anatomical studies of the human body include studies of the face and representations of beauty and ugliness, and the patterns of differences that constitute specific identity [10]. His notebooks break down the face in exhaustive detail into distinctive components and their mathematical relations in ways that anticipate computer face recognition.

Noses are of 10 types: straight, bulbous, hollow, prominent above or below the middle, aquiline, regular, flat, round or pointed. These hold good as to profile. In full face they are of 11 types: these are equal, thick in the middle, thin in the middle, with the tip thick and the root narrow, or narrow at the tip and wide at the root; with the nostrils wide or narrow, high or low, and the opening wide or hidden by the point [10]

Sculpture and painting in the 20th century sought a reality beyond a life-like physical representation of the face. Rather, artists sought to capture abstracted faces as they are experienced with emotion and in social interaction. This involved a constant negotiation between grasping detail and the whole, aiming towards a reality beyond mere anatomical representation. For example, Picasso’s cubist paintings are well known for depicting multiple perspectives at once. Archipenko’s sculpture ‘The Head’ (1913) assembles broken pieces and details of a face into a head. Henri Laurens’ work on paper, ‘Man with Pipe’ (1919), again includes detailed planes of the head in layered textures that together present the whole of the portrait. Such works aimed at depicting the whole and the detail as they are experienced in everyday facial interaction. Although these works may not appear biologically realistic, they aim to create the impression of gestures and reveal key aspects of the face.

Giacometti’s bronze sculpture ‘The Cube’ (1932) sought to take this a step further, focusing on the everyday social interaction we experience in face-to-face conversation. ‘The Cube’ is in fact not a cube, but an irregular polyhedron with 12 faces; the surface of a cube or polyhedron is itself often described metaphorically as a ‘face’. Giacometti describes the meaning of the sculpture as something that is buried mentally, although he does refer to the object as a head [40]. The artist likens the struggle of realistic representation to a facial interaction, explaining:

if I look at you face to face, I forget the profile, if I look at the profile, I forget the face. Everything becomes discontinuous. That is the fact of the matter. I never again succeed in grasping the whole. [41]

Giacometti’s work focuses on the practices of looking related to faciality. The face here is a representation of interactive experience; however, the face also appears mute, without any indication of eyes or detailed features. The faces of ‘The Cube’ are blank, so the distinctions between white walls and black holes are absent. Because of this absence, viewers see the sculpture first as part of a ‘body without organs’ [4], while facial identity and expression remain ambiguous and available to the imagination of the viewer. This is a concept, then, of the body, not one of part objects but of differential speeds, ‘where everything becomes discontinuous’ (Giacometti in [40]), because of the movement in facial social interactions. Giacometti investigates social interaction by abstracting the form of the face; however, the concept of the abstract machine depends on an abstraction of the face as separate from the head and body.

The practices of representation, abstraction and ethics in art provide aesthetic resources that can be referenced and translated into the practice of designing robots. Art criticism, cultural studies and visual analysis provide concepts, vocabularies and critical practices with which to evaluate robot faces without resorting to positivist experimentation. This contributes to technological advancement by situating engineering and design practice in cultural and historical context.

A Visual Analysis of Robot Faces

The first part of this paper touched on epistemologically disparate but peculiarly connected philosophical, scientific and artistic knowledges of the face. Shaped by this knowledge we continue here with a visual analysis of our sample of robot faces. As part of this analysis we can ask how values such as virtue might be ascribed to a robot. There is no universally agreed upon classification for robotic faces. Kishi et al. [13] identify two approaches to robotic heads: the humanlike and the symbolic. The humanlike has skin animated by actuators under the skin that deform the face to simulate expressions. Symbolic faces animate features such as eyebrows, against a solid background. They argue that ‘facial expressions are essential’. Fong et al.’s [42] classification of robots includes anthropomorphic, zoomorphic, caricatured and functional robots. They argue that robots ‘can convey information, including their inner states, more intuitively to humans through facial expressions than words’ [42: 572]. In another discipline altogether, McCloud’s [16] classification of drawn comic book characters is also particularly useful, distinguishing between realistic, iconic and abstract drawings of faces. Realistic comic book figures are drawn in close detail with shading and lighting effects; iconic characters are simplified and symbolic; abstracted characters break into non-representational shapes and primitives. These three stylistic choices are oriented respectively towards perceived reality, the iconicity of written language, and the shapes, lines and colours of the picture plane. Another useful and related approach comes from visual social semiotics [43], which defines ‘modality’ as the degree of reality with which something is signified. A photograph has a high degree of modality and creates the impression of transparency and the authority of truthfulness. A child’s drawing has lower modality and is less trustworthy or believable.
Bearing these concepts in mind, we propose a provisional classification that includes realistic, symbolic, blank, tech and screen faces.

Using Visual Analysis to Evaluate Robot Faces

Our research methodology in this section is visual analysis, an approach that is sensitive to artistic stylisation, but also to the culturally specific meanings of images and objects. We take a convenience sample of still images and videos of the faces of both fictional and actual social robots, as well as observations of robots in the field (Sydney and Brisbane, Australia; Seoul, Korea; Kobe, Japan and Madrid, Spain). Our visual analysis approach combines social semiotics, art criticism, cultural studies and visual culture. In this sample we can find more or less clear markers of individual or collective identity, affect and emotional expression, and intersubjective spatial relations. We also analyse the philosophical and conceptual implications of each category of robot face, and evaluate the way that virtue is mobilised.

We include both actual social robots in engineering and fictional robots in popular culture in our sample because fictional robots provide archetypes and templates for the design of, and experience with, actual social robots [44]. Even if these impressions mislead people about social robots’ actual current capabilities, there is no doubt about their influence [45]. Fictional robots contribute to the cultural imaginary around robots and to the discourse about robot ethics. Some texts present melodramatic characterisations: virtuous robots such as Wall-E, or Baymax in Big Hero 6, and bad robots such as Skynet’s Terminators, or Megatron in Transformers. Other texts, such as Her (2013), Ex Machina (2014), Humans (2015) and Westworld (2016), grant robots an internal life and their own ethical values. While social robotics may borrow from popular culture, we recognise that roboticists must negotiate the enormous engineering and design challenges of creating fully functioning, self-contained interactive artefacts with sensors, actuators and processors. The capabilities of contemporary humanoid robots are only very loosely human-like. However, social robots will ultimately be judged by interactants in terms of their performance as convincing, entertaining, communicative and even virtuous social actors.

From our sample of fictional and engineered robots we identified some broad but often overlapping categories of robot faces that we will analyse. There are the naturalistic faces on humanlike robots or androids, like the realistic Geminoid F by Hiroshi Ishiguro (Fig. 1) or the fictional replicants in Blade Runner (Fig. 2), that seem to challenge the distinction between machine and human: the realistic faces. Other robots have symbolic faces with moving facial features that re-arrange themselves to express emotional states, such as MIT Media Lab’s Kismet [46], Nexi [47] (Fig. 3) and MiRAE [12]. There are robots with shiny, minimalistic and futuristic faces without features or expression, such as Honda’s Asimo (Fig. 4) or the robot in the film Robot and Frank: the blank faces. Robots like the fictional C3PO, or the actual robots Robovie R3, Nao and Pepper (Fig. 5), express themselves only in basic movements of lights, head and body, so can be considered mask faces. There are robots whose abstracted visages are made up of a bricolage of electronic and machine components, such as Willow Garage’s real PR2 (Fig. 6), the fictional Robby the Robot (Fig. 7), or Nam June Paik and Shuya Abe’s artwork Robot K-456 (1964): the tech faces. Some robots even lack heads or obvious faces but still appear lively, such as Star Wars’ R2D2 or the hospital porter robot Tug. Finally, some robot faces appear as animated projections or images on LCD panels, such as Stelarc’s Prosthetic Head (2002) (Fig. 8), FURO by the Korean company Future Robot (Fig. 9) and the toy Cozmo: the screen faces.

Fig. 1

Geminoid F by Hiroshi Ishiguro. An example of the realistic face. Image: Chris Chesher

Fig. 2

Roy Batty. Blade Runner

Our classification of robot types emerged from observing our sample of robot faces, but we found that robots can belong simultaneously in multiple categories. We also considered how these types might sit within McCloud’s distinctions of realistic, iconic and abstract faces. We arranged this information into a table (Table 1).

Table 1 Robot face categories, their implications, examples and features

Categories of Robot Faces

Realistic Face

The realistic or humanlike face is anthropomorphic and of high modality. To create it in a social robot requires some degree of anatomical knowledge, engineering skill and some stagecraft. In philosophical terms, it challenges the viewer to imagine a world in which robots might become indistinguishable from actual humans. Humanlike robots draw human exceptionalism into question [48], and also allow people to attribute to them human values such as virtuousness. In this boundary-challenging they can create feelings of uncanniness or even horror. They blur the perceptual and conceptual boundaries between the technological and the human, and suggest an ethics of direct equivalence, in which the robot’s rights and responsibilities should be evaluated in the same terms as people’s; hence the myth of robotic rights. Realistic robots are necessarily marked by gender and ethnicity, as is apparent in the female robots Bina48 (African American), Geminoid F (Japanese) (Fig. 1) and Sophia (white European).

Humanlike robots are a mainstay of science fiction films and television, creating narratives in worlds in which the natural place of the human is threatened by the emergence of artificial humans. Hollywood almost always presents humanist narratives. In the TV series Humans (2015, 2017, 2018) and in Westworld (2016, 2018), the narrative centres on the prospect that the technologically produced, deterministic and instrumentalised robot might somehow achieve sentience and human-like agency. On the other hand, Westworld suggests that human lives may themselves be pre-determined. Robots often desire to become more human, such as David in AI: Artificial Intelligence (2001), Andrew Martin in Bicentennial Man (1999) and the replicants in Blade Runner (1982).

Actual humanlike robots have been developed in a range of institutional contexts: entertainment, the military, academia and the sex industry. Disney was a pioneer in animatronics, debuting an Abraham Lincoln audio-animatronic figure that performed in the show ‘Great Moments with Mr Lincoln’ at the Illinois pavilion of the 1964 New York World’s Fair, before moving to Disneyland. The animatronic Abe, with suitable gravitas, was powered by hydraulics, programmed with audio tape, and could raise its eyebrows and move its lips and mouth in accompaniment to an actor’s speaking voice. In this way animatronics were an extension of the practice of cinematic animation. Abe is a celebrated symbol of political virtue for his role in the abolition of slavery. A more recent and more sophisticated Abe has been developed for Disney by Garner Holt Productions, based on their work on humanoid robots for training soldiers in the US military. Their Abraham Lincoln animatronic has 45 actuators in the head, supporting a much more expressive face capable of hyper-realistic performances.

Among other well-known creators of ‘humanlike’ [13] android faces, outside the entertainment industry, is Hiroshi Ishiguro at Osaka University. He is explicit in his goal of creating realistic robots to explore the boundaries between the living and the non-living. He has created ‘Geminoid’ androids modelled on himself and others, employing multiple pneumatic actuators under the silicone skin, which becomes deformed to generate somewhat naturalistic facial expressions. Along with deliberate movements, these robots exhibit apparently involuntary twitches and blinking that suggest affect and lived experience. Ishiguro and his collaborators have even claimed that social robots establish a new ontological category for non-living things that can be attributed ‘mental states, sociality and… moral regard’ [49].

Another humanlike media darling is Sophia, a humanoid torso robot developed by Hanson Robotics, a privately held, venture-capital-backed company founded in Texas. It is capable of displaying 50 pre-programmed facial expressions that represent human emotional states. Her developer, David Hanson, describes her as:

a tool for science in studying human-to-human interaction, and she’s now a platform for allowing AI to express natural-like human emotional state(s), which is something we’re developing. True emotive AI.

Sophia is able to imitate human gestures and facial expressions as well as answer pre-prepared questions on predefined topics. Sophia’s face is constructed of Frubber (facial rubber), which its designers claim makes her skin appear supple and humanlike [50]. Sophia features black holes in the white wall in the details of her blinking eyes and wrinkled nose, and her eyes appear to interact with her facial and head movements. Her realness, however, is balanced by the exposed ‘tech head’ workings at the back of her head. Lacking hair, she becomes slightly more iconic and abstract, and of lower modality.

‘Human-like’ robots have also emerged in another commodity form: the sex doll [51]. ‘Harmony’ by RealBotix features an ‘animagnetic head system’ but has only 12 actuators in her face. Harmony’s designers aimed to detail the white spaces of her face as much as possible: we observe blinking and soft, subtle eyebrow movement. This contrasts with her mouth movements, which are jarring and stuttering. The lack of expressiveness in Harmony’s face could be likened to human facial movement after botox (Fig. 3).

Fig. 3

Nexi, MIT. An example of the symbolic face. “Nexi” by failing_angel is licensed under CC BY-NC-SA 2.0

Symbolic Face

Another strategy for designing an expressive robot, as discussed earlier, is to abstract its face into simple components (moving shapes, lines and colours) and to invoke the ideal form of facial expressions. In McCloud’s categorisation this is a more iconic face. Drawing on scientific principles such as Ekman’s [7] universal classification of facial expressions, which supposedly align directly with universally felt emotions, these robots invoke the spatial relationships of moving black holes distributed across simplified (or even absent) white walls. In this way the transcendent mathematical relations that form the expressive face supersede the fleshiness and implied mortality of the humanlike robot. With reduced representational modality, the symbolic face can invoke a higher-order universal truth: optimal expressive shapes and movements that transcend individual or collective differences. It abandons referencing any actual human face in favour of modelling icons of emotional variability. Many symbolic faces are not marked by ethnicity but tend to suggest or even exaggerate unmarked whiteness.

The interactive robot head, Kismet, created by Cynthia Breazeal at the MIT Media Lab in the 1990s, was one of the most influential early symbolic social robots designed to recognise and simulate emotions. Kismet was a robotic face with four key expressive moving elements: eyes, eyebrows, lips and ears. Breazeal’s aim was to simulate a relationship between a child-like robot and a human caregiver through sensing faces and sounds and moving the head and arranging facial elements to suggest emotional expression. The robot’s shifting gaze and moving head change the spatial relations between participant and robot [52]. The design was deliberately abstracted from a realistic face. For example, the ear movements seemed to mimic the movements of human hands or arms in social interaction. Modelled iconically as an infant, Kismet had an innocence prior to assessments of virtue.

Kismet was followed at MIT by Nexi, a mobile, dexterous, social robot (Fig. 3). Its face had black holes for animated, blinking eyes, a hinged mouth, and eyebrows that moved across the white wall of its forehead. Nexi’s face was closer to a childlike human face, but revealed exposed electronics characteristic of the tech face. For Nexi, the voice and the movement of facial features were most important in interactive expression, rather than morphological realism. In demo videos Nexi uses the tilt of the head to make the interaction more engaging and life-like, with very simple eye and mouth movements mimicking rather than duplicating human gesture, again like animation. Nexi’s voice is important to the social interaction: its smoothness suggests fluid interaction, conveying emotion through intonation. Again, emotional facial performances become more compelling when combined with stagecraft, interaction and narrative.

Blank Face and Mask Face

While a robot with a blank face lacks distinguishable facial features, its cool presence allows the user to project their own imagined face onto it. Like Giacometti’s cube, it relies on the participant or audience to complete the face. In the face’s absence, the shape and movement of the head and body express affect and identity. Honda’s flagship robot Asimo (Fig. 4), introduced in 2000, had a blank face hidden behind a reflective black visor. The astronaut-like figure foregrounded the robot’s movement while suggesting space-age innovation. This allowed viewers to make their own readings of identity and affective expression. In later versions there was some concession towards offering a tech face, with faintly visible wide eyes and a broad smile.

Fig. 4

Honda’s Asimo is an example of the blank face. Photo by Franck V. on Unsplash

Closely related to the blank face is the mask face. In cinema, C3PO in Star Wars (1977) has a mask face fixed in an expression of alarm, but moves his body and speaks in a frantic and neurotic manner to establish character and emotion. Iron Man, who is human on the inside, overcompensates for his inhuman appearance with a wise-cracking voice performance. Softbank’s Pepper (2014) (Fig. 5) has large prominent eyes that change colour to indicate it is listening (blue), processing (green) or not listening (white or pale blue). It responds to sound cues, moves its head and attempts to hold eye contact, but otherwise has minimal facial features [53]. Other robots are even simpler. Jimmy the robot, for example, is a 3D-printable, knee-high, open source walking companion created by Intel [54]. Jimmy’s face sits between abstract and iconic in McCloud’s triangle of facial representation and has no changing facial expressions. The face is suggested by the head shape and inset eyes. Its awkward and cartoonish movements are important here, as the moving head, arms and hands engage in interaction with the viewer. In fact, his place on McCloud’s triangle is ambiguous: even though he is not as realistic as the Geminoid or Harmony, Jimmy’s body movement makes the robot seem more animated.

Fig. 5

Pepper by Softbank has a mask face. Image: Chris Chesher

Tech Face

The tech face features electromechanical components arranged to form or suggest a face. This style of robot face makes a fetish of technology come to life. The classic robot face unashamedly presents itself as a technological entity, with abstracted parts that only roughly reference the human face. Like a technological Frankenstein’s monster, it is a sublime bricolage of heterogeneous components. Westinghouse’s Elektro (1937) had an under-sized bronze head with welded features and a hole for a mouth that could speak and smoke. Robby the Robot from Forbidden Planet (1956) (Fig. 7) provided the archetype of the tech head, reprised in Lost in Space and in a million toy robots. Willow Garage’s PR2 robot (2010) (Fig. 6) is a standard robotic platform for research. Its face is suggested by multiple lenses in the position of eyes and a tilting planar laser scanner for a mouth. While this configuration is largely instrumental, it cannot help being read as a face.

Fig. 6

PR2 by Willow Garage. Example of tech face. Image courtesy of Willow Garage

Fig. 7

Robby the Robot (model: the Computer Museum, Madrid) from Forbidden Planet. Image: Chris Chesher

Some of the most affecting robot faces also appear in art. Nam June Paik and Shuya Abe’s humorous and ironic artwork Robot K-456 (1964) comprised precariously assembled found parts of gadgets. Its eyes were spinning toy fans, and its mouth was a round, black loudspeaker playing a recording of John F. Kennedy’s inaugural address [55]. The artists took the robot into the streets of New York and documented people’s reactions. In another context, there is a genre of internet memes that invokes a hybrid of the realistic face and the tech face by using Photoshop to insert robotic componentry into the faces of attractive models or celebrities.

Another, quite confronting, artwork, ‘Female figure’ (2014) by Jordan Wolfson, pushes the tech face/realistic face/mask face into an ambiguous expression, challenging dominant values of femininity. A hypersexualised female figure dances, moving its arms from exposed metal joints and wiggling her hips while attached through her abdomen to a large horizontal silver pole fixed to a full-length mirror. The figure wears white gloves, a white leotard, thigh-high boots and a long blond wig, but the top of her face is obscured by a monstrous green mask, and the body is soiled in places with black dirt or oil. This female figure is disconcerting, indeed more disturbingly present than the Geminoid. The stereotypical femininity of the long blonde hair, closely synchronised red lips and writhing body contrasts with a rough masculine voice and the grotesque green mask. The dirt introduces a griminess that deteriorates the clean lines we associate with technology, pointing to the limitations of stereotypical representation. Essentially, ‘Female figure’ plays with iconography and representation to comment on stereotypical representation. The movements, particularly of the arms, are central to the work’s visual language, and it is through these signs that the robotic interaction with the human gaze takes place (Figs. 8, 9).

Fig. 8

Stelarc: an example of the screen face in art. “NIME2010” by Kirsty Komuso is licensed under CC BY-NC-ND 2.0

Fig. 9

FURO by FutureRobot. An example of the screen face. Image: Jason Woodhead, CC attribution licence

Screen Face

Robots with moving animated faces on a screen have the full repertoire of moving image technology but lack the materiality of physical engineering. The Korean robot FURO from FutureRobot (Fig. 9) features a head with an animated face on a screen behind the visor of the robot’s helmet, with its facial expressions synchronised with the robot’s speech and head movements. This cartoon face draws on Asian-style animation and cute culture, with large eyes and animated mouth. The robot head and face can track and follow the movement of the robot’s user, while much of the interaction is through the large screen underneath the face.

The artist Stelarc exhibited a work called Prosthetic Head (Fig. 8), which is sometimes mounted on a robot arm. The audience interacts with a digital replica of the artist’s face in conversation using a mouse and keyboard. According to Stelarc’s website, the 3D avatar head, which resembles the artist, features

real time lip-synching, speech synthesis and facial expressions. The head nods, head tilts and head turns as well as changing eye gaze contribute to the personality of the agent and the non-verbal cues it can provide. It is described as a conversational system which can be said to be only as intelligent as the person who is interrogating it. [56]

Like many commercial animated chatbots, the face provides a visual correlative to the disembodied voice, invoking the abstract machine of faciality to provide a sense of presence to the inhuman chat. Stelarc is playful in constituting himself as a self-reflexive screen/mechanical persona with an arm, a face, a voice and a sense of humour.

The screen face is also familiar from animation and computer games. The original Macintosh start-up screen featured an iconic smiling computer. Doom featured the weathered face of the player avatar beneath the game’s first-person view. However, designers of screen faces need to avoid zooming, cutting or changing lighting, unless it is for humorous effect. The screen face needs to sustain the object permanence of an animated face, which would be disturbed by the shot-countershot structure of classic cinema or the intrusion of a cursor.


Drawing from our visual analysis of these robot types, and making transverse connections between philosophy, science and art we can propose the following provocations about robot faces as assemblages:

  1. A robot’s face establishes a robot’s identity as a quasi-subject;

  2. A robot’s face and body perform affective and emotional expression;

  3. A robot’s face marks (or unmarks) distinct stylistic, technical, gendered and racialised identities that situate it in a cultural and historical milieu;

  4. Robots can be read simultaneously as subjects, gadgets, material artefacts, artistic works, performers and parts of a technological system;

  5. Human–robot interaction can be experienced as a collaborative performance with framing of context, narrative, sound, speech and image.

Each of the categories of robot face we have identified establishes identity, expression, spatial relations and ethics in a different way. A humanlike robot proposes an imaginary equivalence between human and robot, and therefore the possibility of judging it ethically in human terms. The symbolic face, which performs facial expressions as abstracted animated primitives, is not realistic but maps what Ekman [7] would claim are universal emotions. However, an expression of surprise is not universal, as it will be read on the basis of its performance, contexts and associated narratives. The blank face underplays identity and emotion, privileging the robot’s body and leaving viewers to imagine a face. The mask face fixes the identity and affective orientation of the robot. The tech face makes a fetish of technology itself, often in a utopian or dystopian frame. The screen face returns to a technological lineage of screen animation traceable to the magic lantern, but loses the materiality of a physical face.

Our visual analysis suggests that robot faces of different types mobilise different repertoires of performance of faciality. Faces are always about individual and group identity—such as gender and race—that can be effaced or exaggerated in a robot. Robots are sometimes positioned in diminutive terms—as objects of the gaze, loyal assistants, sex objects or even freak shows—that model inequitable social relations. If robots are to find places as educators, entertainers, assistants, carers or other quasi-social roles, there are more than engineering questions in play. It is in the interplay, and not necessarily cooperation, between art, science and philosophy that the places for social robots will be negotiated.


References

1. Dautenhahn K (2007) Methodology and themes of human–robot interaction: a growing research field. Int J Adv Robot Syst 4:103–108
2. Bethel CL, Murphy RR (2010) Review of human studies methods in HRI and recommendations. Int J Soc Robot 2:347–359
3. Blackwell AF (2015) HCI as an inter-discipline. Conf Hum Factors Comput Syst Proc 18:503–512
4. Deleuze G, Guattari F, Massumi B (2004) A thousand plateaus. Continuum International Publishing Group, New York
5. Dobson SD, Sherwood CC (2011) Correlated evolution of brain regions involved in producing and processing facial expressions in anthropoid primates. Biol Lett 7:86–88
6. Pascalis O, Kelly DJ (2009) The origins of face processing in humans: phylogeny and ontogeny. Perspect Psychol Sci 4:200–209
7. Ekman P (1992) An argument for basic emotions. Cogn Emot 6:169–200
8. Barrett R, Katsikitis M (2003) Foreign faces: a voyage to the land of EEPICA. In: Katsikitis M (ed) The human face: measurement and meaning. Springer, Berlin, pp 1063–1065
9. Argyle M (1973) Social interaction. Transaction Publishers, Piscataway
10. McMahon AP, MacCurdy E (2006) The notebooks of Leonardo da Vinci. Parnassus 10:33
11. Little CT, Stein WA (2006) The face in medieval sculpture. In: Timeline of art history. Accessed 28 Aug 2019
12. Bennett C, Šabanović S (2015) The effects of culture and context on perceptions of robotic facial expressions. Interact Stud Soc Behav Commun Biol Artif Syst 16:272–302
13. Kishi T, Hashimoto K, Takanishi A (2019) Human-like face and head mechanism. In: Goswami A, Vadakkepat P (eds) Humanoid robotics: a reference. Springer, Dordrecht, pp 571–596
14. Mirnig N, Strasser E, Weiss A et al (2015) Can you read my face? A methodological variation for assessing facial expressions of robotic heads. Int J Soc Robot 7:63–76
15. Brave S, Nass C, Hutchinson K (2005) Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent. Int J Hum Comput Stud 62:161–178
16. McCloud S (1993) Understanding comics: the invisible art. Harper Collins, New York
17. Bollmer G (2019) Automation of empathy. Esse Arts + Opinions
18. Munster A, Barker M (2016) The mutable face. In: Hinkson M (ed) Imaging identity. ANU Press, Canberra, pp 101–116
19. Dinnen Z, McBean S (2018) The face as technology. New Form 93:122–137
20. Pearl S (2010) About faces: physiognomy in nineteenth-century Britain. Harvard University Press, Cambridge
21. Deleuze G, Guattari F (1994) What is philosophy? Columbia University Press, New York
22. Latour B (1988) Science in action: how to follow scientists and engineers through society. Harvard University Press, Cambridge
23. de Graaf MMA (2016) An ethical evaluation of human–robot relationships. Int J Soc Robot 8:589–598
24. Gonsior B, Sosnowski S, Mayer C et al (2011) Improving aspects of empathy and subjective performance for HRI through mirroring facial expressions. In: Proceedings of the IEEE international workshop on robot and human interactive communication, pp 350–356
25. Malle BF, Scheutz M, Forlizzi J, Voiklis J (2016) Which robot am I thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In: ACM/IEEE international conference on human–robot interaction, pp 125–132
26. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4:223–233
27. Domes G, Schulze L, Herpertz SC (2009) Emotion recognition in borderline personality disorder: a review of the literature. J Personal Disord 23:6–19
28. Duchenne de Boulogne GB, Cuthbertson RA (1990) The mechanism of human facial expression. Cambridge University Press, Cambridge
29. Darwin C (1872) The expression of the emotions in man and animals. John Murray, London
30. Allday E (2012) Study gives insight into facial recognition. SF Gate, San Francisco
31. Lopatina OL, Komleva YK, Gorina YV et al (2018) Neurobiological aspects of face recognition: the role of oxytocin. Front Behav Neurosci 12:1–11
32. Bennett CC, Šabanović S (2014) Deriving minimal features for human-like facial expressions in robotic faces. Int J Soc Robot 6:367–381
33. Reyes ME, Meza IV, Pineda LA (2019) Robotics facial expression of anger in collaborative human–robot interaction. Int J Adv Robot Syst 16:1–13
34. Chesher C (2012) FURO at robotworld: human–robot metacommunication and media studies. Cultural Studies Association of Australia, Adelaide, pp 1–10
35. Hanheide M, Wrede S, Lang C, Sagerer G (2008) Who am I talking with? A face memory for social robots. In: Proceedings of the IEEE international conference on robotics and automation, pp 3660–3665
36. Microsoft (2019) Facial recognition. In: Microsoft Azure. Accessed 3 Sep 2019
37. Angerer M-L, Bösel B (2016) Total affect control. Digit Cult Soc
38. Boas F (2006) Primitive art. In: Perkins M, Morphy H (eds) The anthropology of art: a reader. Blackwell, Malden, pp 39–55
39. Da Vinci L, Suh H (ed) (2013) Leonardo’s notebooks: writing and art of the great master. Running Press, New York, p 379
40. Didi-Huberman G (2015) The cube and the face: around a sculpture by Alberto Giacometti. Fleischer M, Vogman E (eds), Lillis SB (trans). Diaphanes, Zurich
41. Giacometti A (2015) Cited in Didi-Huberman G, The cube and the face: around a sculpture by Alberto Giacometti. Diaphanes, Zurich, p 15
42. Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42:143–166
43. Kress GR, van Leeuwen T (2006) Reading images: the grammar of visual design. Routledge, London
44. Kriz S, Ferro TD, Damera P, Porter JR (2010) Fictional robots as a data source in HRI research: exploring the link between science fiction and interactional expectations. In: Proceedings of the IEEE international workshop on robot and human interactive communication, pp 458–463
45. Sandoval EB, Mubin O, Obaid M (2014) Human robot interaction and fiction: a contradiction. In: Beetz M, Johnston B, Williams MA (eds) Social robotics. ICSR 2014. Lecture notes in computer science, vol 8755. Springer, Cham
46. Breazeal C (2009) Role of expressive behaviour for robots that learn from people. Philos Trans R Soc B Biol Sci 364:3527–3538
47. Esau N, Kleinjohann L, Kleinjohann B (2006) Emotional communication with the robot head MEXI. In: 9th international conference on control, automation, robotics and vision (ICARCV 2006)
48. Floridi L (2014) Smart, autonomous, and social: robots as challenge to human exceptionalism. In: Seibt J, Hakli R, Nørskov M (eds) Sociable robots and the future of social relations. IOS Press, p 11
49. Kahn PH, Reichert AL, Gary HE et al (2011) The new ontological category hypothesis in human–robot interaction. In: Proceedings of the 6th ACM/IEEE international conference on human–robot interaction (HRI 2011), pp 159–160
50. Wu T, Butko NJ, Ruvulo P et al (2009) Learning to make facial expressions. In: 2009 IEEE 8th international conference on development and learning (ICDL 2009), pp 1–6
51. Andreallo F, Chesher C (2019) Prosthetic soul mates: sex robots as media for companionship. M/C J 22(5). Accessed 17 Feb 2020
52. Suchman L (2011) Subject objects. Fem Theory 12:119–145
53. Softbank (2019) Interacting with Pepper. In: Aldebaran documentation. Accessed 18 Sep 2019
54. Collins K (2014) Meet Jimmy, Intel’s 3D-printed robot for consumers. In: Wired. Accessed 22 Dec 2019
55. Kac E (1997) Foundation and development of robotic art. Art J 56:60–67
56. Stelarc (2003) Stelarc | Prosthetic Head. Accessed 12 Dec 2019


Author information



Corresponding author

Correspondence to Chris Chesher.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.



Cite this article

Chesher, C., Andreallo, F. Robotic Faciality: The Philosophy, Science and Art of Robot Faces. Int J of Soc Robotics 13, 83–96 (2021).



  • Robot face
  • Faciality
  • Social robotics
  • Philosophy
  • Deleuze
  • Guattari
  • Human–robot interaction