
AI & SOCIETY

pp 1–16

One robot doesn’t fit all: aligning social robot appearance and job suitability from a Middle Eastern perspective

  • Jakub Złotowski
  • Ashraf Khalil
  • Salam Abdallah
Student Forum

Abstract

Social robots are expected to take over a significant number of jobs in the coming decades. The present research provides the first systematic evaluation of the occupational suitability of existing social robots based on a classification derived from users’ perceptions of them. The study was conducted in the Middle East, since the views of this region are rarely considered in human–robot interaction research although the region is poised to increasingly adopt robots. Laboratory-based experimental data revealed that a robot’s appearance plays an important role in the perception of its capabilities and in the preference for it to perform a particular job. Participants preferred machine-like robots for dull and dirty occupations, and humanoids, but not androids, for jobs requiring extensive social interaction with humans. However, aspects of appearance other than morphology determine whether a robot is preferred for a job, irrespective of its perceived capability to do it.

Keywords

Social robotics · Human–robot interaction · Jobs · Appearance · Middle East

1 Introduction

Robots are expected to be part of the forthcoming industrial revolution that will change how we live and work. In the past they were used only in factories, but advances in technology have allowed them to infiltrate everyday human environments and engage in natural communication with humans. Although they are still not as common as industrial robots, the number of social robots will only increase in the coming decades. Therefore, it is important to understand how they will affect society and what roles people expect them to have.

The close relationship between robots and work is evident from the etymology of the word robot, which goes back to the 1920s, when it was first used in a play by Karel Čapek. The word is derived from the Czech “robota”, meaning hard, forced labor, such as that of serfs. In science fiction, robots have since been depicted performing various jobs. These include the dangerous, dull and dirty jobs that humans do not want to perform, such as housework (e.g. Rosey in The Jetsons) and sex work (e.g. Zhora and Pris in Blade Runner). However, we can also find examples of robots occupying jobs that require social competencies, such as elderly caretakers (e.g. Robot in Robot & Frank) or detectives (e.g. R. Daneel Olivaw in The Caves of Steel). Some writers even imagine a future where humans do not need to work at all because every job is performed by robots, as in Stanisław Lem’s Return from the Stars.

Some of the jobs performed by robots in science fiction have found their realization in real life. As technology continues to progress, we can expect robots to take on an even greater number and variety of jobs. On the one hand, the introduction of robots to the workforce creates opportunities to improve the quality of work and reduce production costs. On the other hand, it poses social challenges that require attention from the scientific community. The discussion of a future where robots take over jobs from humans is no longer an exclusively academic discourse. It has become a topic of interest to the general public, with several articles in the mass media presenting a vision of a future in which only a few people are able to work (McNeal 2015; Solon 2016).

Previous studies focused on the general public’s preferences regarding jobs that should be performed by robots (Takayama et al. 2008; Ju and Takayama 2011). This is a valuable contribution, as it can guide the design of robotic platforms for the tasks people want them to do. However, these studies evaluated robots’ suitability for jobs without considering their appearance. Yet recent studies show that anthropomorphic (Yogeeswaran et al. 2016) and autonomous (Złotowski et al. 2017) robots are already perceived as especially threatening to human jobs and resources. Therefore, it is important to understand expectations regarding the tasks that robots should perform, as well as their appearance, if they are to remain socially acceptable.

In this paper, we present a study that investigates the relationship between robots’ appearance and the jobs that people prefer them to perform. Since culture is an important factor affecting a society’s choices regarding robot appearance, cultural background has to be considered. The vast majority of studies in the field of human–robot interaction (HRI) take place in the West and Far East. This poses a risk that the developed technology will represent the values of the cultures from which it is derived, which can lead to social rejection in other regions. Therefore, we conducted an empirical study with a sample population in the United Arab Emirates (UAE) to better understand expectations regarding social robots in the Middle East, a region that greatly differs from both the West and the Far East.

2 Background

2.1 What jobs should be taken over by robots?

Up to now, robots have predominantly been used for tasks that are regarded as dull, dangerous and dirty. However, roboticists have proposed the use of robots for roles that are especially humane in nature, such as therapists for autistic children (Dautenhahn and Billard 2002), museum receptionists (Shiomi et al. 2007), school tutors (Tanaka et al. 2007), elderly caretakers (Sorbello et al. 2016; Heerink et al. 2009) and guides in a shopping mall (Kanda et al. 2008). Although improved functionality enables robots to perform these tasks with greater precision, it does not guarantee the social acceptance of these robots. Society shapes technology, but at the same time technology shapes society (Bijker 1993; Sabanovic 2010). The future usage of robots will affect how people work and live (Royakkers and van Est 2015). Therefore, it is important to understand the general public’s expectations and concerns in order to facilitate social acceptance and a positive user experience of robots, especially since experts and non-experts differ in what they believe robots could and should do (Ju and Takayama 2011).

The social dimension of using robotic systems in industry has received some attention (Moniz and Krings 2016), and the use of robots in natural human environments is a heavily studied topic. People have both positive and negative expectations regarding a future with social robots. On the one hand, they expect that robots will decrease the number of casualties and improve work efficiency and convenience. On the other hand, people are concerned about robots’ lack of social skills and about human job loss (de Graaf and Allouch 2016). Dautenhahn et al. (2005) found that people prefer robots to perform household duties rather than serve as childcare assistants. In another survey, people held a neutral to positive impression of robots, which was especially positive toward robots that can perform tasks perceived as dull and dirty (Arras and Cerqui 2005). However, those surveyed did not oppose robots serving as assistants, provided they themselves were no longer able to perform these tasks. Moreover, they did not favor a humanoid appearance for a robot.

People not only have a generally positive attitude toward robots performing dangerous, dull and dirty tasks, but actually prefer such tasks to be performed by robots rather than humans. Hayashi et al. (2010) reported that participants in their study preferred robots over humans for troublesome tasks, even when these tasks required interaction with people. This indicates that under certain circumstances people are more than willing to accept robots taking on jobs in everyday human environments.

Similar motivation can be found behind the survey on the suitability of robots for various jobs conducted by Takayama et al. (2008). In that work, they investigated which occupations robots should be permitted to hold. They included in their survey a variety of jobs going beyond the simplest and least liked jobs and wanted to understand which of them should be performed by robots and which by humans. Robots were preferred for occupations which require memorization, keen perceptual abilities and service orientation. On the other hand, people were preferred for tasks that require artistry, evaluation, judgment and diplomacy.

These studies provide a valuable indication of the tasks for which robots should be developed. However, they either presented a single robot to participants or presented no robot at all, leaving the robot’s appearance to people’s imagination. Since most people are not aware of the latest developments in robotics research, their impression of robots is greatly affected by the media (Złotowski et al. 2015). That has direct consequences for research on the occupational suitability of robots, as participants’ choices can be based on the types of robots and robot jobs that appear in movies. Goetz et al. (2003) and Li et al. (2010) proposed that a robot’s appearance should match the tasks it is designed for. De Graaf and Allouch (2015) surveyed the suitability of different roles for domestic social robots. They found that a functional appearance is preferred for a butler role and a zoomorphic appearance for a companion role. In this paper we focus on existing robotic platforms in order to understand what types of social robots are the most suitable for various occupations.

2.2 People’s expectations regarding robots’ appearance

Nomura et al. (2008) found that people favor humanoid robots over animal-like robots for concrete tasks. In another study, Compleston and Bugmann (2008) asked people what types of tasks should be performed by humanoids used in homes. Respondents proposed housework, food preparation and personal service.

Individual differences play an important role in human choices for robot jobs. Katz and Halpern (2014) found that participants who had negative attitude toward robots were unwilling to accept them for social companionship and surveillance and personal assistant jobs. On the other hand, robot liking positively correlated with social companionship and surveillance occupations.

Oestreicher and Eklundh (2006) investigated people’s impressions of service robots by asking participants to draw them. They found that the drawings were heavily influenced by science fiction movies. Although both humanoid and machine-like robots were drawn, the latter type was predominant. In another study, people who were asked to design a robot that could interact with humans on the street imagined it with a machine-like appearance, but with several human features, such as eyes and a mouth (Foerster et al. 2011).

The appearance of a robot affects not only which tasks people believe it should perform, but also its perceived humanness (DiSalvo et al. 2002), personality and acceptance (Syrdal et al. 2007), and children’s attitude towards it (Dautenhahn and Billard 2002). There are also gender differences in perception of robots. Females have more negative attitudes toward interaction with robots than males (Nomura et al. 2009). Moreover, males anthropomorphize robots more and show more socially desirable responses toward a robot than females (Schermerhorn et al. 2008).

2.3 Cultural differences in HRI

If people base their impression of robots on media depictions of them, it should not be surprising that there are cultural differences in HRI. Existing studies have focused on comparisons between Western (Europe, USA) and Far Eastern (Japan, Korea, China) cultures, where the majority of robotic platforms are developed (Bartneck et al. 2006, 2008; Evers et al. 2008; Joosse et al. 2014; Lee et al. 2016; MacDorman et al. 2008). Participants from these regions provided a convenient sample for researchers, who had robotic platforms in place and a better understanding of local customs and expectations. Such research helps to build robots designed for local populations that serve their needs. However, it also means that these robots represent the values and preferences of people from only these parts of the world, which may hamper their social acceptance in other regions.

The Middle East is a region that greatly differs from both the West and Far East in cultural, religious and historical perception of the world. However, few studies have investigated HRI with Middle Eastern participants or in a Middle Eastern context. This gap in the literature can lead to designs that offer a negative user experience and lower technology acceptance (Straub et al. 2003). The work from the HCI field on the use of information technology in Syrian classrooms can serve as an example of this issue (Albirini 2006).

The first study in this region on the perception of robots was conducted by Riek et al. (2010). They brought a geminoid (a highly human-like robot that is a replica of a real person) of Ibn Sina (a famous scientist from the Middle East who lived in the tenth and eleventh centuries) to a shopping mall in the UAE. They allowed visitors to interact with the robot and afterwards asked them about their attitude toward androids, noting the region from which the mall visitors originally hailed. The results suggest that people from the Gulf perceive androids more favorably than do people from Africa.

The only other study in this region involved Hala, a receptionist robot that resembles a Middle Eastern woman. In that experiment, Salem et al. (2014) compared perceptions of English-speaking and Arabic-speaking participants. They found that Arabic-speaking participants perceived the robot more positively and anthropomorphized it more than English-speaking participants.

These two studies provide initial insights into the perception of robots in the Middle East. However, it is difficult to draw conclusions from them, as both robots had an Arab appearance and the results may be due to in-group versus out-group favoritism (Billig and Tajfel 1973). In addition, in the study conducted by Riek et al. (2010), participants were surveyed immediately after interacting with the Ibn Sina android, which might have affected their expectations regarding androids in general.

The work of MacDorman et al. (2008) shows that cultural differences in HRI can be exhibited using indirect measures, which are less prone than self-reports to a tendency to provide socially desirable responses. Previous studies in the Middle East only used questionnaires. Therefore, in the present work we employed an indirect measurement to understand the local population’s preferences for robot occupations.

The region is strongly influenced by Islamic culture, in which the human form plays a special role; this may affect the social acceptance of androids more than in the Far East or the West.

2.4 Human form in the Middle East

Given the unique background of the Middle East, aniconism plays an important role in the predominantly Islamic culture of the region. According to Woodcock (2013), Islam forbids creating any representation of God, who is instead considered an affective force that each individual should experience on his or her own, so as to develop a personal mental image of Him. In Islamic culture, statues and sculptures of the human form are reminiscent of pre-Islamic pagan times in Mecca during the Prophet Mohamed’s (PBUH) life, and they are associated with polytheism: a culture of worshipping several gods in the form of idols rather than praying to only one God, as per the message of Islamic monotheism. Building idols to be worshipped, be it in the form of statues, sculptures or figurines, is therefore considered unlawful under fundamental Islamic teachings (Hussein 2009).

The question that challenges modern Islamic scholarship is the extent to which it is permissible to allow man-made objects closely resembling the human form to enter the social life of a largely Muslim community. There seems to be a consensus among scholars that the ‘purpose’ and ‘reason’ behind the creation and use of a humanoid greatly determine its acceptance and permissibility within such a society. If figures, statues, sculptures, paintings, photographs and other forms of art depicting human beings (dead or alive) are created with the express purpose of being worshipped, then their creation is considered an act of disbelief. However, if the purpose behind creating such a piece is not worship, then it may not be considered an unlawful act. Additionally, from an Islamic point of view, an artist’s intent when creating art is also of the essence: if he or she draws or paints with the intent of admiring God’s creation, it is an act of humility and perfectly permissible, but if the same artist draws or paints with the intention of ‘imitating’ God, considering himself or herself a creator like God, it is considered an act punishable by the Divine (Al-Qaradawi 1999). In an effort to avoid approaching this fine line between permissibility and impermissibility, Muslim societies have traditionally pursued a culture of art and design that excludes animate forms. While geometric and floral designs are celebrated, animate forms, especially those that are exceptionally life-like, may stir feelings of doubt and uneasiness. For this reason, our study explores the sentiments of Middle Easterners toward a range of life-like robots.

2.5 Research goals and questions

Based on the above literature review and the limitations of previous studies, the present research pursues three goals that go beyond the current state of the art. First, previous research either did not present robots to the participants, e.g. Takayama et al. (2008), or used robots that did not resemble existing social robots, e.g. Li et al. (2010). It is questionable whether findings regarding the job suitability of a small robot built with Lego bricks can be generalized to existing humanoids. Therefore, we focus on existing robotic platforms.

Second, the studies that involved existing robots, e.g. de Graaf and Allouch (2015), evaluated their suitability for different tasks based on the taxonomy proposed by Fong et al. (2003), which is hypothetical and does not represent how people actually classify the appearance of social robots (Rosenthal-von der Pütten and Krämer 2014). This approach has several limitations that affect external validity: it covers only a subset of social robot appearances, it differentiates robots that are perceived similarly, and it fails to differentiate robots that belong to different categories. For example, people perceive androids and humanoids differently (Rosenthal-von der Pütten and Krämer 2014), yet these robots form a single category in Fong’s taxonomy. This limits the generalizability of the findings. In the present study, for the first time, we evaluate the job suitability of robots representing a broad range of social robot categories based on their perceived classification by users.

Third, with the majority of previous studies focused on the West and Far East, we conduct instead an exploratory work in the Middle East to shed light on the expectations of this greatly overlooked region.

In this paper we address the following research questions:

RQ1 What types of robots are perceived as more capable and are preferred for doing different jobs?

RQ2 Do implicit preferences differ from explicit preferences for jobs being done by robots?

RQ3 How does a robot’s appearance affect how people perceive it?

3 Method

3.1 Participants

Seventy-nine participants were recruited from two universities in Abu Dhabi in exchange for entry into a raffle for five vouchers, each with a value of $25. The population of the UAE consists primarily of three main groups: Emiratis, Arabs from other countries and other Asians (primarily Indians, Pakistanis and Filipinos), so we included participants representing these three major groups. Participants were excluded if their nationality did not fit one of these groups, if they had lived in the UAE for less than 5 years, or if they performed the indirect-measure task with a hand other than the one they usually use to operate a computer (some left-handed participants usually operate a computer with their right hand), which may have artificially affected their response times and selection trajectories. The remaining sample of 62 participants was almost evenly distributed across gender and region of origin (see Table 1).
Table 1

Distribution of participants according to region of origin and gender

         Emirati   Arab   Asian
Male        10      11      10
Female      10      11      10

All participants were students and represented various departments. Twenty-one participants indicated that they had seen or interacted with at least one of the robots used in this study. However, their exposure to social robots was limited, as only thirteen participants indicated that they had seen or interacted with them more than once. Participants’ ages ranged between 18 and 32 years (M = 21.74, SD = 2.84).

3.2 Materials

In order to cover a broad range of social robot categories, we based our selection on the work of Rosenthal-von der Pütten and Krämer (2014), who conducted a cluster analysis of social robots on six dimensions (threat, likability, submissiveness, unfamiliarity, human-likeness and machine-likeness) and found that the robots fall into six groups: small and playful; colorful and unusually shaped; threatening androids; likable androids; threatening mechanical; and unfamiliar, futuristic. In our study we decided to include only one android (Ibn Sina). This robot belonged to the less likable group of androids (Rosenthal-von der Pütten and Krämer 2014), but considering the preference for in-group members over out-group members (Billig and Tajfel 1973), Ibn Sina’s Arab appearance may make it more acceptable and a better representative of the android group in our study.

As the indirect measure used in our study permits only four images to be displayed, we had to exclude one additional cluster identified by Rosenthal-von der Pütten and Krämer (2014). We chose to exclude the cluster of colorful and unusually shaped robots, as these robots fell in the middle on most of the evaluated dimensions.

In order to reduce potential participant fatigue caused by the multiple evaluations required for each robot stimulus, we used a single robot from each of the selected clusters. The following types of robots were therefore evaluated in our study (see Fig. 1):
Fig. 1

Pictures of robots used in the study. From the left: PR2, ASIMO, Ibn Sina, Twendy-One

Threatening, mechanical with no/minimal facial features—PR2.

Softly shaped, likable and non-threatening with childlike characteristics—ASIMO.

Android—Ibn Sina.

Least familiar, futuristic, mechanical—Twendy-One.

3.3 Procedure

Due to the multicultural nature of the UAE, a significant proportion of the non-Arab population does not speak Arabic, and English is widely used in everyday life. Since the teaching language at both universities from which we recruited participants is English, all instructions and measures were presented in English. Participants were informed of the purpose of the study and signed a consent form. They were then asked to perform a computerized task programmed in the MouseTracker software (Freeman and Ambady 2010). Participants first selected which of four robots was the most capable of performing a job, and later, assuming equal capability of all robots, which robot they would be most comfortable with performing that job.

In the next part of the study, participants were asked to evaluate each robot (PR2, ASIMO, Ibn Sina and Twendy-One) on the following scales: likability, fear toward the robot, desire to own the robot and anthropomorphism. Moreover, in order to see if there are any occupations that participants would like to see the robots used for, and which were not evaluated in the MouseTracker task, participants were requested to write suitable applications for each robot. The order in which the robots were evaluated was randomized. The questionnaires were followed by demographic questions and debriefing. The entire study took approximately 25 min per participant.

3.4 Measures

3.4.1 Likability

Participants completed five items assessing the extent to which they perceived a robot as likable. The evaluation was done using the Godspeed likability scale (Bartneck et al. 2009). The items were rated on a five-point semantic-differential scale. Sample items included: Dislike–Like, Awful–Nice.

3.4.2 Fear toward the robot

Participants completed three items assessing the extent to which they feared each robot. The scale was adapted from Ferrari and Paladino (2014). Sample items included: “I am afraid of this robot.” and “The robot makes me feel uncomfortable.” These items were measured on a seven-point Likert scale from strongly disagree (1) to strongly agree (7).

3.4.3 Desire to own the robot

Participants answered the question: “To what extent would you like to own this robot?” It was measured on a seven-point Likert scale from not at all (1) to very much (7).

3.4.4 Anthropomorphism

Participants completed ten items assessing the extent to which they anthropomorphized each robot. These items were adapted from work on dehumanization (Haslam et al. 2009), and have been used in other research in the context of HRI (Złotowski et al. 2014). Only human traits were evaluated as this dimension differentiates humans from automata (Haslam et al. 2009). Sample items included “The robot is curious” and “The robot is aggressive”. These items were measured on a seven-point Likert scale from strongly disagree (1) to strongly agree (7).

3.4.5 Android gender

Participants answered a single-choice question about the preferred gender of androids. Four options were available: female, male, either (both female and male are equally appropriate) or neither (a robot should not resemble a human). We included “neither” as a response so as not to artificially force participants to choose a gender for androids if they object to the idea of creating robots that resemble humans.

3.4.6 Job preferences

The MouseTracker task was performed on a 14″ laptop running Windows 8 with the resolution set to 1366 × 768. Participants first selected which of four robots was the most capable of performing a job, and later, assuming equal capability of all robots, which robot they would be most comfortable with performing that job. In each trial, participants clicked a “Start” button in the center of the screen, and a job name appeared in its place. Participants made their selection by clicking on one of the robot pictures presented in the four corners of the screen (see Fig. 2). The following job names were displayed: Child-minder, Elderly care, Receptionist, Teacher, Cleaner, Companion, Servant, Salesman, Nurse, Clerk, Guide and Security. These jobs represent professions commonly proposed by researchers as suitable for robots. The order of job names was randomized. Following the recommendation of Freeman and Ambady (2009), if participants did not start moving the mouse within 400 ms, a warning message was displayed. Prior to the actual task, participants completed five practice trials to become familiar with the task. During the categorization we recorded the x and y coordinates of mouse movements, which were then converted to trajectories using the MouseTracker software (Freeman and Ambady 2010). As indirect measures reflect the uncertainty of a response in this task, we measured the response time and the maximum deviation from a straight line between the center of the screen and the selected robot picture.
Fig. 2

Screenshot from MouseTracker task with a job name displayed in the center of the screen and pictures of robots in the corners
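MouseTracker computes the maximum-deviation measure internally; to make it concrete, the following stdlib-only sketch shows one way the deviation of a recorded trajectory could be computed. The function name is ours, and we use an unsigned distance for simplicity, whereas MouseTracker reports a signed deviation toward the unselected alternatives.

```python
import math

def max_deviation(trajectory):
    """Maximum perpendicular distance of a mouse trajectory from the
    straight line joining its first point (the Start button) and its
    last point (the selected robot picture)."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance of each sample via the 2-D cross product.
    return max(abs((x - x0) * dy - (y - y0) * dx) / length
               for x, y in trajectory)

# A direct path to the chosen picture deviates by 0; a path that first
# drifts toward a competing corner deviates more.
print(max_deviation([(0, 0), (1, 1), (2, 2)]))  # -> 0.0
print(max_deviation([(0, 0), (0, 2), (2, 2)]))  # -> 1.414...
```

Larger deviations indicate that the cursor was attracted toward a competing response before the final choice, which is why the measure serves as an index of response uncertainty.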

4 Results

4.1 Perception of robots

4.1.1 Likability

A composite measure of likability was created by averaging all five items on the scale after ensuring that it had strong internal consistency (lowest α, calculated separately for each robot, > .85). The effect of robot type on likability was statistically significant, F(3, 183) = 26.53, p < .001. Post hoc tests with Tukey adjustment for family-wise error revealed that ASIMO (LS mean = 4.33, SE = 0.13) was significantly more likable than both Ibn Sina (LS mean = 2.90, SE = 0.13), t(183) = 8.11, p < .001, and PR2 (LS mean = 3.25, SE = 0.13), t(183) = 6.13, p < .001. Furthermore, Twendy-One (LS mean = 3.90, SE = 0.13) was more likable than both Ibn Sina, t(183) = 5.69, p < .001, and PR2, t(183) = 3.71, p = .001.
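The internal-consistency check behind each composite can be illustrated with the standard Cronbach's alpha formula. This is a generic sketch on made-up ratings, not the study's data, and the function name is ours:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as a list of item columns,
    each a list of per-participant ratings:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of sums).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Five hypothetical likability items rated by four participants:
items = [
    [4, 2, 5, 3],
    [4, 3, 5, 2],
    [5, 2, 4, 3],
    [4, 2, 5, 3],
    [3, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # -> 0.93
```

When all items rank participants consistently, as here, alpha approaches 1, which is the property the "> .85" threshold reported above is checking.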

4.1.2 Fear toward the robot

A composite measure of fear toward each robot was created by averaging all three items on the scale after ensuring that it had strong internal consistency (lowest α, calculated separately for each robot, > .82). The effect of robot type on fear was statistically significant, F(3, 183) = 8.08, p < .001. Post hoc tests with Tukey adjustment for family-wise error revealed that participants feared Ibn Sina (LS mean = 3.96, SE = 0.23) significantly more than ASIMO (LS mean = 2.48, SE = 0.23; t(183) = 4.72, p < .001), Twendy-One (LS mean = 2.84, SE = 0.23; t(183) = 3.57, p = .003) and PR2 (LS mean = 3.11, SE = 0.23; t(183) = 2.71, p = .04).

4.1.3 Anthropomorphism

A composite measure of anthropomorphism was created by averaging all ten items on the scale after ensuring that it had satisfactory internal consistency (lowest α, calculated separately for each robot, > .66). The effect of robot type on anthropomorphism was statistically significant, F(3, 183) = 4.92, p = .003. Post hoc tests with Tukey adjustment for family-wise error revealed that participants anthropomorphized ASIMO (LS mean = 3.91, SE = 0.12) significantly more than Ibn Sina (LS mean = 3.53, SE = 0.12; t(183) = 2.90, p = .02) and PR2 (LS mean = 3.56, SE = 0.12; t(183) = 2.70, p = .04). Moreover, Twendy-One (LS mean = 3.89, SE = 0.12) was anthropomorphized significantly more than Ibn Sina, t(183) = 2.72, p = .04.

4.1.4 Desire to own the robot

The effect of robot type on participants’ willingness to own a robot was statistically significant, F(3, 183) = 31.09, p < .001. Post hoc tests with Tukey adjustment for family-wise error revealed that participants wanted to own ASIMO (LS mean = 5.36, SE = 0.21) more than Ibn Sina (LS mean = 2.77, SE = 0.21; t(183) = 8.94, p < .001) and PR2 (LS mean = 3.97, SE = 0.21; t(183) = 4.81, p < .001). Moreover, they wanted to own PR2 significantly more than Ibn Sina, t(183) = 4.14, p < .001, and less than Twendy-One (LS mean = 4.89, SE = 0.21; t(183) = 3.19, p = .009). Furthermore, participants showed a higher desire to own Twendy-One than Ibn Sina, t(183) = 7.32, p < .001.

4.1.5 Android gender

Fisher’s exact test of independence showed a significant effect of participants’ gender on the choice of androids’ gender (p = .008); see Table 2. Post hoc Fisher’s exact tests with Holm’s correction for family-wise error showed that the proportion of male participants choosing male rather than female as the preferred gender of androids was significantly higher than for female participants (p = .03). Furthermore, the proportion of male participants choosing male rather than an androgynous gender was significantly higher than for female participants (p = .02). In addition, the proportion of male participants choosing male rather than preferring a robot not to resemble a human was significantly higher than for female participants (p = .03). For both sexes, the most common response was that they would prefer the robot not to resemble a human. However, when participants did decide that a robot could resemble a human, they showed a preference for their own gender.
Table 2

The number of participants choosing the preferred gender of androids, grouped according to participants’ sex

  Android gender   Female participants   Male participants
  Female                    7                     5
  Male                      0                     9
  Either                   12                     7
  Neither                  12                    10
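The pairwise post-hoc procedure described above can be illustrated with a minimal two-sided Fisher exact test, here applied to the Table 2 counts for male participants choosing a male (9) versus a female (5) android, compared with female participants (0 vs 7). This is a stdlib sketch for illustration only, not the authors’ code; since the exact grouping of comparisons and the Holm adjustment are omitted, the value need not match the published post-hoc p-values.

```python
from math import comb

def fisher_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # hypergeometric probability of cell (1,1) being x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs + 1e-12)

# Table 2: male participants chose male 9 / female 5;
# female participants chose male 0 / female 7
p = fisher_2x2(9, 5, 0, 7)
```

A Holm correction for the family of post-hoc comparisons, as used in the study, would then multiply the smallest raw p-value by the number of comparisons, and so on step-down.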

4.2 Job type and occupation selection: MouseTracker

The selection of robots as the most capable of performing a job and participant preferences for them performing a job were analyzed first. In addition to these direct measures, participants’ response times and the maximum deviation (deviation from a straight line between the center of the screen and the selected robot picture) were also analyzed to see whether implicit responses differ from the explicit choice. Responses that were at least two median absolute deviations (MAD) away from the median response time were excluded. Absolute deviation around the median is recommended over traditionally used standard deviation around the mean for outlier detection (Leys et al. 2013).
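The MAD-based exclusion rule above can be sketched as follows (the response-time data and the helper name are hypothetical, not from the study; Leys et al. (2013) additionally discuss scaling the MAD by a normal-consistency constant, which the text does not mention, so the sketch uses the unscaled two-MAD rule):

```python
from statistics import median

def mad_keep(rts, n_mads=2.0):
    """Keep responses less than `n_mads` median absolute deviations
    (MAD) away from the median response time (Leys et al. 2013)."""
    med = median(rts)
    mad = median(abs(x - med) for x in rts)
    return [x for x in rts if abs(x - med) < n_mads * mad]

# Hypothetical response times (ms): 5400 and 40 are clear outliers
kept = mad_keep([820, 910, 760, 5400, 880, 950, 40])
```

Unlike a mean-and-standard-deviation rule, the median and MAD are barely moved by the extreme values themselves, which is why Leys et al. recommend this criterion for outlier detection.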

4.2.1 Robot selection

The frequency of choosing a robot as the most capable of performing a job was compared between the jobs using the Chi-square test. A statistically significant interaction between jobs and robots was found (\( \chi^{2} (33) = 88.76, p < .001 \)). Similarly, there was a statistically significant interaction between the jobs and participants’ preference for a robot to perform them (\( \chi^{2} (33) = 79.47, p < .001 \)). Post-hoc analyses followed with the exact test of goodness-of-fit for each job (see Table 3). Where the exact test of goodness-of-fit was statistically significant, binomial tests were conducted to determine for which robots the selection frequency differed from the expected one. The expected frequency used in the analysis was set as equal for each robot, i.e. each robot was equally likely to be selected. Holm’s correction for the family-wise error was applied to all goodness-of-fit and binomial tests.
Table 3

The significance level of the exact test of goodness-of-fit (overall effect) and frequency of robot selection for robot capability and robot preference as a function of jobs

                      |          Robot capability             |          Robot preference
  Job                 | p value   PR2  Twendy  ASIMO  Ibn Sina | p value   PR2  Twendy  ASIMO  Ibn Sina
  Child-minder        | .002*      5*    15     27*      9     | .01*       7     14     25*      8
  Cleaner             | < .001*   15     29*    14       1*    | .01*      12     23*    15       4*
  Clerk               | .06       21     15     15       6     | < .001*   15     18     22       1*
  Companion           | .04*       7     17     22       9     | < .001*    5*     7     29*     15
  Elderly caretaker   | .59        9     13     15      15     | .01*       4*    12     21      19
  Guide               | .003*      5*    22     22       8     | .01*       8     20     22       7
  Nurse               | .003*      9     25*    20       3*    | .007*      6*    21     23*      8
  Receptionist        | < .001*    5*    26     17       5*    | .02*      14     18     19       5*
  Salesman            | .003*      7     16     26*      7     | .02*       8     16     21       6
  Security            | < .001*   14     23     22       2*    | < .004*   12     21     20       3*
  Servant             | .003*      7     23*    18       5*    | .01*       7     24*    17       7
  Teacher             | < .001*    2*    13     27*     11     | < .001*    2*    17     25*     12

*p < .05
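The follow-up procedure for a significant job can be sketched as exact binomial tests against the equal-selection expectation (1/4 per robot), with Holm’s step-down adjustment. The sketch below uses the child-minder capability counts from Table 3; it is an illustration of the described analysis, not the authors’ code, and the resulting p-values need not match the published ones exactly.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def exact_binom_p(k, n, p=0.25):
    """Two-sided exact binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed one."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk + 1e-12)

def holm(pvals):
    """Holm step-down adjustment; returns p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, min(1.0, (m - rank) * pvals[i]))
        adj[i] = running
    return adj

# Child-minder capability counts (order: PR2, Twendy-One, ASIMO, Ibn Sina)
counts, n = [5, 15, 27, 9], 56
adj = holm([exact_binom_p(k, n) for k in counts])
```

With these counts, only PR2 (under-selected) and ASIMO (over-selected) remain significant after adjustment, matching the asterisks in the child-minder row of Table 3.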

A playful robot (ASIMO) was perceived as the most capable for the child-minder, salesman and teacher tasks, and, irrespective of capabilities, participants preferred it for the child-minder, companion, nurse and teacher occupations. At the same time, it was not perceived as less capable, or less preferred, than the other robots for any of the jobs. The android (Ibn Sina) was neither found more capable nor preferred for any of the jobs. It was perceived as less capable, and was not preferred, for the cleaner, receptionist, security and servant jobs. Moreover, it was perceived as less capable for nurse tasks.

Another robot that was not perceived as more capable or preferred for any of the jobs was PR2. However, in the case of PR2, the jobs for which it lacked preference and those for which it lacked capability differed. The mechanical robot (PR2) was perceived as less capable of performing the child-minder, guide, receptionist and teacher occupations, whereas participants preferred it the least for the companion, elderly caretaker, nurse and teacher jobs. The unfamiliar futuristic robot (Twendy-One) was perceived as the most capable for the cleaner, nurse, receptionist, and servant jobs, and it was preferred for the cleaner and servant occupations. There were no jobs for which it was less preferred, or perceived as less capable, than the other robots.

4.2.2 Response time

Data was analyzed using linear mixed-effects regression (LMER) with the lme4 package (Bates et al. 2015) in R (R Core Team 2015). By using random effects for subjects, we controlled for the individual variation in perception of robots and responses in indirect measures. The significance of independent variables was evaluated using ANOVA with Kenward–Roger degrees of freedom approximation provided by the lmerTest package (Kuznetsova et al. 2016). Participants’ gender and region of origin did not significantly interact with independent variables used in this study and were excluded from further analysis.

The response time was logarithmically transformed. The main effect of job type on response time for robot capabilities to perform a job was statistically significant, \( F(11, 563.16) = 1.86, p = .04 \). Moreover, there was a statistically significant interaction effect between robot and job type, \( F(33, 566.62) = 2.35, p < .001 \). Because the number of potential post-hoc tests on the full dataset was too high, pairwise post-hoc comparisons with Tukey correction were calculated between the robots for each job separately.

For guide, the response time was statistically significantly shorter for ASIMO (\( {\text{LS}}_{\text{mean}} = 7.31, {\text{SE}} = 0.08 \)) than for Ibn Sina (\( {\text{LS}}_{\text{mean}} = 7.62, {\text{SE}} = 0.11 \); \( t(566.42) = 2.61, p = .05 \)) and Twendy-One (\( {\text{LS}}_{\text{mean}} = 7.56, {\text{SE}} = 0.08 \); \( t(567.07) = 2.9, p = .02 \)). For teacher, the response time was statistically significantly longer for ASIMO (\( {\text{LS}}_{\text{mean}} = 7.64, {\text{SE}} = 0.07 \)) than for Ibn Sina (\( {\text{LS}}_{\text{mean}} = 7.3, {\text{SE}} = 0.1 \); \( t(576.53) = 3.22, p = .008 \)) and Twendy-One (\( {\text{LS}}_{\text{mean}} = 7.31, {\text{SE}} = 0.09 \); \( t(565.45) = 3.38, p = .005 \)). No other post-hoc comparisons were statistically significant. On the other hand, there were no statistically significant main or interaction effects of robot and job type on response time for the selection of a preferred robot to do a job.

4.2.3 Maximum deviation

There was a statistically significant main effect of robot type on maximum deviation for the selection of robot capabilities to perform a job, \( F(3, 584.33) = 2.8, p = .04 \). However, no post-hoc comparisons were statistically significant.

Moreover, there was a statistically significant main effect of robot type on maximum deviation for the selection of robot preference to do a job, \( F(3, 603.98) = 2.91, p = .04 \). However, again no post-hoc comparison was statistically significant.

4.3 Job type and occupation selection: open choice

Participant descriptions of applications for the robots were coded by two coders blinded to the experimental conditions. The coders were asked to label each description with the name of the job it described. Since participants could propose any task, the list of jobs was not fixed for coding purposes. These jobs were then converted to the job families they belong to. A job family is a group of occupations based upon work performed, skills, education, training, and credentials, such as Community and Social Service or Personal Care and Service. For this purpose we used the U.S. Department of Labor’s O*NET occupational information database (https://www.onetonline.org/). The O*NET program contains 974 occupations; each occupation includes a description and the job family it belongs to. Job names that could not be found in the database were discarded. An exception was made for companion, which is not considered a job, but is a social role frequently proposed by scientists as suitable for robots and was described multiple times by the participants.

The agreement on job families between the two coders was substantial, Cohen’s \( \kappa = .79 \). Therefore, the job families produced by one of the coders were used for further analysis. Participants proposed a wide range of tasks for the robots (18 out of the 23 job families in the O*NET database were described). The frequency of job families for each robot was calculated in order to see whether participants had different preferences depending on a robot’s appearance, see Fig. 3.
Fig. 3

Proportion of job families indicated as suitable for each robot
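Cohen’s kappa, used above to quantify coder agreement, corrects the observed agreement rate for the agreement expected by chance from each coder’s label frequencies. A minimal stdlib sketch (the labels are hypothetical, not the study’s codings):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' nominal labels."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: proportion of items labeled identically
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each rater's marginal label frequencies
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical job-family codings by two coders
a = ["care", "care", "edu", "office", "edu", "care"]
b = ["care", "edu",  "edu", "office", "edu", "care"]
kappa = cohens_kappa(a, b)
```

Values above roughly .6–.8 are conventionally read as substantial agreement, which is the interpretation applied to the study’s \( \kappa = .79 \).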

The most often proposed job families for ASIMO were: personal care and service (24%), education, training and library (12%), building and grounds cleaning and maintenance (10%), and protective service (10%). For Ibn Sina, participants predominantly suggested personal care and service (33%), education, training and library (23%), and office and administrative support (12%). PR2 was perceived as the most suitable for office and administrative support (19%), education, training and library (13%), and building and grounds cleaning and maintenance (11%). For Twendy-One, the most often indicated job families were personal care and service (22%), building and grounds cleaning and maintenance (17%), and education, training and library (15%).

5 Discussion

In this study we conducted exploratory work on the perception of social robots and their suitability for various occupations in the context of the Middle East. We investigated participants’ impressions of the appearance of future social robots by presenting four images representing different categories of robots. We measured, directly and indirectly, which jobs these robots are perceived as most capable of performing and most preferred to perform.

5.1 Perception of robots in the UAE

Since we used pictures of a single robot from each cluster derived from the work of Rosenthal-von der Pütten and Krämer (2014), it was important to verify whether the UAE participants perceived the robots similarly to the German participants in their study. The results on the likability dimension suggest that ASIMO and Twendy-One were more likable than Ibn Sina and PR2. These findings are in line with the previous research (Rosenthal-von der Pütten and Krämer 2014), as ASIMO and Twendy-One were in clusters including likable robots, while Ibn Sina and PR2 belonged to clusters with low likability.

As discussed in the Materials section, we speculated that the low likability of Ibn Sina among German participants could be due to its out-group appearance. However, to our surprise, in our study with a predominantly Arab population, Ibn Sina was also perceived as the least likable robot. Therefore, participants did not exhibit the expected in-group favoritism. Although these results could mean that people in general dislike androids, previous research (Rosenthal-von der Pütten and Krämer 2014) suggests that some androids can be perceived as likable. Therefore, the low likability of Ibn Sina in both studies suggests that its particular appearance, rather than its “race”, plays a major role in its perception, i.e. some other android resembling an Arab person could potentially evoke more positive feelings. Ibn Sina’s ostentatious beard might have led to an attribution of seniority, which in turn might have made the young people in our sample perceive the robot more negatively, as a result of the dichotomy between an appearance that evokes respect and the awareness that the robot is not an actual human.

Currently, to the best of our knowledge, there are no female Arab-looking androids. However, future research may evaluate their perception, as research on sex differences shows that the female form is favored over the male form, especially by other females (Rudman and Goodwin 2004); but see Jung et al. (2016) for the opposite finding. The results of the question about an android’s preferred gender in our study are consistent with the similarity hypothesis, which proposes that people prefer robots to exhibit characteristics similar to their own. We found that male participants preferred an android to have a male appearance significantly more than female participants did, and no female participant preferred the male form over the female form. However, Joosse et al. (2013) showed that it is more important for a robot’s personality to match its task than to match a participant’s personality. Therefore, it is possible that in actual HRI an android’s gender should match the stereotype of the job it performs rather than the user’s gender.

Nevertheless, the most common response for both sexes was that a robot should not resemble a human being. This is in agreement with the uncanny valley hypothesis proposed by Mori et al. (2012). Previous research showed that androids, especially those with capabilities exceeding human skills, were perceived by Americans as threatening with respect to identity and resources (Yogeeswaran et al. 2016). Future research should evaluate whether the lack of social acceptance of androids among some people in the Middle East has the same source.

The results of the fear questionnaire suggest that Ibn Sina was feared more than any other robot. Moreover, although the difference was not statistically significant, PR2 evoked the second-highest fear. These results are also in line with those of Rosenthal-von der Pütten and Krämer (2014).

On the anthropomorphism dimension, ASIMO and Twendy-One were attributed more human traits than Ibn Sina and PR2. While the low score of PR2 is consistent with previous work, the lowest score of Ibn Sina is unexpected, since it was one of the most human-like robots in Rosenthal-von der Pütten and Krämer’s (2014) study. The biggest difference between their study and ours is the tool used to measure this dimension. In their study, a single item (“not at all human-like”–“very human-like”) was used. This item does not specify what is understood by human-likeness. Participants in that study might have focused solely on the appearance, i.e. the extent to which a robot resembles a human. However, in our study anthropomorphism was understood as the attribution of core human characteristics and therefore focused more on the psychological aspect of human-likeness. Possibly, despite Ibn Sina’s highly human-like appearance, participants feared it and did not recognize it as having human nature traits, a process known as dehumanization (Haslam et al. 2009).

Considering the lowest likability and the highest fear evoked by Ibn Sina, it should not be surprising that participants showed the least desire to own it. Similarly, the most likable and least feared robots, ASIMO and Twendy-One, were the robots participants most preferred to own. This indicates that social acceptance can be achieved by both human-like and machine-like robots. However, in the case of the former, they should be easily distinguishable from humans, while the latter can have a futuristic, but non-threatening, appearance.

Overall, the perception of the robots on the three key dimensions, with the exception of the anthropomorphism of Ibn Sina, is similar in the UAE to the results reported for German subjects (Rosenthal-von der Pütten and Krämer 2014), i.e. participants in both countries tended to attribute anthropomorphism, threat and likability to the robots to a comparable extent. This suggests that the local population of the UAE perceived the currently existing social robots similarly to the German population, and that the selected robot stimuli are representative of the four social robot types compared in our study.

5.2 Robot types and jobs

The results suggest that the preference for a robot to do a task and the perception of its capability to do it are often similar, but not for all types of robots. The robots that are the most likable and not threatening are also perceived to be the most capable and are preferred for jobs that require interaction with humans. In this study we used images of robots rather than live HRI or video material, which means that participants could not be aware of the android’s (Ibn Sina) limited capabilities. Yet its highly human-like appearance did not lead it to be perceived as more capable than less human-like robots. Its low likability and threatening appearance may explain why participants did not prefer it for any of the jobs. Similarly, the mechanical robot’s (PR2) threatening, unfriendly appearance made it a poor choice for jobs requiring social interaction with humans.

None of the robots was perceived as more capable of serving as a companion, a role that could have been expected to be especially promising for highly human-like androids. However, assuming equal capability of the robots, participants indicated the softly shaped robot with childlike characteristics (ASIMO) as the preferred choice. This appearance was also indicated as the most capable or preferred choice for other occupations that require the highest amount of social interaction, e.g. child-minder, nurse or teacher. This result suggests that playful-looking humanoids that possess human features, but are easily distinguishable from humans, should be developed for occupations where social skills play a key role.

On the other hand, a machine-like robot with an unfamiliar, but not threatening, appearance is optimal for dull and dirty tasks. Although Twendy-One was perceived as the most capable for the nurse, receptionist, cleaner and servant jobs, it was preferred only for the last two. Neither the cleaner nor the servant occupation requires a human-like form, whereas the nurse and receptionist jobs require social interaction with people. An unfamiliar, futuristic appearance may suggest that this category of robots is capable of performing them, but when given the choice people prefer more human-looking robots.

The analysis of the indirect measures indicates that there were only a few inconsistent differences in reaction times or trajectory deviations in the selection of robots. Therefore, implicit and explicit preferences for robots performing the jobs do not differ, and participant responses reflect their own preferences rather than socially desirable choices.

Apart from the specific jobs that we derived from the literature, we also offered participants a chance to propose occupations for which each robot would be suitable. The results show that suitable job families depend on the appearance of a robot. The mechanistic and threatening robot (PR2) was the only category of social robots for which participants did not indicate personal care and service as the most suitable job family. However, the jobs proposed for PR2 that represent office and administrative support indicate that jobs requiring a moderate level of social skills may be suitable.

Participants found it harder to propose suitable occupations for an android than other social robot categories. The proposed jobs covered the smallest number of job families (12 out of 19) and were limited much more to the top three job families (78%) than for the other three robots. All three job families (personal care and service; education, training and library; and office and administrative support) require interaction with humans, and an android’s human-like appearance can be perceived as suitable for that. However, the above-described results of the MouseTracker task showed that even for these jobs other robots may be preferred.

Robots that were unfamiliar, mechanical, or softly shaped with childlike characteristics were proposed as suitable both for tasks that require social interaction, such as personal care or education, and for dull and dirty work, such as grounds cleaning and maintenance. Therefore, a humanoid or machine-like, but not threatening, appearance of a robot may be suitable for the broadest range of occupations. Furthermore, the results emphasize the importance of designing robots that are likable and not threatening; otherwise there is a risk that they will be rejected even though they may be perfectly capable of performing a specific task.

5.3 Cultural perspective

Our study included participants from various cultural backgrounds, which had a significant effect on the kinds of jobs assigned to the robots under study. For example, jobs like training, personal care and service were voted highest for an android. This can be linked to the studies conducted by Haring et al. (2014), who investigated the perception of robots by Japanese and Egyptian participants. Japanese participants preferred robots to carry out daily chores, reflecting a pragmatic approach to letting someone into their daily lives. Similarly, in Middle Eastern culture the appearance and “race” of Ibn Sina creates the perception that it is unsuited to carrying out daily household chores.

If we look at the cultural dimensions in the Middle East, according to Hofstede’s cultural dimensions the UAE is a collectivist society (Hofstede-Insights 2017). Members of such a society have a lower threshold for accepting suggestions and show greater engagement in society. This notion may explain the trust that participants showed toward the unfamiliar mechanical robot and the softly shaped robot with childlike characteristics. The roles given to them were crucial, as they included childcare, personal care and household activities. Such interactions require trust and likability, which is evident in our findings. This finding is consistent with Li et al. (2010), who investigated China and Korea, which are also collectivist societies.

Looking at the role of gender, male participants in our study preferred a male robot, and the reverse was noted for female participants. This is a particularly surprising finding considering how often the predominantly male engineers of highly human-like robots in the West and Far East build robots that resemble a female form irrespective of a user’s gender. However, this can be explained by the gender segregation seen in the United Arab Emirates: given its social norms, a human-like robot of the opposite gender could make users uncomfortable.

5.4 Limitations and future work

In this study we used images of robots instead of physically collocated robots. This was a deliberate choice that offered pragmatic and methodological benefits. The various types of robots involved would have made this study economically infeasible with collocated robots, due to their high cost and limited availability at present. While we acknowledge that factors other than appearance, whose study would require live HRI (e.g. different numbers of degrees of freedom or movement limitations), may play a significant role in determining how a robot is perceived, or may interact with appearance (van Straten et al. 2017), including them in this work would have added complexity that could blur the results rather than clarify them given the current technology. If a researcher naively added the default motion of each of these robots, it would become impossible to determine whether any potential differences come from appearance, motion or their interaction. On the other hand, the limited degrees of freedom of current robots make it impossible to systematically manipulate motion across the diverse robotic platforms used in this study. Moreover, previous research on the perception of social robots and their job suitability involved images as stimuli; using a similar methodology offers better comparability of findings with the work done in the West and Far East. The chosen research method was therefore appropriate for the current research goals: exploratory work on the perception of social robot appearance in the Middle East. However, an interesting avenue for future work would be to replicate our findings using a subset of robots in live HRI while systematically manipulating other factors, such as the motion or personality of a robot.

All our participants had lived in the UAE for a long time (over 5 years), so the results may be culture-specific. It is possible that people in other countries are more open to the idea of using androids for some types of jobs. In addition, it is rare to have an Arab performing dull and dirty jobs in the UAE. Therefore, people living in the UAE may be more open to accepting androids resembling a member of another race for these types of occupations. We used a student sample in our exploratory work, and future research with the general public should verify whether the results are representative of the whole population.

Although our results show that the perception of the robots was similar to that found in a study in Germany (Rosenthal-von der Pütten and Krämer 2014), it is still possible that the UAE population uses different criteria for classifying robot categories. Moreover, it is possible that robots with greatly different appearance, not covered in the present work, will be developed in the future. Therefore, it will be necessary to replicate the classification of social robots and our findings to ensure that people still use the same criteria when evaluating the appearance of these robots.

Another interesting avenue for follow-up studies is customer purchase preferences for social robots. On the one hand, the low declared desire to own Ibn Sina may be driven by its low likability and high perceived threat. On the other hand, our data show that participants found Ibn Sina suitable for fewer job families than the other robots, which may have reduced its perceived usefulness. If the latter factor is the main driver of participants’ preferences, it may be necessary to expose people to real use cases for androids, given the limited prior experience with them. However, if the former factors are more important, it may be necessary to consider alternative designs for the robot.

Furthermore, a future line of research could focus on cultural differences in the effect that robot appearance has on human behaviour. Previous work showed that children in Japan bullied a robot that they met in a shopping mall (Brscić et al. 2015). Research could show whether such behaviour also occurs in the Middle East and whether it is mediated by a robot’s appearance, e.g. will children bully a more human-like robot? Similarly, we are aware that people in the West attribute blame for ethical errors differently to a robot than to a human (Malle et al. 2015). Could a robot’s appearance have a different effect on such attribution in different regions?

6 Conclusions

The present study was the first systematic exploratory work to evaluate the perceived suitability of several social robot categories for various occupations. Although the focus was on the underrepresented population of the UAE, the findings shed new light on social robot acceptance in general. In particular, we found both similarities and differences in the perception of social robots between the Middle East and the previously researched West and Far East. Despite the unique role assigned to the human form in Islam and the historico-cultural differences discussed in the Background section, the evaluation of social robots across several dimensions was surprisingly similar. This may indicate that, with increased globalization, shared pop culture affects expectations regarding robots to a greater extent than the culture in which a person is raised. Alternatively, these characteristics may have a universal character.

In spite of these similarities in the perception of social robots, we noted the importance of considering the impact of cultural factors on their social acceptance. The existing separation of males and females in many public places in the UAE was exhibited in the preference for same-gender androids, or at least the lack of preference for opposite-gender ones. This is in stark contrast with both the Far East and the West, where there is an ongoing discussion about predominantly male engineers developing robots that resemble the female form.

Moreover, we found that people’s preferences for occupations for social robots are highly dependent on their appearance. However, it is not enough to design robots merely to optimize their human-likeness for their task performance. Factors affecting social acceptance, such as likability and non-threatening appearance, are more crucial in determining whether a robot will be preferred for a given job, which emphasizes the importance of social acceptance in robot design.

Our results show the benefits of using robot classifications that reflect the perception of robots by their users rather than the hypothetical taxonomies used in past research. This is evident in the differentiation of the humanoid and android categories in our work. For jobs that require a significant amount of social interaction, playful child-like humanoids are perceived as the most capable and are preferred. Their human-like features permit them to emit the nonverbal social signals that are part of natural human–human interaction. This facilitates the interaction without evoking the negative feelings that androids do, which despite their human-like appearance are less likable and more threatening.

On the other hand, if an occupation belongs to the dull, dangerous and dirty category, and it does not require much social communication with humans, a machine-like appearance of a robot is preferred. Nevertheless, even in the case of robots with a minimal amount of human features, a robot should have an appearance that is likable and not threatening. Otherwise, people will not want it for either personal or social use.

Interestingly, we found not only that different categories of social robots are perceived as suitable for different jobs, but also that appearance affects the number of jobs for which a robot is perceived as suitable. This was especially pronounced in the case of the android, for which participants identified the fewest suitable jobs. Therefore, researchers developing androids may not only face the task of developing androids that will not be perceived as threatening due to their high human-likeness, but also need to identify a niche for these robots and introduce them to the general public. Otherwise, people may not see a benefit in using them.

Our results expand the findings of Goetz et al. (2003) by showing that choosing appearance based only on the specific occupation a robot is supposed to perform is a necessary, but not a sufficient, factor in determining the acceptance of a social robot. The results highlight the importance of considering the emotions that social robots evoke in order for them to be accepted. Adding human-like features to robots is desirable when they are expected to take roles that require extended social interactions. However, the level of human-likeness offered by humanoids is sufficient, and the higher realism of androids may paradoxically reduce their perceived capability and suitability to perform these social jobs.

Appearance is an important factor affecting what tasks people will be willing to delegate to robots. The expected introduction of a robotic workforce into social jobs can be facilitated by designing robots that meet personal and cultural expectations regarding performance and appearance. This study provides the first insights into the suitability of social robots for various jobs, with a special focus on the Middle East. It suggests that people in this region may consider multiple dimensions of appearance in their preferences for social robot workers, and that a physical form specifically designed for a given task may still be socially rejected.

Notes

Acknowledgements

The authors would like to thank Mohammad Gharib and Tasbeeh Raza for their invaluable help with participant recruitment.

References

  1. Albirini A (2006) Teachers’ attitudes toward information and communication technologies: the case of Syrian EFL teachers. Comput Educ 47(4):373–398
  2. Al-Qaradawi SY (1999) The lawful and the prohibited in Islam (al-halal wal haram fil Islam). American Trust Publications
  3. Arras KO, Cerqui D (2005) Do we want to share our lives and bodies with robots? A 2000-people survey. Technical report Nr. 0605-001, Autonomous Systems Lab, Swiss Federal Institute of Technology
  4. Bartneck C (2008) Who like androids more: Japanese or US Americans? In: Proceedings of the 17th IEEE international symposium on robot and human interactive communication, RO-MAN, Munich, Germany, pp 553–557
  5. Bartneck C, Suzuki T, Kanda T, Nomura T (2006) The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI Soc 21(1–2):217–230. https://doi.org/10.1007/s00146-006-0052-7
  6. Bartneck C, Kulic D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81
  7. Bates D, Mächler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67(1):1–48. https://doi.org/10.18637/jss.v067.i01
  8. Bijker WE (1993) Do not despair: there is life after constructivism. Sci Technol Hum Values 18(1):113–138
  9. Billig M, Tajfel H (1973) Social categorization and similarity in intergroup behaviour. Eur J Soc Psychol 3(1):27–52. https://doi.org/10.1002/ejsp.2420030103
  10. Brscić D, Kidokoro H, Suehiro Y, Kanda T (2015) Escaping from children’s abuse of social robots. In: Proceedings of the 2015 ACM/IEEE international conference on human–robot interaction. ACM, pp 59–66
  11. Compleston SN, Bugmann G (2008) Personal robot user expectations. In: Dowland P, Furnell S (eds) Advances in communications, computing, networks and security, vol 5. University of Plymouth School of Computing, Communications and Electronics, Plymouth, UK, pp 230–238
  12. Dautenhahn K, Billard A (2002) Games children with autism can play with Robota, a humanoid robotic doll. In: Robinson P, Keates S, Langdon P, Clarkson PJ (eds) Universal access and assistive technology: Proceedings of the Cambridge workshop on UA and AT’02. Springer, London, pp 179–190. https://doi.org/10.1007/978-1-4471-3719-1
  13. Dautenhahn K, Woods S, Kaouri C, Walters ML, Koay KL, Werry I (2005) What is a robot companion—friend, assistant or butler? In: IEEE/RSJ international conference on intelligent robots and systems, pp 1192–1197. https://doi.org/10.1109/iros.2005.1545189
  14. de Graaf MM, Allouch SB (2015) The evaluation of different roles for domestic social robots. In: 24th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE, pp 676–681
  15. de Graaf MMA, Allouch SB (2016) Anticipating our future robot society: the evaluation of future robot applications from a user’s perspective. In: 2016 25th IEEE international symposium on robot and human interactive communication (RO-MAN), pp 755–762. https://doi.org/10.1109/roman.2016.7745204
  16. DiSalvo CF, Gemperle F, Forlizzi J, Kiesler S (2002) All robots are not created equal: the design and perception of humanoid robot heads. In: Proceedings of the conference on designing interactive systems: processes, practices, methods, and techniques, DIS, London, United Kingdom, pp 321–326
  17. Evers V, Maldonado HC, Brodecki TL, Hinds PJ (2008) Relational vs. group self-construal: untangling the role of national culture in HRI. In: HRI 2008—Proceedings of the 3rd ACM/IEEE international conference on human–robot interaction: living with robots, Amsterdam, Netherlands, pp 255–262
  18. Ferrari F, Paladino MP (2014) Validation of the psychological scale of general impressions of humanoids in an Italian sample. In: Workshop proceedings of IAS-13, 13th international conference on intelligent autonomous systems, Padova, 15–19 July
  19. Foerster F, Weiss A, Tscheligi M (2011) Anthropomorphic design for an interactive urban robot: the right design approach. In: Proceedings of the 6th international conference on human–robot interaction. HRI’11. ACM, New York, pp 137–138. https://doi.org/10.1145/1957656.1957699
  20. Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42(3–4):143–166
  21. Freeman JB, Ambady N (2009) Motions of the hand expose the partial and parallel activation of stereotypes. Psychol Sci 20(10):1183–1188. https://doi.org/10.1111/j.1467-9280.2009.02422.x
  22. Freeman JB, Ambady N (2010) MouseTracker: software for studying real-time mental processing using a computer mouse-tracking method. Behav Res Methods 42(1):226–241. https://doi.org/10.3758/BRM.42.1.226
  23. Goetz J, Kiesler S, Powers A (2003) Matching robot appearance and behavior to tasks to improve human–robot cooperation. In: Proceedings of the 12th IEEE international workshop on robot and human interactive communication (RO-MAN 2003), pp 55–60. https://doi.org/10.1109/roman.2003.1251796
  24. Haring KS, Mougenot C, Ono F, Watanabe K (2014) Cultural differences in perception and attitude towards robots. Int J Affect Eng 13(3):149–157
  25. Haslam N, Loughnan S, Kashima Y, Bain P (2009) Attributing and denying humanness to others. Eur Rev Soc Psychol 19(1):55–85. https://doi.org/10.1080/10463280801981645
  26. Hayashi K, Shiomi M, Kanda T, Hagita N (2010) Who is appropriate? A robot, human and mascot perform three troublesome tasks. In: 19th international symposium in robot and human interactive communication, pp 348–354. https://doi.org/10.1109/roman.2010.5598661
  27. Heerink M, Kroese B, Evers V, Wielinga B (2009) Influence of social presence on acceptance of an assistive social robot and screen agent by elderly users. Adv Robot 23(14):1909–1923. https://doi.org/10.1163/016918609X12518783330289
  28. Hofstede Insights (2017) United Arab Emirates. https://www.hofstede-insights.com/country-comparison/the-united-arab-emirates/. Accessed 1 Apr 2017
  29. Hussein Z (2009) Introduction to Islamic art. http://www.bbc.co.uk/religion/religions/islam/art/art_1.shtml. Accessed 1 Apr 2017
  30. Joosse M, Lohse M, Pérez JG, Evers V (2013) What you do is who you are: the role of task context in perceived social robot personality. In: IEEE international conference on robotics and automation, pp 2134–2139. https://doi.org/10.1109/ICRA.2013.6630863
  31. Joosse MP, Poppe RW, Lohse M, Evers V (2014) Cultural differences in how an engagement-seeking robot should approach a group of people. In: Proceedings of the 5th ACM international conference on collaboration across boundaries: culture, distance & technology. CABS’14. ACM, New York, pp 121–130. https://doi.org/10.1145/2631488.2631499
  32. Ju W, Takayama L (2011) Should robots or people do these jobs? A survey of robotics experts and non-experts about which jobs robots should do. In: 2011 IEEE/RSJ international conference on intelligent robots and systems, pp 2452–2459. https://doi.org/10.1109/iros.2011.6094759
  33. Jung EH, Waddell TF, Sundar SS (2016) Feminizing robots: user responses to gender cues on robot body and screen. In: Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems. ACM, New York, pp 3107–3113. https://doi.org/10.1145/2851581.2892428
  34. Kanda T, Glas DF, Shiomi M, Ishiguro H, Hagita N (2008) Who will be the customer?: A social robot that anticipates people’s behavior from their trajectories. In: Proceedings of the 10th international conference on ubiquitous computing. UbiComp’08. ACM, New York, pp 380–389. https://doi.org/10.1145/1409635.1409686
  35. Katz JE, Halpern D (2014) Attitudes towards robots suitability for various jobs as affected robot appearance. Behav Inf Technol 33(9):941–953. https://doi.org/10.1080/0144929X.2013.783115
  36. Kuznetsova A, Brockhoff PB, Christensen RHB (2016) lmerTest: tests in linear mixed effects models. https://CRAN.R-project.org/package=lmerTest. Accessed 29 Jan 2017
  37. Lee H, Kang H, Kim MG, Lee J, Kwak SS (2016) Pepper or Roomba? Effective robot design type based on cultural analysis between Korean and Japanese users. Int J Softw Eng Appl 10(8):37–46
  38. Leys C, Ley C, Klein O, Bernard P, Licata L (2013) Detecting outliers: do not use standard deviation around the mean, use absolute deviation around the median. J Exp Soc Psychol 49(4):764–766. https://doi.org/10.1016/j.jesp.2013.03.013
  39. Li D, Rau PP, Li Y (2010) A cross-cultural study: effect of robot appearance and task. Int J Soc Robot 2(2):175–186
  40. MacDorman KF, Vasudevan SK, Ho CC (2008) Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI Soc 23(4):485–510. https://doi.org/10.1007/s00146-008-0181-2
  41. Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many?: People apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction. ACM, New York, NY, pp 117–124. https://doi.org/10.1145/2696454.2696458
  42. McNeal M (2015) Rise of the machines: the future has lots of robots, few jobs for humans. Wired. https://www.wired.com/brandlab/2015/04/rise-machines-future-lots-robots-jobs-humans/. Accessed 1 Apr 2017
  43. Moniz AB, Krings BJ (2016) Robots working with humans or humans working with robots? Searching for social dimensions in new human–robot interaction in industry. Societies 6(3):23. https://doi.org/10.3390/soc6030023
  44. Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot Autom Mag 19(2):98–100
  45. Nomura T, Suzuki T, Kanda T, Han J, Shin N, Burke J, Kato K (2008) What people assume about humanoid and animal-type robots: cross-cultural analysis between Japan, Korea, and the United States. Int J Humanoid Rob 5(1):25–46. https://doi.org/10.1142/S0219843608001297
  46. Nomura T, Kanda T, Suzuki T, Yamada S, Kato K (2009) Influences of concerns toward emotional interaction into social acceptability of robots. In: Proceedings of the 4th ACM/IEEE international conference on human robot interaction. HRI’09. ACM, New York, pp 231–232. https://doi.org/10.1145/1514095.1514151
  47. Oestreicher L, Eklundh KS (2006) User expectations on human–robot co-operation. In: ROMAN 2006—the 15th IEEE international symposium on robot and human interactive communication, pp 91–96. https://doi.org/10.1109/roman.2006.314400
  48. R Core Team (2015) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/. Accessed 29 Jan 2017
  49. Riek L, Mavridis N, Antali S, Darmaki N, Ahmed Z, Al-Neyadi M, Alketheri A (2010) Ibn Sina steps out: exploring Arabic attitudes toward humanoid robots. In: Proceedings of the 2nd international symposium on new frontiers in human–robot interaction, AISB, Leicester, vol 1
  50. Rosenthal-von der Pütten AM, Krämer NC (2014) How design characteristics of robots determine evaluation and uncanny valley related responses. Comput Hum Behav 36(July):422–439. https://doi.org/10.1016/j.chb.2014.03.066
  51. Royakkers L, van Est R (2015) A literature review on new robotics: automation from love to war. Int J Soc Robot 7(5):549–570. https://doi.org/10.1007/s12369-015-0295-x
  52. Rudman LA, Goodwin SA (2004) Gender differences in automatic in-group bias: why do women like women more than men like men? J Pers Soc Psychol 87(4):494–509. https://doi.org/10.1037/0022-3514.87.4.494
  53. Sabanović S (2010) Robots in society, society in robots. Int J Soc Robot 2(4):439–450. https://doi.org/10.1007/s12369-010-0066-7
  54. Salem M, Ziadee M, Sakr M (2014) Marhaba, how may I help you?: Effects of politeness and culture on robot acceptance and anthropomorphization. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction. HRI’14. ACM, New York, pp 74–81. https://doi.org/10.1145/2559636.2559683
  55. Schermerhorn P, Scheutz M, Crowell CR (2008) Robot social presence and gender: do females view robots differently than males? In: Proceedings of the 3rd ACM/IEEE international conference on human robot interaction. HRI’08. ACM, New York, pp 263–270. https://doi.org/10.1145/1349822.1349857
  56. Shiomi M, Kanda T, Ishiguro H, Hagita N (2007) Communication robots in real environments. In: Hackel M (ed) Humanoid robots: human-like machines. Itech, Vienna
  57. Solon O (2016) Robots will eliminate 6% of all US jobs by 2021, report says. The Guardian. https://www.theguardian.com/technology/2016/sep/13/artificial-intelligence-robots-threat-jobs-forrester-report. Accessed 1 Apr 2017
  58. Sorbello R, Chella A, Giardina M, Nishio S, Ishiguro H (2016) An architecture for telenoid robot as empathic conversational android companion for elderly people. In: Menegatti E, Michael N, Berns K, Yamaguchi H (eds) Intelligent autonomous systems 13: Proceedings of the 13th international conference IAS-13. Springer International Publishing, Cham, pp 939–953. https://doi.org/10.1007/978-3-319-08338-4
  59. Straub DW, Loch KD, Hill CE (2003) Transfer of information technology to the Arab world: a test of cultural influence modeling. Adv Top Glob Inf Manag 2:141–172
  60. Syrdal DS, Dautenhahn K, Woods SN, Walters ML, Koay KL (2007) Looking good? Appearance preferences and robot personality inferences at zero acquaintance. In: AAAI spring symposium—technical report, SS-07-07, Stanford, CA, USA, pp 86–92
  61. Takayama L, Ju W, Nass C (2008) Beyond dirty, dangerous and dull: what everyday people think robots should do. In: Proceedings of the 3rd ACM/IEEE international conference on human robot interaction. HRI’08. ACM, New York, pp 25–32. https://doi.org/10.1145/1349822.1349827
  62. Tanaka F, Cicourel A, Movellan JR (2007) Socialization between toddlers and robots at an early childhood education center. Proc Natl Acad Sci 104(46):17954–17958
  63. van Straten CL, Smeekens I, Barakova E, Glennon J, Buitelaar J, Chen A (2017) Effects of robots’ intonation and bodily appearance on robot-mediated communicative treatment outcomes for children with autism spectrum disorder. Pers Ubiquitous Comput 22(2):379–390
  64. Woodcock C (2013) Aniconic/aniconism: looking at Mounir Fatmi. https://studylib.net/doc/10574732/. Accessed 1 Apr 2017
  65. Yogeeswaran K, Złotowski J, Livingstone M, Bartneck C, Sumioka H, Ishiguro H (2016) The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J Hum Robot Interact 5(2):29–47
  66. Złotowski J, Strasser E, Bartneck C (2014) Dimensions of anthropomorphism: from humanness to humanlikeness. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction. HRI’14. ACM, New York, pp 66–73. https://doi.org/10.1145/2559636.2559679
  67. Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C (2015) Anthropomorphism: opportunities and challenges in human–robot interaction. Int J Soc Robot 7(3):347–360. https://doi.org/10.1007/s12369-014-0267-6
  68. Złotowski J, Yogeeswaran K, Bartneck C (2017) Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int J Hum Comput Stud 100(April):48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Abu Dhabi University, Abu Dhabi, UAE
  2. CITEC, Bielefeld University, Bielefeld, Germany
  3. Queensland University of Technology, Brisbane, Australia