“The Sandman” is the oldest tale in this book. Hoffmann published it two centuries ago, in 1817, in a collection entitled
Nachtstücke (or Night Pieces). The writing shows its age, and the protagonist Nathaniel is unlikely to appeal to modern readers. And yet of all the stories in this book “The Sandman” touches most directly on contemporary concerns.
In Hoffmann’s tale, Nathaniel falls in love with a beautifully constructed automaton—what we would now call an android, a robot with a human form. It’s unclear whether Nathaniel’s infatuation with Olimpia is merely evidence of the insanity that ultimately leads to his suicide or whether it exacerbates his unstable mental state. The reaction of the other characters to Olimpia is, I believe, more interesting. Although they sense something strange and unlikeable about her—she is the antithesis of Nathaniel’s fiancée, the intelligent and practically minded Clara—they don’t immediately identify her as an automaton. When the truth about Olimpia comes out, however, people start doubting their own powers of discrimination. Hoffmann thus raises numerous profound questions. Can one fall in love with an android? What sorts of human–android relationships will society find permissible? Will we always be able to distinguish human from android?
Asimov, in his famous series of positronic robot stories, examined such questions from a variety of angles. Asimov, however, was an SF writer attempting to imagine the distant future. Even the most technologically optimistic among us would agree we are still decades away from developing an android with the same wisdom and charisma as Asimov’s most successful robot, Daneel Olivaw. So at first glance it might seem reasonable to categorise these questions as being of interest only to the readers and writers of SF. But that would be a mistake. Our whole society should be grappling with them.
Engineers can’t yet manufacture a true android—a robot that looks, moves, and sounds human. AI experts can’t program a robot with the general intelligence necessary to navigate the world in all its messy complications. Psychologists can’t define the quality of “humanness”—a property we all recognize in each other—let alone tell engineers how to imbue a machine with this quality. With the current state of the art, devices are either clearly robots (acting efficiently in one particular niche, but with little attempt made to make them humaniform) or else clumsy constructs that might possess the surface appearance of a person but are not much more sophisticated than a waxwork dummy. In either case, most humans immediately know they are looking at non-humans. But advances in engineering, AI, and psychology have already enabled the construction of machines—androids or robots—that are “good enough” for some people to form an attachment to them.
Our evolutionary history has hard-wired our brains to attribute agency to objects that move in particular ways. This makes perfect sense. None of our ancestors would have gained an advantage by trying to second-guess the motivations of a rock rolling down a mountainside, but they
would have found it useful to apprehend the mental state of an approaching tiger. This emotional hard-wiring is so powerfully ingrained that some people can form an attachment to anything. Consider, for example, an experiment carried out in 2011 by the computer scientists John Harris and Ehud Sharlin.
Harris and Sharlin sat their human subjects in a room, with only a stick of balsa wood for company. The stick was in fact a simple robot: it was attached to a gearing mechanism through which a human operator (hidden from view) could use a joystick to make the balsa wood move in certain ways. Now, you’d think people would call a stick a stick. But a majority of subjects chose to attribute agency to the stick: they perceived it as possessing goals and internal thought processes. They tried to predict what the stick was “thinking” based on how it moved, just as our ancestors might have attributed a state of mind to a passing wild animal.
This experiment demonstrates that we shouldn’t be surprised if people start forming attachments to androids. The androids don’t need to be a perfect facsimile of the human form in order for emotional attachment to occur. After all, the androids will appear far more human than does a stick of balsa wood. Indeed, people are
forming attachments to less-than-perfect androids. Consider, for example, the Telenoid (see the accompanying figure).
A collection of early Telenoids (Credit: Osamu Iwasaki)
The Japanese roboticist Hiroshi Ishiguro developed the Telenoid—an 80 cm, 5 kg droid made of silicone rubber—in 2010. The Telenoid has shortened, stubby arms and, instead of legs, a rounded stump; the skin-colored surface is perfectly smooth; deep-set, jet-black eyes stare out from a face that’s calm and untroubled. Personally, I find the Telenoid creepy. But that’s not how dementia sufferers react.
For elderly dementia sufferers, a Telenoid is something to be cradled, rocked, cared for. This gives them some measure of peace. Presumably these patients don’t appreciate that the Telenoid’s movement is controlled by a human using teleoperation, nor that what they hold in their hands is rubber rather than flesh. Caregivers describe how their patients lavish affection on a Telenoid. Indeed, those caregivers have discovered they don’t need to provide patients with a sort-of-humanoid baby in order to see the same beneficial effect: the Paro therapeutic robot looks like a fluffy white seal, and many dementia sufferers treat it with as much affection as they would a real pet.
In both these cases the innate human capacity for empathy is being purposefully manipulated—but does this matter? Is it of any importance
how a Telenoid pseudo-baby or a Paro fake seal gives people comfort, so long as those people feel happier? Or should we be concerned about the ethics of outsourcing such an important task—providing emotional support to human beings—to robots?
Robots such as the Telenoid or the Paro have specialist applications in care settings, but most members of the general public are unlikely to find these devices more than momentarily diverting. The problem is these robots are dumb. Although they are effective at eliciting an emotional response, they lack the general intelligence necessary to keep our attention. If a youngster received one for a Christmas present then by Boxing Day the device’s limited repertoire of sayings and behaviours would be exhausted; by New Year those sayings and behaviours would have changed from “cute” to “irritating”; by Easter the device would be gathering dust. Even the much-hyped Sophia humanoid (see the accompanying figure), which since its activation in 2015 has received the sort of media attention usually reserved for human celebrities, is essentially just a mobile chatbot. But that’s the state of the art now. As we saw in the previous chapter, AI is advancing rapidly. At some point—a few decades from now, perhaps—a program capable of conversing meaningfully with people will be running inside an android capable of eliciting empathy while navigating the messy, ever-changing, everyday world. Such an android wouldn’t be like Hoffmann’s Olimpia; it would be much more like Asimov’s Daneel Olivaw.
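To see why such a device’s repertoire exhausts so quickly, consider a minimal keyword-matching chatbot. This is a deliberately simplified sketch, not the code inside any real robot; the topics, keywords, and canned phrases are all invented for illustration:

```python
import random

# A fixed, hand-written repertoire: every possible reply is listed here.
# The keywords and phrases are hypothetical, chosen only to show the idea.
RESPONSES = {
    "hello": ["Hello! Lovely to see you.", "Hi there!"],
    "weather": ["I do hope the sun comes out.", "Rainy days make me sleepy."],
    "name": ["My name is Robbie.", "What would you like to call me?"],
}
FALLBACK = ["How interesting!", "Tell me more."]

def reply(utterance: str) -> str:
    """Return a canned response for the first keyword found in the input."""
    for keyword, phrases in RESPONSES.items():
        if keyword in utterance.lower():
            return random.choice(phrases)
    return random.choice(FALLBACK)
```

A few minutes of conversation is enough to hear every phrase the table contains, and anything off-topic triggers the same fallbacks. Genuine general intelligence would require the program to generate novel, contextually appropriate responses rather than look them up.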
Sophia—a humanoid said to be modeled upon Audrey Hepburn (though personally I don’t see the resemblance). Sophia can mimic certain human gestures and can converse simply on a limited number of predefined topics (Credit: ITU Pictures)
These androids of the future could be constructed not only to negotiate the human world but also to be better looking than humans. (Different people of course define “better looking” in different ways. But that won’t matter: engineers will be able to manufacture an android to align with any particular person’s definition of “beautiful”.) An android’s skin can be made purer than a human’s skin; an android won’t suffer from disease; an android won’t get tired. An android could be designed to be extremely attractive to you. If such an android told you “I love you”—how would you react?
From our vantage point, early in the 21st century, the thought of someone entering into a serious relationship with an inanimate object is likely to produce sniggers. At present, the so-called “sexbots” some people fret about are hardly more advanced than blow-up rubber dolls. But that’s the situation
now. By the middle of the century there might be androids specifically manufactured with attractiveness and beauty in mind, androids capable of meaningful conversation and interaction. Human–android relationships would then be inevitable: it goes against what we know of human nature to suppose they won’t develop.
The questions raised by such relationships are legion. How would such relationships change human society? Might people prefer to partner with perfect androids rather than with imperfect humans? And if an android
acted as if it had feelings and said it had feelings would we accept it did have feelings? (After all, the Turing test suggests that if a robot acts as if it were thinking then we should accept it is thinking. Shouldn’t we grant the same when it comes to expressing emotions?) And if we argue that androids feel things merely because they are programmed so to do, does it follow that human emotions are also merely a matter of programming? Human programming might involve hormones and neurons rather than lines of code and transistors, but does that make a difference? And if we say it doesn’t make a difference, must we then change how we view ourselves—are humans and androids nothing more than computers, the former based on carbon, the latter based on silicon? In that case, should androids themselves have a say in all of this?
Isaac Asimov thought about these questions perhaps more broadly than anyone else in the 20th century. In his novels and short stories he examined numerous possible answers. Two of his robot novels from the 1950s presented opposing views of how humankind might react to technically advanced androids. In
The Caves of Steel Asimov imagined a future Earth on which people have chosen to ban all use of robots. Billions of humans are crowded together, living in their caves of steel, coping as best they can with resources made scarce by overpopulation. In this future, technophobia wins out—and the result is not pleasant (although for the people living in this society it of course seems perfectly natural).

His companion novel The Naked Sun is set in the same time period, but the action is located on the planet Solaria. Solarian society is in many ways the polar opposite of Earth’s: the human population is strictly controlled at 20,000 but for every person there are 10,000 robots. People are taught to avoid personal contact. They live alone on vast estates and choose to “view” rather than meet one another. For these people the issue of procreation is thoroughly embarrassing, and babies are born in “birthing centres”. This is a future in which technophilia wins out—and the result is equally unpleasant (although again for the people living in this society it all seems perfectly natural).

Elements of either of these futures seem plausible. If robots cause mass unemployment and trigger social changes the majority of people find offensive then perhaps their use will be curtailed. If androids turn out to be pleasant company, and a useful addition to our society, then perhaps they’ll become our helpmates.
Asimov foresaw other possibilities, though. Perhaps the future belongs neither to a humans-only world nor to one dominated by androids but to a world in which humans and androids merge. A few individuals have already taken tentative steps towards such a future. People have had implants fitted that enable them to “hear” colour or “feel” earthquakes; they have fitted prosthetic arms and legs to improve their movement; they have inserted subcutaneous biometric chips so their body status (blood pressure, stress level, and so on) is transmitted to their internet-connected environment, which can be programmed to respond appropriately. Could it be that advances in digital technology, combined with the advances in genetic technology we discussed in earlier chapters, will lead not to androids with their silicone perfection but to cyborgs with bizarre and outlandish forms?
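The “programmed to respond appropriately” idea can be sketched in a few lines. This is a toy illustration, assuming made-up field names, thresholds, and actions; it is not the logic of any real medical device:

```python
# A toy controller for an internet-connected environment that reacts to
# biometric readings transmitted by a subcutaneous chip. All field names,
# thresholds, and actions are hypothetical, purely for illustration.

def respond_to_body_status(readings: dict) -> list:
    """Map biometric readings to actions the connected home might take."""
    actions = []
    if readings.get("stress_level", 0) > 7:     # assumed 0-10 scale
        actions.append("dim the lights and play calming music")
    if readings.get("systolic_bp", 0) > 140:    # mmHg; assumed alert threshold
        actions.append("suggest contacting a doctor")
    if not actions:
        actions.append("no action needed")
    return actions
```

The point is not these particular rules but the principle: once body status is machine-readable, the environment around a person can be scripted to respond to it.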
The options outlined above belong to a future many of us won’t live to see. Right now, however, it’s already possible to discern the ever-increasing influence of digital technology on our society. As humans move to an increasingly internet-connected world we produce vast amounts of “digital exhaust”—data sets so large that only our current, clumsy versions of AI can sift through them and extract useful information, a task impossible for human workers. Androids and cyborgs are for tomorrow. Big data is changing the world today. And big data, in a sense, is the theme of Chapter