With this chapter, we begin the development of a Theory of the Unthinkable. The starting point is the recognition that technology—more accurately, technologists—generally promises more than it can possibly deliver. Indeed, unbridled promises and unbounded enthusiasm are part and parcel of the basic makeup of the inventors of all goods and services. Needless to say, such promises wouldn’t exist were it not for our insatiable need to satisfy our wildest, and endless, dreams and fantasies.

In return for the promise/desire for unrivaled beauty; instantaneous and constant connection with an endless array of friends; unsurpassed and lasting fame; immediate gratification of our every wish and desire; unparalleled popularity; superhuman power and strength; and indeed, defeating death itself,Footnote 1 we’ve sold our souls to the great God, or better yet Devil, of technology. But it’s a false bargain, for technology can never fully deliver, and hence satisfy, our starry-eyed dreams and hopes.

Fundamental to Thinking the Unthinkable is the consideration of as many ways as possible in which satisfying our unbridled dreams and hopes by means of technology leads not only to severe disappointment and collective distrust but, worse yet, to major crises. A major factor is the all-too-common failure to think about the actual contexts in which technologies are used versus the simplified and idealized ones that are typically presumed. We say more about this later in this chapter and in the next one as well. This chapter is only the beginning of a Theory of the Unthinkable. We introduce additional key elements as we proceed.

To reiterate, it’s part of the basic nature of technologists, if not the inventors of all goods and services, to see mainly, if not solely, the positive benefits of their marvelous creations and thus to downplay, if not ignore altogether, the negative consequences and disbenefits. Facebook is the example par excellence. It was ripe for systematic abuse and misuse. Indeed, if we had deliberately set out to design a specific instrument that could be systemically abused and exploited, we couldn’t have created a better, more powerful, and more dangerous mechanism. No wonder that, increasingly, Facebook, and in particular its founder, are portrayed in the worst of terms.Footnote 2

Indeed, cartoons in The Week feature absolutely horrid portrayals of Mark Zuckerberg. In one, he’s shown driving a car with the license plate, “Facebook soliciting kids,” along with the caption, “Hey, Little Girl! Twenty Bucks If You’ll Show Me All Your Private Stuff!”Footnote 3 In another, he’s standing among racists, lying pols, Nazis, and Russians along with the caption, “It’s Hard To Deny Friend Requests…”Footnote 4 From the standpoint of Crisis Management, such portrayals are literally the worst nightmare for any individual and organization. They are virtually impossible to recover from. As such, they are prominent examples of the Unthinkable. Who among us can envision ourselves and our organizations being shown in the most disparaging light, let alone know how to withstand it?

While not as portentous, a more bothersome and growing problem is electric scooters. Their primary attraction is that they are not only fun to ride but offer an easy way to get around crowded urban centers. The problem is that users are generally not responsible when they are finished riding them. Indeed, they are largely irresponsible. They simply dump them on sidewalks, thereby creating a hazard for passing pedestrians, especially the elderly. As such, their users are a prime example of arrested development, i.e., young teenagers who are oblivious to their surroundings.

As an important aside, arrested development, in the form of being largely unconcerned with the potential abuses, misuses, and unintended consequences of one’s creations, may be more common than we’d like to admit among young, primarily male technologists. It’s thereby one of the greatest barriers to the establishment of Socially Responsible Tech Companies. It’s further aided and abetted by the tremendous amounts of money at stake. And it’s also exacerbated by the tight tech communities, or “bubbles,” in which technologists work and live, which subtly and not so subtly encourage their members to behave and think alike.

To return to scooters, the failure to take into account the actual ways in which they would be used is responsible for the conspicuous threat they pose. No wonder that many cities have considered banning them outright.Footnote 5 The point is that technology always impacts, and in turn is impacted by, the larger social environment in which it operates. The failure lies in not taking into account the broader contexts in which all technologies are used and which they in turn affect.

If technologists and tech companies are largely insensitive to the unintended social consequences of their prized creations, how then does one foresee, and take appropriate steps to protect us from, the dangers that not only lurk in but result from all of our creations? By continuously and systematically Thinking the Unthinkable! As we discuss later, it needs to be a fundamental part of a new federal Office of Socially Responsible Technology, or OSRT. It also needs to be a basic part of the job of the chief technology officer, who is responsible for the use and development of technology whether or not the organization is directly involved in the production of technology. Indeed, it’s hard to imagine any organization that is not heavily involved with technology in one way or another. And the chief officers for Crisis Management and Strategy need to be directly involved as part of a close-knit corporate team.

As we indicated in the Preface, Denmark has taken the groundbreaking and unprecedented step of appointing an ambassador to Silicon Valley to represent its interests against the giant tech companies.Footnote 6 It did this after it determined that “tech behemoths now have as much power as many governments—if not more.”

Thinking the Unthinkable requires us to attempt what few are inclined, let alone able, to do: to imagine how every one of the proposed positive attributes and benefits of a technology can lead to and/or turn into its exact opposite. Such a task is virtually impossible for single individuals to accomplish entirely on their own. Thus, shortly after Facebook’s release, and ideally prior to it, it would have required teams of parents, psychologists, teachers, and even kids themselves to envision how it could serve as a major platform for relentless Cyber Bullying. Indeed, in retrospect, such a scenario was virtually guaranteed to occur and, in this sense, was perfectly predictable, that is, if one had wanted to think systematically and comprehensively about such things in the first place.

To reiterate, foreseeing, and guarding against, the dangers lurking within a technology requires us to do one of the most difficult things with which humans are charged: thinking of as many ways as possible to undermine our most cherished creations.

One of the most powerful ways of Thinking the Unthinkable is by uncovering the host of taken-for-granted assumptions that are made about the larger body of stakeholders who use and are affected by a technology, and then challenging those assumptions as ruthlessly as we can. For example, a commonly made assumption about users is that they are conscientious and their intentions are honorable. In short, they want to do the right thing. As a result, they will scrupulously follow the instructions for how to use a technology responsibly. Accordingly, they will not abuse or misuse it for malicious ends like Cyber Bullying or interference in our elections.

Thinking the Unthinkable also requires something even more difficult: the emotional fortitude to face up to tough problems long before they occur. For this reason, one cannot emphasize enough the front-page story in the November 15, 2018, edition of The New York Times that showed how Facebook not only ignored, but deliberately denied, clear and persistent warning signals of major problems.

A Theory of the Unthinkable: Part I

The following are some of the key elements of a Theory of the Unthinkable. We introduce additional elements as we proceed:

1. Major unanticipated and/or systematically ignored stakeholders who can and will deliberately cause harm or, at the very least, interfere with the ideal and idealized aims of a technology;

2. Major idealized attributes/assumptions about principal stakeholders that are later proven to be false;

3. Idealized assumptions about the contexts in which a technology will be used, which later prove to be false as well;

4. Idealized assumptions about the key benefits/properties of a technology, and how the exact opposite can and will occur;

5. Idealized assumptions about how one can contain the damage, if any, when a technology fails or, worse, is responsible for widespread harm, and how such assumptions are demonstrably false;

6. Idealized assumptions as to why there will not be widespread calls for the strict regulation or, worse, the outright banning and/or elimination of a technology.

We say a word about each.

First, “stakeholders” are all the parties who affect and are affected by the invention, licensing, marketing, operation, regulation, etc., of a technology. In short, they are deeply involved with every aspect and phase of a technology, from its initial conception and development, through its operation, to its eventual obsolescence and replacement. For instance, a basic taken-for-granted assumption is that malicious, nefarious actors will not abuse and/or misuse a technology for their own ill ends. In other words, the major players/stakeholders are not only known, but benign and responsible.

It’s also generally assumed that the contexts in which a technology will be used are known, predictable, benign, stable, and under control. For instance, Social Media will not be a major factor in the spread of disinformation, misinformation, hate speech, etc.

For another, it’s assumed that a technology will work as promised and that its benefits are clear and demonstrable. It will certainly not be responsible for major harm or injuries. It will work as planned because its properties are essentially known and predictable. In other words, there will be no major surprises.

Since it’s generally assumed that a technology will work as intended, and that its benefits are clear and incontestable, there is essentially no need for backup plans for Crisis Management (CM) since failure is all but precluded. In essence, planning for the worst is nothing but a waste of precious time and resources. In the same way, there will not be major calls for its elimination or strict control.
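
To make these elements more concrete, the following is a minimal sketch, in Python, of how they might be operationalized as a working audit checklist. Everything in it, from the class and function names to the sample assumption about users, is hypothetical and purely illustrative; it is one possible way of structuring the exercise, not a prescribed implementation.

```python
from dataclasses import dataclass, field

# The six elements of the Theory of the Unthinkable, Part I,
# restated as categories of idealized assumptions to be challenged.
# (Hypothetical labels for illustration only.)
ELEMENTS = [
    "Unanticipated or ignored stakeholders",
    "Idealized attributes of principal stakeholders",
    "Idealized contexts of use",
    "Idealized benefits/properties of the technology",
    "Idealized containment of damage",
    "Idealized absence of calls for regulation or banning",
]

@dataclass
class Assumption:
    element: str          # which of the six elements it falls under
    stated: str           # the taken-for-granted, idealized assumption
    inversion: str = ""   # the Unthinkable: its exact opposite, as a scenario
    challenged: bool = False

@dataclass
class UnthinkableAudit:
    technology: str
    assumptions: list = field(default_factory=list)

    def add(self, element: str, stated: str) -> None:
        assert element in ELEMENTS, f"unknown element: {element}"
        self.assumptions.append(Assumption(element, stated))

    def challenge(self, stated: str, inversion: str) -> None:
        # Record the worst-case inversion of an idealized assumption.
        for a in self.assumptions:
            if a.stated == stated:
                a.inversion, a.challenged = inversion, True

    def unchallenged(self) -> list:
        # Assumptions no one has yet tried to invert: the blind spots.
        return [a.stated for a in self.assumptions if not a.challenged]

# Illustrative usage with a hypothetical social platform:
audit = UnthinkableAudit("social platform")
audit.add(ELEMENTS[1], "Users are conscientious and their intentions are honorable")
audit.challenge(
    "Users are conscientious and their intentions are honorable",
    "Malicious actors exploit the platform for bullying and election interference",
)
print(audit.unchallenged())  # any idealized assumptions still uninverted
```

Whatever the representation, the essential move is the same: every idealized assumption is paired with its inversion, and any assumption left uninverted is flagged as a blind spot.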

Stakeholders

While we discuss each of the preceding elements in more detail throughout, we want to close this chapter with a more thorough examination of the first factor. In particular, we want to show how one can think systematically about the stakeholders one typically doesn’t take into account and, indeed, often doesn’t want to consider.

One of the best ways to think about the general kinds of stakeholders that are involved in any and all technologies is by means of expanding circles. In the innermost circle are all those parties who are involved in the creation, direct financing, leadership, and marketing of a technology, product, or service. It also includes all the immediate employees and supervisory personnel in the operations of a company.

In essence, assumptions are the presumed properties of stakeholders: what they are like and how they will behave. Thus, it’s commonly assumed, if not taken for granted, that one’s primary innermost stakeholders are capable, dedicated, dependable, honest, intelligent, loyal, trustworthy, etc. In other words, not only are they not malicious, but they are a demonstrable and valuable asset. They certainly will not cause or be responsible for harm in any way. Sadly, this is often not the case with disgruntled or disaffected workers who have engaged in workplace sabotage and violence. The trick is to be on the constant lookout for malicious internal actors without creating an oppressive, Orwellian-type workplace or culture that actually encourages precisely that which one wishes to prevent. One of the best, but not perfect, ways of doing this is to involve employees in deciding how they will “police themselves.”

In a later chapter, we explicitly examine the notion of harm. If they consider it at all, most companies take the avoidance of harm for granted. Rather than being considered explicitly, especially from the standpoint of Ethics, it remains “under the radar.”

The next circle is composed of all those parties immediately external to an institution or organization, e.g., competitors, the media, regulators, and most of all, the direct users of a technology, product, etc. Once again, it’s commonly assumed that users are competent, intelligent, etc., so that they will use a technology responsibly, as intended. They will certainly not abuse or misuse it. Worst of all are external malicious actors who are deliberately out to cause as much harm as possible through the improper/unintended use of a technology, and who are rarely considered explicitly and systematically.

This circle also includes the most vulnerable users and those who are most affected by a technology. What is benign to one party may be harmful to others. Similarly, not all are equally affected by what are deemed unintended consequences and side effects. In other words, what’s a positive benefit to one class of stakeholders is not necessarily one to others. We talk more later about a specific process for addressing this important issue.

Outer circles contain foreign actors, governments, etc.
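
The expanding-circles model lends itself naturally to a simple nested representation. The sketch below, again in Python, is purely illustrative; the circle names, the example stakeholders, and the paired idealized/Unthinkable views are our hypothetical choices, and any real analysis would be far richer.

```python
from dataclasses import dataclass

@dataclass
class Circle:
    name: str
    stakeholders: list      # parties in this circle
    idealized_view: str     # the taken-for-granted assumption about them
    unthinkable: str        # the inversion that must also be considered

# Hypothetical, illustrative circles, ordered from innermost outward.
circles = [
    Circle(
        "Innermost",
        ["founders", "financiers", "employees", "supervisors"],
        "Capable, dedicated, loyal; will never cause harm",
        "Disgruntled insiders engage in sabotage or workplace violence",
    ),
    Circle(
        "Immediate external",
        ["users", "vulnerable users", "competitors", "media", "regulators"],
        "Users are competent and will use the technology as intended",
        "External malicious actors deliberately abuse and misuse it",
    ),
    Circle(
        "Outer",
        ["foreign actors", "governments"],
        "Largely irrelevant to the technology's success",
        "State-level interference and demands for outright bans",
    ),
]

# Walking outward forces consideration of ever-broader stakeholders.
for c in circles:
    print(f"{c.name}: {', '.join(c.stakeholders)}")
    print(f"  Idealized view: {c.idealized_view}")
    print(f"  Unthinkable:    {c.unthinkable}")
```

The point of walking the circles from the innermost outward is that it forces one to state, and then invert, the taken-for-granted view of each successive group of stakeholders.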

Many more kinds of stakeholders exist that are not typically included when planning for a product or service. The following is a specific example of how they not only need to be addressed, but can be.

A company that produced a product intended only for adults discovered that young children were using it in ways that raised concerns. Somehow or other, the company realized that if young kids were using its product, then it needed to hire an expert in child development to give it continuous, ongoing advice on how it could redesign the product so that it would not pose an immediate danger to kids. Given that all technologies now affect children and in essence are used by everyone, all tech companies need to expand the base of their “normal personnel.”

In addition to considering explicitly as many parties as possible that are involved in the invention, promotion, distribution, etc., of a technology or product, one especially needs to consider those who will use it to further nefarious ends. Stakeholders, and the assumptions we make about them, are among the most important factors in Thinking the Unthinkable.

A comprehensive list of stakeholders, while extremely important, is only one factor in Thinking the Unthinkable. We not only need to say more about all of the other elements, but, even more, need to demonstrate a process that will help ensure that we have given them the serious consideration they require and, thus, that they will be acted upon in the right ways to ensure the health and well-being of all those who are affected in one way or another by technology.