Given what has been explored in the preceding chapters, your first instinct may be to panic. I would encourage you not to; at least, not yet. There’s a lot we can do to shift the course of history and therefore a lot of cause for hope. If we panic, hope and excitement get lost in the shuffle of fear, chaos, and cortisol, which makes it much harder to thoughtfully and meaningfully take action. So let’s take a big relaxing breath and remember, as eBay founder Pierre Omidyar is fond of saying, “while change is certain, the direction is not.”Footnote 1 It is completely reasonable to believe we can still chart a new course and steer the tech industry, and the market forces that direct it, in a more socially conscious direction.

It has long been my contention that a lack of emotional intelligence is at the heart of the vast majority of Silicon Valley’s problems. Emotional unintelligence is not a diagnosable problem. You will never go to rehab, have an intervention, or present at the emergency room for being emotionally unintelligent. That’s not to say, however, that emotional unintelligence can’t affect your life in profound ways. It may mean you find yourself unable to connect with or understand others, control your emotions, retain employees, or have lasting and emotionally fulfilling relationships. A focus on developing what we might think of as more traditional markers of intelligence—rationality, problem-solving, analytical reasoning—often neglects more emotional and social types of intelligence. This type of thinking is particularly prominent in tech and has caused the industry to elevate the perceived importance of certain characteristics and skills while ignoring others. While the industry is not psychologically unwell, per se, it is profoundly lopsided.

Have you ever counted the number of times Zuckerberg says “I think” in an interview? Speaking from personal experience, and many hours in front of YouTube tallying Zuck’s “thinks” and “feels,” I can confirm it’s a lot—enough to both ensure an excellent drinking game and make you question if the Facebook CEO ever gets the feels. In 2018, Kara Swisher, founder of Recode, interviewed Zuckerberg about how his company’s many controversies, particularly around privacy and the mishandling of data, had affected him personally.

Kara Swisher: Can I ask you that, specifically about Myanmar? How did you feel about those killings and the blame that some people put on Facebook? Do you feel responsible for those deaths?

Mark Zuckerberg: I think that we have a responsibility to be doing more there.

Kara Swisher: I want to know how you felt.

Mark Zuckerberg: Yes, I think that there’s a terrible situation where there’s underlying sectarian violence and intention. It is clearly the responsibility of all of the players who were involved there. So, the government, civil society, the different folks who were involved, and I think that we have an important role, given the platform, that we play, so we need to make sure that we do what we need to.

Whenever Swisher asks a question about how he feels, even when she presses repeatedly and explicitly asks him to identify a feeling, Zuckerberg invariably answers in terms of what he thinks. She tries again later in the interview, this time in the context of Facebook’s social responsibility, Zuckerberg’s leadership role, and the lack of awareness plaguing the industry.

Kara Swisher: An issue I’ve talked about a lot is Silicon Valley’s responsibility, and taking responsibility. And taking responsibility of your dark things, and not being quite as optimistic, and a lot of people here have a problem with looking at that. How do you look at your responsibility, as a leader? As a leader of a massive company with enormous power?

Mark Zuckerberg: I think we have a responsibility to build the things that give people a voice and help people connect and help people build community, I think we also have a responsibility to recognize that the tools won’t always be used for good things and we need to be there and be ready to mitigate all the negative uses….

Kara Swisher: Yeah. How does that feel personally?

Mark Zuckerberg: I mean, personally, my take on this is that for the last 10 or 15 years, we have gotten mostly glowing and adoring attention from people, and if people wanna focus on some real issues for a couple of years, I’m fine with it.Footnote 2

In the course of the interview, which lasts over 80 minutes, Swisher says “feel” four times and “think” twice; Zuckerberg says “feel” once and “think” 28 times.Footnote 3 Zuckerberg’s tendency to prioritize thinking over feeling is indicative of a larger pattern of reasoning and deduction that demonstrates the cognitive lopsidedness of the tech industry. What began as a questionable pronouncement about the skills necessary for engineering gave us an industry flush with a single, circumscribed type of Zuckerberg-esque intelligence. By shaping the narrative that successful engineers like puzzles but not people, psychologists William Cannon and Dallis Perry laid the foundations for an industry that would, decades later, find itself profoundly unbalanced and psychologically bankrupt in terms of its emotional intelligence.
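(If you would like to check my math, a tally like this is easy to reproduce. Below is a minimal Python sketch, assuming a transcript saved as plain text with each turn on a “Speaker Name: utterance” line; the filename and the counts shown in the comments are illustrative, not Recode’s published format.)

```python
import re
from collections import Counter

def tally(transcript: str, words=("think", "feel")) -> dict:
    """Count exact occurrences of target words, per speaker, in a
    transcript formatted as 'Speaker Name: utterance' lines."""
    counts: dict = {}
    speaker = None
    for line in transcript.splitlines():
        match = re.match(r"([A-Z][\w .]*):\s*(.*)", line)
        if match:                        # a new speaker turn
            speaker, text = match.groups()
        else:                            # continuation of the previous turn
            text = line
        if speaker is None:
            continue
        tokens = re.findall(r"[a-z']+", text.lower())
        bucket = counts.setdefault(speaker, Counter())
        for word in words:
            bucket[word] += tokens.count(word)  # exact match: "feels" != "feel"
    return counts

# Hypothetical usage; filename and output are illustrative only:
# tally(open("swisher_zuckerberg_2018.txt").read())
# -> {'Kara Swisher': Counter({'feel': 4, 'think': 2}),
#     'Mark Zuckerberg': Counter({'think': 28, 'feel': 1})}
```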

The products, priorities, and behaviors of many companies and individuals within the tech community are indicative of an industry that does not understand the importance of emotional intelligence—or perhaps does not even understand the concept itself. Where IQ represents one’s intelligence in terms of reasoning ability (as measured by problem-solving tests), one’s EQ, or emotional quotient, measures the capacity for emotional intelligence. Emotional intelligence is defined as “the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically.”Footnote 4 According to expert Daniel Goleman, emotional intelligence can be broken down into five core skillsets: self-awareness, emotional control, self-motivation, empathy, and relationship skills.Footnote 5 While no one could accuse Silicon Valley of lacking self-motivation (albeit, at times, motivation of a morally questionable variety), industry execs’ capacity for self-awareness, emotional control, empathy, and social skills leaves a lot to be desired. This widespread lack of emotional intelligence in Silicon Valley has precluded a more holistic and sophisticated cognitive approach that embraces both rational and emotional skillsets, the effects of which have begun to materialize.

Self-Awareness

James Hollis, a rather brilliant psychoanalyst, once wrote that “no prisons are more confining than the ones of which we are unaware.”Footnote 6 The first step to shift either a personal or cultural narrative in a more positive direction is to grow our awareness. Self-awareness can be broken down into two categories: internal self-awareness, which “represents how clearly we see our own values, passions, aspirations, fit with our environment, reactions (including thoughts, feelings, behaviors, strengths, and weaknesses), and impact on others;” and external self-awareness, which demonstrates an understanding of “how other people view us, in terms of those same factors listed above.”Footnote 7 Research has shown that increasing awareness of ourselves and others can increase empathy, creativity, and self-control, and can help us navigate the world in a more informed and conscious way.Footnote 8 A 2015 study found that self-awareness is also associated with improved communication, better leadership, and a greater appreciation of diversityFootnote 9—all of which could stand to be disrupted in the tech industry.

M.G. Siegler has lamented what he describes as a “complete and utter lack of self-awareness” demonstrated throughout the industry, and by many of the industry’s most prominent leaders, which Siegler argues is indicative of a larger pattern of obliviousness in Silicon Valley characterized by arrogance, insularity, and an abdication of responsibility.Footnote 10 Nick Thompson and Fred Vogelstein explain how Facebook’s handling of the Cambridge Analytica scandal, for example, in which Zuckerberg denied and downplayed the situation, was rooted in an ignorance of the company’s true impacts, combined with a rejection of any liability: “Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal…. Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed.”Footnote 11 This example illustrates a profound lack of both self-awareness and cultural awareness on Zuckerberg’s part, as well as an abdication of responsibility, a combination that proved disastrous to Facebook’s public image. What began as a multi-year apology tour has devolved into congressional and parliamentary hearings, wherein Zuckerberg and Sandberg have been forced to assume responsibility for the company’s actions and awkwardly and vaguely promise to do better. Facebook is not the only company that has failed to maintain a modicum of awareness. Twitter and Google have come under increasing scrutiny for their handling of customer data, anti-competitive practices, and effects on users’ wellbeing; Amazon and Tesla have been forced to acknowledge their substandard treatment of employees; and the industry as a whole has been forced to reckon with its lack of diversity and inclusion. Despite the difference in the nature of these transgressions, the psychological quality that connects them is the same. A lack of understanding, or perhaps a willful ignorance, of the emerging issues and challenges created by their products, services, and business practices has rendered the industry increasingly unaccountable, untrustworthy, and profoundly unaware.

What, then, is the answer to increased awareness in Silicon Valley? How do we begin to even out the mental lopsidedness of the tech mindset before the industry implodes in a fire of arrogance and socially unaware, morally reprehensible behaviors? According to Ted Chiang, the answer is the same as it would be for anyone seeking psychological growth: we increase the capacity for psychological insight. Chiang explains that,

[i]n psychology, the term insight is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking.Footnote 12

Increasing one’s sophistication of thought to include self-reflection is a relatively straightforward process. It is not, however, easy, particularly when the insights one is forced to reckon with include the propagation of economic inequality, job displacement, the undermining of democracy, the rise of misinformation, and, in the case of Facebook, the fact “that the machine [they’ve] built to bring people together is being used to tear them apart.”Footnote 13

Self-reflection and insight are, more often than not, a result of our experience with others. We have evolved to be highly social creatures, and change is a highly collaborative process, often driven by our interactions with others, whether in the form of feedback, criticism, or disagreement.

Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.Footnote 14

A barrier to this process that often arises in Silicon Valley, particularly around executives with high degrees of power, is an insularity of thought and resistance to feedback. James O’Toole, a business professor at the University of Denver who specializes in leadership, ethics, and corporate culture, relates this back to the paradox of power: as an individual’s power grows, his willingness to listen and capacity for empathy shrink, undermining the feedback loop and the cultivation of self-awareness.Footnote 15 At Facebook, for example, tech journalist Salvador Rodriguez interviewed over a dozen former employees, who described an environment in which they were discouraged from speaking up, allowing the problems they saw to go unchecked and proliferate. Some employees likened the company to a “bubble” and a “cult” and said there was no option for employees other than to pretend they loved working there.Footnote 16 Not surprisingly, employee confidence fell over 30 percentage points between 2017 and 2018, according to internal employee surveys.Footnote 17

In the short term, then, we cannot put the onus of responsibility solely on tech companies and executives, many of whom will lack the toolkit to look either inwardly or critically. Growing the qualities necessary to enrich the industry’s self-awareness will require building a culture of continual self-improvement and prioritizing qualities such as humility, collaboration, and reflection. Simultaneously, the public, government, journalists, and academics alike must point to the behaviors and norms of tech companies that fail to meet either the ethical or legal standards expected of them. As technology moves forward and the stakes become higher—highly capable AI, cyber warfare, deepfakes, mass automation, DNA modification—a willingness to learn about, draw attention to, and engage creatively with threats and social challenges, such that potential risks are mitigated in advance rather than rectified and apologized for after the fact, will hinge on improving our collective awareness, both within and outside of the tech community.

Emotional Control

Closely related to the subject of self-awareness is the concept of emotional control. Emotional control is a marker of emotional intelligence, demonstrated by the capacity for self-discipline in one’s words and actions. While Silicon Valley’s lack of emotional control doesn’t manifest as overtly as its systematic lack of self-awareness, the industry’s failure to self-regulate is hugely problematic. This can be seen in the behaviors of companies and executives who repeatedly fail, according to author Ted Chiang, to “tak[e] a step back and [ask] whether their current course of action is really a good idea.”Footnote 18 We may not be sensible all the time, but being able to exercise impulse control is a hugely useful quality, which a subset of tech executives appear to lack.

There is no shortage of examples in Silicon Valley of what can happen when one’s ego is disproportionate to one’s capacity for self-control. A series of cultural missteps, imprudent business decisions, impulsive emails, and shouting matches eventually cost Uber CEO Travis Kalanick control of the company he built. Elon Musk’s lack of self-control has been similarly visible, primarily in his endless string of bizarre and seemingly spontaneous tweets, which range from calling British rescue worker Vern Unsworth a pedophile, to claiming Tesla was going private, a false statement that resulted in Musk stepping down as chairman of the company and a lawsuit from the S.E.C. accusing Musk and Tesla of securities fraud. Kalanick and Musk are bold thinkers who took on important social problems, such as transportation, electronic banking, and carbon emissions; they have, however, also demonstrated an inability to self-regulate. Executives of any company in any industry would do well to remember the importance of understanding and mediating one’s emotional reactions.

Personality has three main parts: (1) the receiving portion (receptors) that looks out on stimuli (attention and appreciation are its great functions); (2) a responding side (effectors) that looks toward behavior or response; and (3) that which lies between stimulus and response whose function is to correlate and adjust behavior to stimulus. This third region is where our real personal values lie. This is where we grow most.Footnote 19

Emotional control is a marker of both psychological maturity and emotional sophistication. At a time when the industry is having difficulty comporting itself appropriately, it would behoove Silicon Valley to encourage self-awareness and emotional regulation, particularly among its leadership.

Social Skills

In addition to self-awareness and emotional control, the two final components of Goleman’s model of emotional intelligence are social skills and empathy. Social skills are relatively self-explanatory: our interactions with others are marked by both verbal and non-verbal forms of communication, which can either facilitate connection or inhibit it. Verbal communication includes things like our tone, words, and pace of speech, while non-verbal communication includes things like our body language, gestures, and eye contact. Both verbal and non-verbal communication include acts of reinforcement, such as nodding, “mmm-hmm-ing,” and warm facial expressions, which serve as an acknowledgement of others and build rapport by facilitating a sense of reciprocity in conversation. Individuals with good social skills are often adept at mirroring others, active listening, and adjusting their actions and words in relation to others; their conversations are more likely to flow and they are more likely to instill a sense of connection in their interactions. Those with fewer social skills are more likely to be experienced as awkward and may leave those they speak to feeling confused, unheard, or frustrated.

The tech industry is many things, but socially gifted is not one of them. Indeed, the awkwardness of the industry is as intrinsic to its identity as its ability to code, love of scooters and hoodies, and proclivity for delivery apps of all kinds. Women who date in the Bay Area, where there is a comparatively high number of single men, have a saying that captures the tech demographic, which comprises a substantial part of the dating pool: “the odds are good, but the goods are odd.” While there are plenty of lovely, warm people in tech, the awkwardness that plagues a large subset of the industry tends to be constellated around a lack of social, interpersonal, and relational skills. This may manifest in an inability to communicate in a socially normative way (lack of active listening, or talking too much or too little), missing social cues, or a lack of interpersonal gestures of recognition (eye contact, nodding, etc.).

While some (including yours truly) find this quality of Silicon Valley by turns endearing, amusing, and weirdly attractive, the ability to competently understand and communicate with others has important implications not only for our relationships, but also for society more broadly. Social skills encourage strong relationships, facilitate learning, and build trust, compassion, collaboration, and a sense of mutuality between oneself and others. Social intelligence, aside from making our lives easier when it comes to interacting with others, enables us to consider the implications of our actions and make better, more socially minded decisions.

Empathy

Empathy is a more specific type of interpersonal skill. Where sympathy is a feeling of pity or sorrow for someone’s circumstances or misfortune, empathy is the capacity to understand and share someone’s feelings by entering imaginatively into their experience.Footnote 20 Perhaps more than any other type of emotional competence, empathy helps us form bonds and positive relationships by allowing us to better appreciate the experiences, emotions, and perspectives of others.Footnote 21

Two experts on the subject of empathy, Peter Bazalgette and Simon Baron-Cohen, suggest this particular emotional skill may have even more pronounced and extensive impacts than more general social competence. Bazalgette calls empathy “a fundamental human attribute, without which mutually cooperative societies cannot function,”Footnote 22 while Baron-Cohen argues empathy is “the most valuable resource in our world.”Footnote 23 Bazalgette and Baron-Cohen’s arguments are supported by dozens of studies that illustrate the extent and range of positive impacts of empathy on society, including a 2011 study linking empathy to prosocial behaviors.Footnote 24 A separate study published the same year linked the neurobiological mechanism of empathetic behavior to human evolution, suggesting we have evolved to be empathetic creatures.Footnote 25 It is not an exaggeration to say that empathy and social perceptiveness are highly correlated with our success as a species.

The years I’ve spent studying the tech industry have proven, again and again, how exceptionally talented the men and women who work in Silicon Valley are. Entrepreneurs envision solutions to problems most of us don’t even know exist, like identifying homoglyphs or cryptographic signing of software; engineers consistently build technically beautiful products, underpinned by elegant code that makes everything from thermostats to email to electric vehicles function seamlessly and securely. It is a place populated by truly intelligent people, who happen to conceptualize intelligence in a very specific way: as a blend of cognitive skills that center predominantly on logic, inference, and problem-solving. While these skills are practically useful, particularly in engineering and entrepreneurship, they do not capture the full range of human mental abilities, including those rooted in social and emotional competence.
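To make the homoglyph example concrete: a classic trick is registering a domain that swaps a Latin letter for a visually identical character from another alphabet. A minimal Python sketch of one common defensive heuristic, mixed-script detection, might look like the following (the function names are my own, and production detectors, such as those in browsers, use far more elaborate confusable tables):

```python
import unicodedata

def scripts_used(label: str) -> set:
    """Approximate each letter's script by the first word of its
    Unicode name, e.g. 'LATIN SMALL LETTER A' -> 'LATIN'."""
    return {
        unicodedata.name(ch, "UNKNOWN").split(" ")[0]
        for ch in label
        if ch.isalpha()
    }

def looks_like_homoglyph_spoof(domain: str) -> bool:
    """Flag domain labels that mix alphabets, a common spoofing tell."""
    return len(scripts_used(domain.split(".")[0])) > 1

print(looks_like_homoglyph_spoof("paypal.com"))  # False: all Latin letters
print(looks_like_homoglyph_spoof("pаypal.com"))  # True: second letter is Cyrillic 'а'
```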

In a 2016 article for The New Yorker, Om Malik argued that “Silicon Valley’s biggest failing is not poor marketing of its products, or follow-through on promises, but, rather, the distinct lack of empathy for those whose lives are disturbed by its technological wizardry.”Footnote 26 While technological change is typically associated with progress, Malik points out that new technology also represents the displacement of jobs and the destruction of legacy industries, on which many people rely for both their livelihoods and their identity. This lack of empathy for the disruption its progress causes ordinary people is central to what Malik views as the industry’s biggest failure of emotional intelligence.

My hope is that we in the technology industry will … try to understand the impact of whiplashing change on a generation of our fellow-citizens who feel hopeless and left behind…. when you are a data-driven oligarchy like Facebook, Google, Amazon, or Uber, you can’t really wash your hands of the impact of your algorithms and your ability to shape popular sentiment in our society. We are not just talking about the ability to influence voters with fake news. If you are Amazon, you have to acknowledge that you are slowly corroding the retail sector, which employs many people in this country.Footnote 27

For many, the increasing speed of technology changes the fabric of the world they know and understand, leading them to feel not only that they are being left behind, but that their identity no longer has meaning.

It is time for our industry to pause and take a moment to think: as technology finds its way into our daily existence in new and previously unimagined ways, we need to learn about those who are threatened by it. Empathy is not a buzzword but something to be practiced.Footnote 28

Malik believes it is important that the tech industry acknowledge the role it has played in leaving a large segment of the population both economically and ideologically behind. A failure to do so, he warns, will leave Silicon Valley “an even bigger villain in the popular imagination, much like its East Coast counterpart, Wall Street.”Footnote 29

There are many theories as to why Silicon Valley might lack empathy, which include the financial success, insularity, and hierarchical nature of tech companies. Malik suggests that the industry’s focus on profits, growth, and engagement has decreased the likelihood that companies will pause to consider the social effects their products, services, and business models have on their customers or society. Another factor that may feed Silicon Valley’s empathy deficit is its well-documented insularity. Those who work in tech’s homogenous culture, Malik explains, may “lack the texture of reality outside the technology bubble.”Footnote 30 In a workforce that lacks diversity, there are simply fewer divergent perspectives available, which means the industry as a whole may lack the requisite range of experience not only to solve the problems it faces, but also to creatively address the issues that require more developed emotional awareness. Studies have repeatedly shown that a lack of diversity leads to decreased cognitive flexibility and diminished creativity, while exposure to different types of people and experiences leads to creativity, more sophisticated thinking, and increased levels of empathy.Footnote 31, Footnote 32 A final contributor to the industry’s empathy problem, according to Ben Tarnoff, is the hierarchical management arrangement of many Silicon Valley tech companies. Even if individuals do express empathy for their end users, Tarnoff explains, a majority of tech corporations are arranged in such a way that there is often “no mechanism by which they can really act on it. There are severe limitations on what an individual worker can do in these firms.”Footnote 33 The systematic repression of employee feedback in certain Silicon Valley companies compounds the problem of emotional intelligence in tech by cutting off a potentially vital line of insight into product design, making it more difficult to effectively mitigate unempathetic practices, products, and outcomes. Whatever the reason, many within the industry have begun to recognize and lobby for increased empathy, including engineers Clementine Pirlot and April Wensel, who have made compelling arguments for instilling more compassion, empathy, and emotional intelligence in tech.Footnote 34

Leadership

Changes to the industry’s cultural priorities will not be realized without the guidance of exceptionally competent, courageous, and emotionally intelligent leadership. The current climate of cultural uncertainty and chaos, of being unmoored from a world order whose trajectory only a decade ago felt largely predictable, requires leaders who are not only visionaries but who also take the more nuanced responsibilities of leadership seriously. Successful leaders help people feel more hopeful, secure, and cared for, and also more “anchored, resilient, and propelled” into a better future, according to author and journalist Thomas Friedman.Footnote 35 A leader, according to Umair Haque, “is someone who takes people, and the world, forward, inward, and upward — not backward.”Footnote 36 Good leaders are consistent, honest, and responsible; they demonstrate transparency and integrity; show up; and define the environment and priorities of their company or industry.

While there are certainly glimpses of inspiration to be found among Silicon Valley’s leaders—Jaron Lanier, Dave Coplin, Tim Berners-Lee, Reid Hoffman, Tim Cook, and Marc Benioff, to name a few—much of Silicon Valley appears to be experiencing a leadership drought. While a subset of leaders aim to uphold the original intentions of the tech industry, which focused on openness, sharing, and advancing a shared humanist vision, a competing set of more corporate priorities has consumed the attention of many Silicon Valley execs. As these priorities—profit, market dominance, and shareholder maximization—have woven their way into the collective psyche of the tech industry, the original values that defined this inspired, intelligent, and irreverent community have been overshadowed by more pressing financial objectives, and in many organizations have vanished entirely. Facebook co-founder Chris Hughes explains that the influence of the technocapitalist objective in Silicon Valley will almost always trump the social aims and values its leaders profess. Hughes has been dismayed to find that the leaders of most tech companies “prefer to focus on the bottom lines of their companies rather than also talk about their companies’ relationship to their workers and society.”Footnote 37 Herein lies the problem with entrusting the future to the current leaders of Silicon Valley: the values of technocapitalism are not the values that will make the world a better place; they are the values that will line the pockets of those who hold the most stock in the biggest companies.

Matt Rosoff, the editorial director of technology at CNBC, traces Facebook’s current existential and PR crises back to the troubling lack of leadership displayed by Zuckerberg and Sandberg. Rather than honestly and openly addressing the very real problems on the platform, the company’s “top execs are selling, spinning and staying silent. That’s not leadership. And when leaders fail to lead, companies fail.”Footnote 38 Frederic Filloux, a professor of journalism at the Paris Institute of Political Studies, has compared the leadership at Facebook to an authoritarian system, noting the company shares the same building blocks as a dictatorship, including strong ideology, hyper-centralized leadership, a cult-like environment, a desire to control all aspects of society, and little tolerance for dissenting opinions. Filloux explains these qualities inhibit Facebook from effectively addressing problems like misinformation, as its true motivations are financially driven and its leadership remains centralized with Zuckerberg: “Facebook’s DNA is based on the unchallenged power of an exceptional but morally flawed — or at least dangerously immature — leader who sees the world as a gigantic monetization playground.”Footnote 39 Filloux’s point was illustrated at Facebook’s 2019 annual shareholder meeting, at which 68% of external shareholders voted to remove Zuckerberg as chairman of the board and hire an external chairperson. As Zuckerberg holds approximately 60% of the voting power at Facebook, however, no one but Zuckerberg can vote Zuckerberg out.Footnote 40

Rosoff argues that, while Zuckerberg and Sandberg have been given multiple opportunities to course-correct and assume accountability for their actions, at every turn they have failed to own their responsibility, demonstrate humility, and instill better values in their organization.

Facebook is facing an existential test, and its leadership is failing to address it. Good leaders admit mistakes, apologize quickly, show up where they’re needed and show their belief in the company by keeping skin in the game. Facebook executives, in contrast, react to negative news with spin and attempts to bury it. Throughout the last year, every time bad news has broken, executives have downplayed its significance. Look at its public statements last year about how many people had seen Russian-bought election ads—first it was 10 million, then it was 126 million.Footnote 41

Despite changing its unofficial motto, Zuckerberg’s company has continued to move fast and break things in the interests of growth and profits. The company’s most recent promise—to orient its platform around privacy—has been lauded by some and derided by others, who question how privacy can co-exist with Facebook’s business model. Some propose Zuckerberg’s pivot is yet more PR spin, or an attempt to enmesh Facebook’s services such that they cannot be dismantled by forthcoming antitrust laws. Like so many CEOs who purport to be leaders, Zuckerberg has underestimated the correlation between mature, socially responsible leadership and the long-term success of his company.

The failure of leadership that plagues much of Silicon Valley rests on a fundamental misunderstanding of what leadership actually entails, and how to do it. Leadership author and expert Max De Pree describes the simple (but by no means easy) art of leadership as follows:

The first responsibility of a leader is to define reality. The last is to say thank you. In between the two, the leader must become a servant and a debtor. That sums up the progress of an artful leader…. The art of leadership requires us to think about the leader-as-steward in terms of relationships… of momentum and effectiveness, of civility and values.Footnote 42

Tech execs tend to excel at the first of De Pree’s standards: defining reality. Have you ever watched clips of Steve Jobs showing off the first iPhone, read excerpts from Tim Berners-Lee on reinventing the web, or heard Elon Musk paint a picture of a carbon-neutral future? It takes an exceptionally visionary and brilliant mind to get hundreds of thousands of people excited about solar panels and batteries, and Musk repeatedly demonstrates the hugely effective and ambitious reality-setting skills that have made him the visionary leader of not one but multiple companies, including PayPal, Tesla, SpaceX, and the Boring Company. No one could level a complaint that tech execs lack vision—what they could perhaps stand to develop are the qualities required of successful leaders once they have defined their vision: self-awareness, emotional intelligence, and values that seek to address real-world problems.

One of the problems facing Silicon Valley founders is that the skills needed to be an effective entrepreneur are entirely different from those needed to be an effective leader of a multi-national corporation. Derek Lidow, author of Building on Bedrock and Startup Leadership, explains that the transition from one role to the other can be tricky when entrepreneurs fail to recognize and develop the qualities demanded by their new role as a business leader, which rest on an underlying capacity for self-knowledge. “To lead others, you must first lead yourself, and leading yourself requires that you must realistically understand your capabilities—both strengths and weaknesses.”Footnote 43 Lidow makes a compelling case for mastering the skills of self-awareness and relationship building, as well as the necessity of understanding one’s own motivations, in order to be an effective entrepreneurial leader.

Values

The lack of emotional intelligence in Silicon Valley is underscored by a scarcity of the type of values that would make the world a more equitable, safe, and sustainable place. A conversation has recently begun to emerge about the role of ethics in technology—how important they are, how we might go about defining ethical frameworks for tech products, and how to enforce and achieve them. It has become increasingly accepted that ethics are desperately needed in everything from computer science classrooms to leadership training.Footnote 44 Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, explains that engineers, “designers, computer scientists and CTOs all need to understand the ethical implications of” the technology they create if they are to effectively mitigate the negative impacts of their products and services.Footnote 45 A 2018 study published in Science similarly concluded that ethical frameworks are central to the development of future AI technology:

Artificial intelligence (AI) is not just a new technology that requires regulation. It is a powerful force that is reshaping daily practices, personal and professional interactions, and environments. For the well-being of humanity it is crucial that this power is used as a force of good. Ethics plays a key role in this process by ensuring that regulations of AI harness its potential while mitigating its risks.Footnote 46

A deeper awareness of ethical concerns within Silicon Valley would not only help direct technology in a more prosocial direction, but could mitigate many of the threats we currently face, such as job displacement, economic inequality, and election interference.

What conversations about ethics tend to miss is the role values play in informing ethical frameworks. (There is also a tendency to conflate the two, though they are importantly different.) According to the Oxford English Dictionary, ethics are “a set of moral principles, especially ones relating to or affirming a specified group, field, or form of conduct,” while values are “the regard that something is held to deserve; the importance, worth, or usefulness of something.” Where ethics and morality are systems—codes, principles, standards of conduct—values are our judgments of what is worthy or important in life. What we value informs our ethics; without understanding what we value, it is impossible to advocate for any particular set of ethics that might meaningfully direct corporate behavior in one way or another. The primary ethical threat posed by Silicon Valley is that it is utterly unaware of its values.

In order to understand what we value, we need to understand what drives and motivates us. According to Bay Area psychotherapist Brooke Dougherty, our values are a facet of our psychology, in that how we are shaped informs what we come to value, which in turn affects what we believe and how we act. Not all values are virtuous, nor are they necessarily conscious, but they are all part of who we are. As outlined in Chap. 5, the primary motivation of the industry is profit, specifically, a kind of profit that values shareholder maximization above all else, the effects of which are economically unsustainable. The value that underlies this motivation is money. Other values core to Silicon Valley and its corporations more broadly include innovation, creativity, convenience, problem-solving, work ethic, growth, speed, and disruption. Individually, these are neither negative nor particularly problematic. Naturally an industry wants to grow; naturally it cares about profit. Taken together, however, they represent a troubling dynamic, in which the most influential industry in the world is organized around speed rather than reflection and planning, convenience over connection, and individualism above social good.

In addition to more openly discussing the stated values and practiced values of Silicon Valley, we might also pause to reassess our broader social and cultural values. The mores that govern technological development will ideally represent the needs and values of everyone who uses technology, rather than the small subset of those who design, deploy, and profit from it. To do this effectively, it is useful to understand what we collectively value, and how we would like to see the world progress. While I’m not a fan of fearmongering, this is a conversation we might want to sit down and have sooner rather than later. Professors Evan Selinger and Brett Frischmann remind us that if we fail to address “critical social policy questions… proactively while systems are being designed, built, and tested,” we run the risk that unhealthy values will become “entrenched as they’re embedded in the technology.”Footnote 47

Following a rather impressive string of missteps, breaches of public trust, and apology tours, can we reasonably trust the industry to regulate itself, create a system of ethics, and act in accordance with its stated values? I would argue we cannot. Fool us once, shame on you; fool us hundreds of times, still shame on you, but also, really, what the hell were we thinking letting you blatantly flout the law, ignore the needs of your users, and repeatedly break your promises, all while paying next to no corporate tax and buying up all your competition? Can big tech be trusted? If we are to base our response on the data associated with its patterns of behavior, the answer is no. This is not to say that the industry cannot change, merely that it needs some assistance to do so. What happens in the next five years will irreversibly affect what happens in the next fifty. Whether technology serves humanity in a positive way or continues to concentrate wealth in the hands of an elite few, leave workers behind, and undermine democracy is a question that will be answered in the next several years. Such problems are simply far too important to leave in the hands of the people who created them.

Why Tech Can’t Fix Itself

There are many reasons the tech industry is not in a position to remedy the problems it has brought about, several of which stand out as particularly problematic. First, there is a tendency among those in tech to address the flaws of their technology with more technology. Evgeny Morozov refers to this as technological solutionism, an ideology that imagines better-engineered algorithms can effectively answer all problems, including those caused by engineering and algorithms. The second is that taking the steps necessary to truly fix many of the tech industry’s problems, particularly those perpetuated by the attention economy and advertising business model, is at odds with how most companies generate growth and revenue. The final complication of self-regulation is the problem of perpetuating the thinking ingrained in tech and assuming those who got the industry into its current predicament can be entrusted to get it out.

Employing technology to fix technology is the kind of approach one might expect from an industry known for its insularity and a somewhat blinkered approach to problem-solving. The notion that more tech is the answer to bad tech is psychologically curious at best, irrational and self-serving at worst; and yet it happens constantly, not only within the tech industry, but throughout society. Our increased reliance on technical solutions is rooted in a cultural narrative that proclaims the boundless power of science and technology—we put a man on the moon; we put a communication device in the hands of nearly every human on the planet; we recently put a second case of HIV into remission; we made cars that can drive themselves. The reason the narrative exists is that, to a degree, it’s true. We have accomplished extraordinary things in the fields of science and technology, of which we should be exceedingly proud. The effect of these accomplishments, however, particularly as they stack up in greater numbers and at a dizzying pace, is the false assumption that science and technology can solve all our problems. Thanks to recent scientific advances, many of them answers to problems we previously considered “unsolvable,” Yuval Harari explains, many people have come to believe all problems can be solved by the right application of science, engineering, or technology.Footnote 48 Technologists, in particular, have become fond of the idea “that science and technology hold the answers to all our problems,”Footnote 49 including those created by technology.

As convenient as that narrative would be, the truth is that not all problems can be coded away. How we relate to one another online should not simply be a matter of automatically flagging harmful content, but of setting and enforcing communication standards across all social platforms. Offering online education is not a commensurate solution to the elimination of whole sectors of middle-class jobs. Removing the Facebook pages of Russian-based propaganda organizations does not address the existential catastrophe of misinformation. Relying on code and algorithms to fix the problems caused by code and algorithms is a deeply flawed approach that misses the issue—and the irony—of trying to engineer away social, political, and human problems. Harari explains that while scientific knowledge has “led to astounding breakthroughs in astronomy, physics, medicine and multiple other disciplines,” it has one central drawback, in that science cannot “deal with questions of value and meaning.”Footnote 50 There is simply no purely technical solution to questions about how to handle wealth concentration, body shaming, or the proliferation of misinformation. These each require pluralistic moral discussions, not updated code and algorithms.

Immature Silicon Valley organizations are famous for relying on data to make significant and sweeping decisions about policy, practice, and standards, seemingly operating under the belief that no problem is too big, complicated, or human to be solved with some combination of 1s and 0s. This misplaced confidence was at the heart of a 2018 controversy, in which YouTube came under fire for its practices around automatic content moderation. Jacob J. Hutt, a fellow at the ACLU’s Speech, Privacy, and Technology Project, concluded that YouTube’s technical solution to what is essentially a human problem was insufficient at best, solutionist at worst.

YouTube’s new report, while an important step toward greater transparency, doesn’t resolve those concerns. First, while it assures that a human reviews content flagged by artificial intelligence , it neither describes the standards for this review process nor reveals how frequently human reviewers reject the machine’s initial flag. This is especially concerning for content flagged as “violent extremist content.” In the last quarter of 2017, a staggering 98 percent of content removed for reflecting violent extremism was flagged by machine, which raises the concern that YouTube may be relying almost exclusively on automated tools to flag content in the first instance.Footnote 51

Hutt continued,

YouTube’s transparency report raises other questions about the role of machine learning in content takedowns…. Under what circumstances does YouTube’s machine-learning algorithm automatically remove videos flagged as potentially inappropriate? And how many videos have been removed without a human ever having reviewed them? … If machines are learning from human decisions, how are the companies ensuring that the machines do not reproduce, or even exacerbate, human biases?Footnote 52

Hutt’s argument against over-engineering YouTube’s problem of violent and extremist content draws attention to the one-dimensional approach tech companies often employ to police their platforms and rectify their misconduct. Implementing a technological solution may indeed be necessary, but it should be both preceded and followed by a comprehensive evaluation of the factors contributing to the problem that could be addressed with policy or human input.
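To make concrete what that kind of evaluation could report, here is a minimal Python sketch of the two transparency metrics Hutt asks for, computed from a hypothetical log of moderation decisions. The Flag schema and its field names are assumptions for illustration; no platform publishes data in this form, and the sketch assumes the log contains at least one machine flag and one removal.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One piece of flagged content (hypothetical schema)."""
    flagged_by: str       # "machine" or "human"
    human_reviewed: bool  # did a person look before the final action?
    removed: bool         # was the content ultimately taken down?

def transparency_metrics(flags: list) -> dict:
    """Compute the two disclosures Hutt finds missing."""
    machine = [f for f in flags if f.flagged_by == "machine"]
    removed = [f for f in flags if f.removed]
    return {
        # Share of removals that began as an automated flag;
        # YouTube reported ~98% for violent extremism in late 2017.
        "machine_share_of_removals":
            sum(f.flagged_by == "machine" for f in removed) / len(removed),
        # How often human reviewers overruled the machine's flag --
        # the rate Hutt notes platforms do not reveal.
        "human_override_rate":
            sum(f.human_reviewed and not f.removed for f in machine) / len(machine),
    }
```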

This example illustrates not only the difficulty of self-regulation, but also the unlikelihood that companies will prioritize morally right alternatives over and above their economic interests. The vast majority of efforts to police social media platforms across a range of issues—including everything from instructions for self-harm and suicide, to Holocaust denial, white supremacy channels, and anti-Semitic content—amount to little more than a distraction. Professor and author John Naughton of the Open University in London has argued that the fundamental issue preventing platforms from acting responsibly “is that social media platforms cannot solve the societal problems they have created—because, ultimately, doing so will hurt their revenues and growth.”

This is the unpalatable truth they are all squirming to avoid. And in doing so they’re really just confirming HL Mencken’s observation about the impossibility of getting someone to understand a proposition if his income depends on not understanding it. It’s not that the companies don’t get it, just that they cannot afford to admit that they do.Footnote 53

Naughton cites YouTube’s misguided attempt to mitigate conspiracy theory videos on its platform by showing factual information alongside them, which CEO Susan Wojcicki indicated would be sourced from Wikipedia. A conspiracy video about flat-earth theories, for example, might be paired with information from third-party sources about the moon landing or a space station. YouTube’s proposed technical solution to the cultural problem of misinformation leads Naughton to conclude one of two possibilities: either Wojcicki and her colleagues do not understand conspiracy theories and the “current crisis of disinformation and computational propaganda” on the internet, or they understand both perfectly well but are unwilling to admit the scale or severity of the problem if it means inhibiting the company’s growth or revenue.

Another well-documented instance of willful blindness is Facebook’s decision to ignore the threat of bad actors on its platform. In 2011 and 2012, Sandy Parakilas led the team at Facebook tasked with overseeing policy and privacy issues for the site’s developer platform. Four years before Brexit and the U.S. election debacle, Parakilas warned Facebook’s executives of the risk of foreign interference on the platform.

[I]n mid-2012, I drew up a map of data vulnerabilities facing the company and its users. I included a list of bad actors who could abuse Facebook’s data for nefarious ends, and included foreign governments as one possible category. I shared the document with senior executives, but the company didn’t prioritize building features to solve the problem. As someone working on user protection, it was difficult to get any engineering resources assigned to build or even maintain critical features, while the growth and ads teams were showered with engineers. Those teams were working on the things the company cared about: getting more users and making more money.

Parakilas notes that he was not the only person to raise questions about misuse of the platform.

During the 2016 election, early Facebook investor Roger McNamee presented evidence of malicious activity on the company’s platform to both Mark Zuckerberg and Sheryl Sandberg. Again, the company did nothing. After the election it was also widely reported that fake news , much of it from Russia , had been a significant problem, and that Russian agents had been involved in various schemes to influence the outcome. Despite these warnings, it took at least six months after the election for anyone to investigate deeply enough to uncover Russian propaganda efforts, and ten months for the company to admit that half of the US population had seen propaganda on its platform designed to interfere in our democracy. That response is totally unacceptable given the level of risk to society.Footnote 54

Parakilas’s account illustrates the dilemma companies face when their financial priorities come into conflict with social responsibility. At some point in the industry’s past, a responsibility to users may have trumped financial incentives; today, however, values appear not only to have taken a backseat to profit, but to have been relegated to a different vehicle entirely.

Some companies seem genuinely concerned with fighting the unintended impacts their products and services have contributed to (others, not so much). One proposal that has been floated and, in several cases, implemented has been the addition of Chief Ethics and Culture Officers, as well as ethical oversight boards. Shannon Vallor was recently appointed as a consulting ethicist at Google Cloud; in 2018, Uber hired a Chief Compliance and Ethics Officer, Scott Schools; and, in late 2018, Salesforce hired Paula Goldman as its first Chief Ethics and Humane Use Officer. Microsoft set up an internal ethics board in 2016, as did Google, in order to oversee its AI subsidiary, DeepMind (very little is known about the current state of the DeepMind oversight committee). A separate group, the Advanced Technology External Advisory Council, which was launched in 2019 to oversee Google’s AI efforts more broadly, was shut down after less than two weeks. Such appointments and initiatives are a step in the right direction, and any company making an attempt to improve compliance and ethics should be applauded for the effort. Anna Lauren Hoffman suggests, however, that the well-meaning act of establishing these positions will never sufficiently address the complex moral issues tech companies face.

[O]ne individual (or team or council or department) is not a panacea for all possible ethical problems…. The solution is not to corporatize ethics internally—it’s to bring greater external pressure and accountability. Rather than position the problem as one of “bringing” ethics to companies like Facebook via a high-powered, executive hire, we should position it as challenging the structures that prevent already existing collaborations and ethically sound ideas from having a transformative effect.Footnote 55

The greatest ethicist on earth, or a board of the smartest and most well-meaning people, would ultimately do very little to combat the tsunami of ethical issues tech companies face. One voice, or a handful of voices, particularly when they operate internally, will not be able to change the moral direction of companies like Facebook and Google if those voices are at odds with the financial interests of the company.

A final problem that precludes the tech industry’s ability to effectively police itself is the set of behavioral qualities and characteristics that dominate the tech landscape, which collectively make it extremely unlikely Silicon Valley will prove capable of course-correcting on its own. Journalist Stephen Johnson cites the Audre Lorde maxim that “the master’s tools will never dismantle the master’s house,” noting that we will not fix the problems of technology with the same thinking that created them. Instead, Johnson suggests, we will “need forces outside the domain of software and servers to break up cartels with this much power.”Footnote 56 Vivek Wadhwa, a professor at Carnegie Mellon’s School of Engineering and author of Your Happiness Was Hacked, argues that successfully “tackling today’s biggest social and technological challenges requires the ability to think critically about their human context,”Footnote 57 rather than simply engineering solutions.

Experts have suggested we might look to philosophers, ethicists, and academics in the humanities to help rebalance the tech industry’s ethics and correct its copious errors in judgment. AI safety researchers Geoffrey Irving and Amanda Askell at OpenAI argue that aligning technology with human values will be paramount in ensuring future technologies serve rather than undermine human progress. Meeting this need, and resolving the “many uncertainties related to the psychology of human rationality, emotion, and biases” embedded in tech’s products and services, they explain, will require extensive and enduring collaborations between social scientists and technologists.Footnote 58 Richard Freed, author of Wired Child, suggests that psychologists, in particular, are uniquely positioned to understand human nature, ethics, and the longer-term implications of the industry’s practices.Footnote 59

Power to the People

As we begin to re-envision a future unmarred by the corrupting influences of targeted advertising, technocapitalism, and outdated values, it’s worth mentioning—clearly and unequivocally—that we can. The power of a few billionaires is nothing compared to the power of billions of people, and the idea that a handful of obscenely rich men control the future is both laughable and patently false. They know this, and so do we. The system as it stands is unsustainable and will soon change; the only question is what shape that change will take and how it will occur. The first line of defense in guarding the future against the often morally questionable behaviors of the tech industry is the very people upon whose data it has built its fortune. Recognizing the power we hold as consumers, and the ways in which we can stand up to the unprincipled behaviors that emerge from Silicon Valley, is our most immediate source of influence.

When companies promote misinformation, disregard privacy, and neglect mental health, it is our responsibility to express our disapproval, not only in principle, but also in practice. Every time we visit a website, platform, or app, we are communicating to the executives and stockholders of that company that its services are a valuable use of our time. John Montgomery, executive vice president for brand safety at GroupM, and Brian Wieser, a media analyst at Pivotal Research, explain that the number one means of immobilizing companies like Facebook is to diminish their user base.Footnote 60 The most direct way to do that is to delete, deactivate, or simply not use services like Facebook until they meet certain ethical standards. As Taipei-based tech writer and former Apple and Microsoft engineer Ben Thompson has argued, the best place to look for weakness in any tech company “is not in the supplier base or distribution or even regulation: it is with the end users.”Footnote 61 When we continue to engage with companies that have abused our trust, we condone the mishandling of private information, the disruption of our democracy, and the knowing assault on our wellbeing.

In a 2017 talk at Stanford’s Graduate School of Business, former Facebook executive Chamath Palihapitiya discussed the consequences of feeding data-driven social media platforms like Facebook.

If you feed the beast, that beast will destroy you; if you push back on it, we have a chance to control it and rein it in… it is a point in time where people need to hard break from some of these tools… The things that you rely on the short-term—dopamine driven feedback loops that we have created—are destroying how society works. No civil discourse, no cooperation, misinformation, mis-truth. And it’s not an American problem—this is not about Russian ads—this is a global problem…. You don’t realize it, but you are being programmed. It was unintentional, but now you got to decide how much you’re willing to give up, how much of your intellectual independence [you are willing to sacrifice].Footnote 62

If you live in the U.S. or Europe, your decision to disengage from companies whose behaviors or business practices you object to carries disproportionate weight, because Western eyeballs generate far more advertising dollars.

Part of the business concern over the current scandal is that Facebook would lose its most valuable users if there’s an exodus of Western users. The global average revenue per user is around $6 per quarter, but for users based in North America, it’s nearly $27 per quarter. In the developing world, where many of Facebook’s newer users are found, Facebook generates significantly less revenue: Outside of Europe, Asia, and North America, the average revenue per user is just $2 per quarter.Footnote 63

Until governments are able to hold tech companies to account for their actions, it is up to users to say what they will and will not stand for. Consciously using platforms and products whose behaviors and impacts align with our values is the least we can do to ensure that, as they build their presence across the globe, companies learn from their mistakes and recognize they cannot sacrifice ethics without also sacrificing their user base.

Agents of Change

A second group that wields immense power and has the capacity to shift the direction of the industry’s values is its own workforce. Employees at top tech companies have increasingly vocalized their concerns, disappointment, and even outrage at the morally questionable actions of their employers, which has led, in many cases, to measurable and immediate change. In an article titled “Inside Google’s Civil War,” journalist Beth Kowitt observes that “[n]o one is closer to tech’s growing might, as well as its ethical quandaries, than the employees who help create it.”Footnote 64 Kowitt’s thoughtful and revealing article explores a growing defiance among Silicon Valley employees who refuse to be complicit or sit idly by while their companies engage in morally questionable behavior, ranging from sexual harassment and workers’ rights violations to projects that threaten democracy and human rights.

One key area of discontent has centered on the treatment and working conditions of tech staff themselves. Exacerbated by the disintegration of unions, this discontent has produced resounding calls for change from employees at companies such as Amazon, Uber, and Tesla, drawing attention to everything from working conditions and safety concerns to transparency and fair pay. A spate of employee complaints against Amazon, for example, garnered international media attention, a flurry of undercover reporting and investigations, and calls from top public officials to increase pay to a living wage.

Employees have also been increasingly outspoken about the morally questionable uses of the products their companies design, the projects in which they involve themselves, and the broader ethical decisions executives make. In 2018, Microsoft employees protested the company’s $19.4 million contract with U.S. Immigration and Customs Enforcement (ICE), which was using the company’s deep learning facial recognition and identification software to detain individuals at the U.S. border. The U.S. government’s increased reliance on ICE detention centers and the inhumane treatment of migrants in custody have resulted in calls for reform and increased oversight of the 200-plus detention centers across the country. Between 2016 and 2018 alone, 22 immigrants died in ICE custody.Footnote 65 The letter from employees to Microsoft’s CEO Satya Nadella openly questioned Microsoft’s involvement with ICE and its decision to put company profits over human rights.

We believe that Microsoft must take an ethical stand, and put children and families above profits. Therefore, we ask that Microsoft cancel its contracts with US Immigration and Customs Enforcement (ICE) immediately, including contracts with clients who support ICE. We also call on Microsoft to draft, publicize and enforce a clear policy stating that neither Microsoft nor its contractors will work with clients who violate international human rights law. We were dismayed to learn that Microsoft has a standing $19.4M contract with ICE. In a clear abdication of ethical responsibility, Microsoft went as far as boasting that its services “support the core [ICE] agency functions” and enable ICE agents to “process data on edge devices” and “utilize deep learning capabilities to accelerate facial recognition and identification.” These are powerful capabilities, in the hands of an agency that has shown repeated willingness to enact inhumane and cruel policies. In response to questions, Brad Smith published a statement saying that Microsoft is “not aware of Azure products or services being used for the purpose of separating families.” This does not go far enough. We are providing the technical undergirding in support of an agency that is actively enforcing this inhumane policy. We request that Microsoft cancel its contracts with ICE, and with other clients who directly enable ICE. As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.Footnote 66

The letter ends with a request that the company cancel the existing government contract immediately, draft a policy stating that Microsoft will not be affiliated “with clients who violate international human rights law,” and commit to transparency regarding any contracts the company enters into with foreign or domestic governments. In June 2018, Amazon CEO Jeff Bezos received similar requests from shareholders, consumers, and over 40 advocacy groups regarding the use of Amazon’s facial recognition software, Rekognition. Critics, such as the ACLU, called the product “perhaps the most dangerous surveillance technology ever developed,”Footnote 67 while others expressed fear that the software, which has been marketed to police and government offices as a surveillance tool, could be used to disproportionately target immigrants and people of color.Footnote 68

When it comes to the ethical trajectory of tech companies, Google employees have been some of the most vocal. Protests, public letters, and leaked memos have attracted considerable attention as employees demand explanation, transparency, and change, both in regard to internal behaviors and corporate projects. One of the most controversial projects at the company was Project Maven, a contract with the U.S. Department of Defense under which Google’s artificial intelligence was used for “algorithmic warfare” to improve drone targeting.Footnote 69 Once employees became aware of the contract, over 3,000 staff signed a letter to CEO Sundar Pichai expressing disapproval of the project and demanding the contract be cancelled. The letter highlights both the potential for reputational damage and the discrepancy between Google’s actions and its stated values.

We cannot outsource the moral responsibility of our technologies to third parties. Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance—and potentially lethal outcomes—is not acceptable. Recognizing Google’s moral and ethical responsibility, and the threat to Google’s reputation, we request that you: 1. Cancel this project immediately 2. Draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.Footnote 70

The following month, the International Committee for Robot Arms Control sent a follow-up letter signed by academics and scholars, including Google co-founder Larry Page’s PhD advisor Terry Winograd, in support of ending Project Maven.Footnote 71 By June, Pichai announced that Google would not renew the contract when it expired, but made clear the company would continue to work “with governments and the military in many other areas.”Footnote 72

The trend of tech inserting itself into defense projects is an uncomfortable turn for many employees, who signed up to work at these companies for a multitude of reasons that likely did not include improving surveillance systems or the “lethality” and “readiness” of war tools.Footnote 73 Even for those who do not work directly on the projects in question, journalists Scott Shane and Daisuke Wakabayashi point out that the budding Silicon Valley–Department of Defense relationship “underscor[es] the difficulty of separating software, cloud and related services from the actual business of war.”Footnote 74 For Google employees in particular, who joined a company that explicitly claimed not to be evil, reconciling these PR promises with the company’s actions, and with their own ethics, presents a difficult moral dilemma that leaves many disillusioned with their organization’s priorities.

When personal and corporate ethics prove too incompatible, resignation is a common form of escape. While some high-profile exits are shrouded behind PR stories of new ventures or of execs getting back to their coding roots, others are more conspicuous. Former Facebook CSO Alex Stamos, who clashed with Mark Zuckerberg and Sheryl Sandberg over Russian interference on the platform, left the company after the Cambridge Analytica scandal to become a professor at Stanford. WhatsApp founders Jan Koum and Brian Acton, both critics of digital advertising, also left Facebook over differences of opinion about encryption and ads, sacrificing stock worth $1.3 billion.Footnote 75 The founders of Instagram, Kevin Systrom and Mike Krieger, recently left Facebook as well, as did Chief Product Officer Chris Cox and WhatsApp Vice President Chris Daniels. While the specific reasons for departures vary, the commonality among many who choose to leave appears to be an inability to work for a company that centralizes power and compromises morality for profit.

In early 2019, site reliability engineer Liz Fong-Jones quit her job at Google, citing patterns of behavior that she believed impinged on diversity, human rights, and equality. During her 11 years at the company, Fong-Jones stood up to Google’s management on a number of issues she believed the company was getting wrong, including growth hacking, harassment, and Google’s work in China. Early in her career, Fong-Jones was instrumental in the decision to overturn a policy that required people to use their real names on Google+, which she and others recognized was a risk to vulnerable users, such as teachers, therapists, and members of the LGBTQ community, who might need anonymity for safety reasons. Though Fong-Jones and her colleagues eventually prevailed, subsequent attempts to change the culture proved “less effective as leadership repeatedly stonewall[ed] employees who privately raise[d] concerns.”Footnote 76 After over a decade at the company, Fong-Jones resigned, saying she wanted to devote her career to “creating a more just world rather than exacerbating inequalities” and would be moving to a company with “a more diverse and fair working environment and a firm commitment to ethical computing.”

Central to this decision was Fong-Jones’s concern about the priorities and decisions at Google, particularly those related to the strategic and moral directions of the company.

I have grave concerns about how strategic decisions are made at Google today, and who is missing a seat at the bargaining table. Google bears the responsibility of being one of the most influential companies in the world, but it has misused its power to place profits above the well-being of people. Executives seem to have forgotten the ethos of the company’s earliest employees — “don’t be evil” — and ethical stances, such as pulling out of China over censorship concerns in 2010, have been supplanted by shadowy efforts to appease the country’s government at the expense of human rights.Footnote 77

Fong-Jones’s article covers some of the most disturbing incidents at the company during her tenure, including Google’s failure “to implement an ethics review process for government contracts that would automate surveillance and targeting of civilians in the Middle East” and the company’s foray into the Chinese search market. (Although Google pulled out of the Chinese market in 2010 over concerns about censorship and security, it is rumored to be building a censored version of its search engine for China, nicknamed Project Dragonfly, which would reportedly block any information related to democracy, human rights, religion, and peaceful protests.) In addition to Google’s work in China and the Middle East, Fong-Jones cites a breakdown of internal dialogue and a sharp increase in internal harassment of the company’s most marginalized and vulnerable employees, which began as “trolling and rapidly escalated to leaks of the names, photos, and posts of LGBT+ employees to white supremacist sites.” When employees complained or raised concerns, Fong-Jones explains, they were “ignored, stonewalled, or even punished for doing so.”

The discriminatory issues Fong-Jones raises have also come to light in public demonstrations and lawsuits that highlight Google’s tolerance of harassment. In 2018, the New York Times published an investigation detailing how the company had protected multiple men accused of sexual misconduct, including Andy Rubin, creator of the company’s Android mobile software. Rubin was reportedly asked to resign in 2014 after multiple allegations of misconduct against him had been filed; when he finally left, he was given an exit package of $90 million. In 2016, Google paid Amit Singhal upwards of $45 million when he resigned after accusations surfaced that he had groped a fellow employee.Footnote 78 Following the revelations of sexual misconduct and multi-million-dollar exit packages, James Martin, one of Google’s shareholders, filed a lawsuit in early 2019, charging the company with “breach of fiduciary duty, unjust enrichment, abuse of power, and corporate waste.”Footnote 79 Fong-Jones said the payouts “utterly shattered employees’ trust and goodwill in management” and led over 20,000 Google employees (about a fifth of the workforce) to walk out in protest in late 2018.

Employees had been complaining about pay inequity, mistreatment of contractors, and other forms of discrimination for years. To see how the company handled an executive harassment case revealed the utter lack of scruples among management. Employees walked out en masse, holding signs reading: “I reported, he got promoted,” and “Will leave for $90M, no harassment needed.” More than 20 percent of full-time employees joined the protest along with a large number of contractors who faced even greater risks of retaliation from their superiors.

Dr. Cameron Sepah has argued that a company’s culture is defined by whom it hires, fires, and promotes.Footnote 80 By offering excessive payouts to those accused of discrimination and harassment, companies may, perhaps unintentionally, send a confusing cultural message to their staff: that such behavior is not only tolerated but financially rewarded.

In addition to protests and employee-led accountability movements, Silicon Valley has also seen the rise of tech humanism, led by the Center for Humane Technology and its many allies. The group, composed primarily of former industry employees, has taken on the design mistakes and ethical transgressions of the industry. In a 2018 article, Ben Tarnoff and Moira Weigel describe the movement’s focus on addressing the social problems that have arisen from unethical technology design, including distraction, disconnection, poor mental health, and the erosion of information and democracy. As with other employee-led movements, Tarnoff and Weigel report, Silicon Valley has taken notice of the charges leveled against it by tech humanists, and industry leaders “are starting to speak its idiom.” Snap CEO Evan Spiegel has “warned about social media’s role in encouraging ‘mindless scrambles for friends or unworthy distractions,’” Twitter’s Jack Dorsey “recently claimed he wants to improve the platform’s ‘conversational health,’” and Mark Zuckerberg has co-opted the Center for Humane Technology’s language that engaging with digital devices should be “time well spent.”Footnote 81

As tech companies continue to test the waters (and profitability) of veering into the muddy territory of human rights violations, surveillance, and war, scrutiny from employees at all levels will be vital in holding them to account. Thankfully, there appears to be a healthy skepticism within Silicon Valley’s workforce, which continues to grapple with and, when necessary, actively resist the morally questionable corporate decisions and priorities of its employers.

Winter Regulation Is Coming

I’ve never been a big fan of rules. That said, I appreciate the ones that serve an obvious, constructive purpose, hold society together, and generally keep us from doing vile things to each other. Rules become particularly useful, I find, when a given situation cannot be controlled by those involved in or responsible for its outcome. Such is the case in Silicon Valley, where an inability to self-regulate or maintain acceptable ethical standards has ensured that, like it or not, regulation is coming for the tech industry.

If social responsibility includes both consumers and employees standing up and demanding better from big tech, regulation sits squarely in the government’s realm of responsibility. That prospect might worry anyone who saw the 2018 congressional hearings with Facebook, Twitter, and Google execs, in which a number of elected officials displayed a concerning lack of awareness about the basic ins and outs of platform governance, security, and the implications of an advertising-centered business model. The first regulatory problem, according to Devon Maloney, is not a matter of regulation at all, but of ensuring that elected officials understand the implications and issues associated with big tech.

Within a few decades, our elected officials will all be from a generation that understands a lot more about technology than this one. Whether those representatives will understand the ins and outs of our digital world remains to be seen; it’s possible many of them will remain willfully in the dark. But wouldn’t you rather vote for someone who took the time to understand the threats to their constituents’ well-being, and to democracy itself, however complicated those threats may be?Footnote 82

The complexity and range of the issues Maloney refers to, which include changes to employment, the economy, health, cognition, security, existential threats, privacy, and human rights, will shape our future, for better or worse. Journalist Amy Zegart and U.S. Air Force Lieutenant Colonel Kevin Childs have deemed closing the government-tech divide a “national-security imperative” and have argued that the gulf between the two could prove catastrophic across a number of ethical and security fronts.Footnote 83 Should we fail to elect politicians who understand these problems, who are willing to proactively address them, and who can envision intelligent solutions, we risk seating legislators who are out of touch with some of the most urgent problems in our world.

Tim Berners-Lee’s web began as, and has remained, borderless, without relevant social or legal frameworks to direct our behavior. The speed at which the tech industry has grown has allowed it to remain largely lawless, staying ahead of any regulation that might have meaningfully addressed some of its more nefarious actions. The pace of the industry, combined with the myth of the well-meaning, prosocial company out to save the world, has repeatedly allowed tech giants to evade regulation despite a growing number of offences. Facebook still contends, for example, that it is a platform and not a media company, a distinction that shields it from responsibility for the content it hosts. For years, big tech was able to convince both an adoring public and (a largely digitally confused) government that its interests were different from those of other for-profit corporations. Roger McNamee explains that,

[t]hanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.Footnote 84

McNamee, a mentor of Zuckerberg and an early investor in Facebook, contends that companies like “Facebook and Google are now so large that traditional tools of regulation may no longer be effective,” citing a lack of relevant legal frameworks and of fines commensurate with the scale of abuse.Footnote 85 McNamee suggests that any lasting and effective change must involve a shift in both the approach and the strategies of legislation.

Like any comprehensive and successful change program, the legal arm of responsibility will be a cocktail of reactive and proactive approaches, including investigations, legislation, and frameworks that address the liability, abuse, and responsibility of tech companies and their leadership. Investigations are, by their nature, reactive, and offer a means of systematic inquiry into actions that may have breached existing laws or standards of conduct. There are simply too many current and past investigations into the conduct of big tech corporations to take full inventory; such a list would make our heads spin and put me well over my allotted word count. Still, it is worth briefly surveying the types of lawsuits and investigations that have been brought against big tech, as well as where they originated and how they might inform future policy decisions. Some of the most recent and significant instances include:

  • A US lawsuit filed against Google for illegally tracking its customers’ movements, even when users had enabled a privacy setting to prevent tracking.Footnote 86

  • A class-action lawsuit against Facebook for logging users’ text messages and phone calls without their consent.Footnote 87

  • A UK investigation, stemming from the Cambridge Analytica scandal, into the use of data analytics in political campaigns, which found that Facebook had breached the Data Protection Act, as users were not made aware that their data could be utilized and shared with political parties. This resulted in a £500,000 fine against the company, the maximum allowed for violating the 1998 Data Protection Act.

  • Google is currently being sued for £3.2bn in the UK for illegally tracking and collating the personal information of 4.4 million iPhone users.

  • In 2012, Google was fined $22.5m in the US by the FTC for similar practices around user data.

  • In 2019, EU regulators fined Google €1.5bn for blocking rival advertisers and stifling competition.

  • The EU fined Facebook £94m for providing misleading information about its technical capacity to share user data during its 2014 acquisition of WhatsApp.

  • And speaking of WhatsApp: in 2016, the EU asked the company to stop sharing data with its parent company, Facebook. In 2017, WhatsApp was fined €3 million by Italy’s antitrust regulator, AGCM. In 2018, the U.K.’s Information Commissioner’s Office determined that WhatsApp had “not identified a lawful basis of processing for any such sharing of personal data” and that “if they had shared the data, they would have been in contravention of the first and second data protection principles of the Data Protection Act.”Footnote 88

  • Google was fined a record €4.34bn, the largest penalty ever handed down by the European Commission, for anti-competitive practices that included abusing the market dominance of its Android operating system and squashing competition.Footnote 89

  • The Federal Trade Commission is expected to levy an approximately $5 billion fine against Facebook for violating user privacy, which would be the largest ever issued against a tech company by the FTC.Footnote 90 The FTC is also considering whether to hold Zuckerberg personally accountable for the company’s privacy failures.

Several conclusions can be drawn from the above information. First, with the exception of the European Commission’s historic €4.34bn fine, the financial punishments against big tech are not commensurate with the scale and illegality of its actions. A fine of £500,000 to atone for the chaos and global political ramifications of the Cambridge Analytica scandal is preposterous and in no way serves as a deterrent for a company like Facebook, which takes in the same amount in revenue every five-and-a-half minutes.Footnote 91 Both Brian Barrett and McNamee have argued that retroactive fines, while well-intentioned, are simply ineffective.Footnote 92 Footnote 93 New laws, steeper fines, and harsher punishments are rumored to be on the horizon as legislators appear poised to take on the behaviors of big tech. In the weeks following the live-streaming of the Christchurch massacre on Facebook, for example, the EU approved a proposal to fine tech companies that fail to remove terrorist content from their platforms within one hour up to 4% of their total global turnover.

A second lesson we can take away from the number, scale, and financial penalties of the above investigations is the vast difference between the countries that levy them. Both the EU and individual European countries have implemented far more aggressive regulation than the United States. In Germany, for example, legislators have little tolerance for propaganda, consumer privacy violations, and assaults on democratic processes, and have accordingly enacted some of the strongest local regulations around hate speech and misinformation. Once implemented, Germany’s standards led to a 100% increase in Facebook’s removal of hate speech.Footnote 94 Other countries, such as Finland, rely on a “strong public education system and a coordinated government response… to stave off Russia’s propaganda.”Footnote 95 The U.S., by comparison, has more lenient laws when it comes to policing tech giants, including section 230 of the Communications Decency Act, which protects platforms from liability for the content on their sites.Footnote 96 (Nearly all experts in the U.S. agree that removing or amending section 230 is necessary to ensure platforms bear some form of responsibility for what occurs on their sites.) Barrett suggests that efforts in the EU and Germany “offer something like an outline, if not an outright blueprint” for the U.S. as it moves to increase legislative action.Footnote 97

A final inference we can draw from the above fines and investigations is that the current laws governing the business practices of tech corporations are not fit for purpose. Barrett points out that while the “FTC has a modicum of authority, and has used it when companies grossly overreach—as it did against Facebook in 2011, when the company failed to keep its promises regarding how it treated their data,” the agency “can only work with the legislative tools it’s given.”Footnote 98 Based on previous and ongoing investigations, it stands to reason that new, more specific laws are necessary, particularly around data privacy, advertising, hate speech, harassment, and anti-competitive practices. Forward-thinking lawmakers should also consider policies and mitigating strategies to combat developing problems such as misinformation, and to mandate corporate transparency and ethical standards for developing AI.

New Laws: Coming Soon to a Platform Near You

While investigations have been fairly plentiful, new laws and policies limiting the power and conduct of tech giants have been slow to materialize, particularly in the U.S. Scott Galloway points out that Americans have historically tended to harbor an aversion to regulation.Footnote 99 When it comes to the tech industry, however, there appears to be a growing appetite among Americans for some semblance of law and order. Olivia Solon reports that 83% of people polled in the Tech Media Telecom Pulse Survey support more penalties and laws around data privacy, while 84% say they believe tech companies “should be legally responsible for the content they carry on their systems.”Footnote 100 Because Silicon Valley companies have failed so spectacularly at self-regulation, Brian Barrett notes, “regulation seems not only plausible but imminent” as a means to combat the growing number of data breaches and repeated moral lapses from “all corners of Silicon Valley.”Footnote 101

In the coming decades, a range of new laws, policies, and frameworks will be needed; which issues we prioritize and how we go about drafting and enforcing regulations, however, are yet to be determined. According to Paul Laudicina, chairman of the Global Business Policy Council, forthcoming laws and policies will center on issues of digital content, user privacy, and antitrust legislation.Footnote 102 Stanford PhD candidate Melody Guan has argued that a natural place to start is with data privacy: while big tech prolifically abuses its vast troves of user data, little has been done, particularly in the U.S., to combat the harvesting and monetization of that information.

The poor regard for personal protection and rights in the current unregulated state of affairs shows us that we cannot simply rely on the goodwill of tech companies. Indeed, the nature of corporations themselves may expose them to lawsuits if they fail to prioritize the interests of their shareholders over debatable moral concerns. We need a citizen-centric government to shepherd the ethical and fair use of technology.Footnote 103

In 2018, the EU passed the General Data Protection Regulation (GDPR), which regulates the collection, storage, and use of personal data throughout the 28 member states of the EU. A 2019 report by the U.K.’s Information Commissioner’s Office (ICO) suggested that a significant portion of the information used for targeted advertising relies on sensitive, or “special category,” data, much of which is collected and used without consent. The report suggests that, at least within the EU, less “mature” segments of the adtech industry may be in violation of various elements of GDPR, which prohibits profiling users without consent and requires data to be collected transparently, stored securely, and processed on a lawful basis.Footnote 104

In 2020, California will implement the California Consumer Privacy Act (CCPA), the most comprehensive data privacy law in the U.S., modeled on GDPR. Though similar laws remain scarce in the U.S., some meaningful policies have been introduced, including various cybersecurity bills, revenge porn prevention laws, and the Honest Ads Act, which aims to regulate U.S. political advertising online much as political ads must adhere to specific rules on TV, radio, and in print media.

A second likely area of policy development is antitrust reform; a third is more sensible taxation. Questions around fair competition in tech have already begun to drop, like big fat legislative bombs, onto the likes of Google, Facebook, and Amazon. While Zuckerberg has carefully deflected questions about Facebook’s status as a monopoly and has, thus far, escaped legislative action, in 2019 the U.S. Justice Department reportedly began preparing an antitrust investigation of Google,Footnote 105 while the FTC is said to have increased its anticompetitive oversight of Amazon.Footnote 106 In 2017, the EU ruled that Google had abused its powers by “unfairly favouring its own services and products over others.”Footnote 107 In an article for MIT Technology Review, Mariana Mazzucato explains that the difficulty of regulating tech companies as monopolies comes back to a perception that the industry is somehow distinct from other corporations, which has allowed tech companies, particularly those providing free services, to sidestep questions about competition and consumer harm.

Historically, industries naturally prone to monopoly—like railways and water—have been heavily regulated to protect the public against abuses of corporate power such as price gouging. But monopolistic online platforms remain largely unregulated, which means the firms that are first to establish market control can reap extraordinary rewards.Footnote 108

Central to the question of how to impose antitrust regulation on “free” services is the historical association of antitrust with price setting. Silicon Valley Congressman Ro Khanna has suggested that a new understanding of digital monopolies and antitrust legislation must be adopted, one that frames the antitrust argument in terms of the broader impacts of monopolies. This reframing, he argues, might account for the suppression of innovation, a more nuanced definition of customer harm, and the effects of tech monopolies on wages and job loss.Footnote 109 The suppression of innovation can be seen clearly in the acquisition patterns of big tech companies. Between 2007 and 2019, Google acquired over 270 companies, 171 of which were competitive acquisitions. In the same timeframe, Facebook acquired 92 companies, 46 of which were competitors, almost all of which were purchased and then immediately shut down.Footnote 110

As tech corporations operate globally, so too does their money flow freely around the world, ending up increasingly in places like Ireland, which has an extremely low corporate tax rate, and Bermuda, which has a corporate tax rate of zero. The BBC reports that the latter is where Google keeps all of its non-US profits. Apple, too, keeps “their profits in the parts of the world that charge the least—if any—tax.”Footnote 111 Even in the U.S., where most tech companies are based, corporate tax rates can leave the average person both incensed and confused. In 2018, for the second year in a row, Amazon paid zero federal taxes in the U.S., despite being valued at close to a trillion dollars and generating profits of $5.6 billion and $11.2 billion in 2017 and 2018, respectively.Footnote 112 The Institute on Taxation and Economic Policy reports that Netflix also saw its largest-ever profits in 2018, in excess of $800 million, and similarly paid no federal income tax in the U.S. Mariana Mazzucato has argued that the low tax rates tech companies enjoy are “perverse,” particularly “given that their success was built on technologies funded and developed by high-risk public investments: if anything, companies that owe their fortunes to taxpayer-funded investments should be repaying the taxpayer, not seeking tax breaks.”Footnote 113

Regulating tech companies begins with a better understanding of their business models, social impacts, and corresponding responsibilities. Jessi Hempel has observed that because new businesses “powered by the rise of the internet… operate differently from those in more traditional industries, they must be regulated differently.”Footnote 114 Congress and lawmakers, however, have not understood the impacts of the tech industry thoroughly enough to regulate it effectively. This trend appears, thankfully, to be changing, as Democratic presidential candidates such as Elizabeth Warren and Amy Klobuchar draw attention to the need for regulation of big tech. Ensuring that everyone involved in policy decisions is educated about the distinct regulatory demands and impending impacts of the digital economy is paramount if policies are to be formulated in a way that benefits society at large and addresses both the short- and long-term impacts of technology.

[We] need the government to assume its rightful role in protecting personal privacy and rights in the AI era. The government needs to step in and use its resources and powers of legislation and coordination to provide the structure for industry and research to develop and utilize AI without compromising civil rights and liberties, and do so soon. What is at issue is unprecedented assault to personal data and behavior; what is at stake is personal safety, privacy, dignity, autonomy, and democracy.Footnote 115

Relevant and meaningful legislation will ultimately be the result of more awareness, knowledge, and wisdom on the part of both consumers and lawmakers, alongside “smart, well-designed technology”Footnote 116 on the part of tech companies.

You Say You Want a Revolution

The emergence of “smart, well-designed technology” will depend on the simultaneous convergence of several crucial changes from within the tech community. These improvements—which include increased awareness, better values, and emotionally intelligent leadership—will challenge the core psychology, and with it the normalized behaviors, of many of the tech industry’s most prominent organizations. While these changes are straightforward to describe, they are by no means simple to achieve. Changing the culture of an organization can take many years; changing the culture of an entire industry is infinitely more difficult. Working to reform the values and psychological norms of the tech industry, however, will ultimately provide the most comprehensive mitigation of Silicon Valley’s most pressing problems. Unless the social values and collective psychology of the tech industry change at a systemic level, the institutions and products it produces will not.