The Human Meta-Organism

  • Mario Alemi
Open Access
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)


This chapter introduces and analyses the human meta-organism. It shows how the introduction of communication through electric signals by Homo sapiens pushed ahead the cognitive abilities of our species: not those of the individual, but those of human society as a whole, which, with the introduction of the Internet, is starting to look more and more like a complex organism itself.

The aggregation of cells into complex organisms is considered a new form of life, although it’s really just the result of the aggregation of other lifeforms. This makes sense though: when cells reached their cognitive limit, intelligence continued to evolve through connection. The cognitive capabilities of complex organisms have little to do with those of a single cell.

We’ve also seen that the above development occurs seamlessly: we go from the distributed collaboration of bacteria up to complex organisms.

Bacterial colonies aren’t really organisms: they don’t appear to have any self-awareness; they cannot clearly distinguish between “themselves” and the external environment. An organism, on the contrary, knows which elements are part of the system (and therefore collaborate in the cognitive processes, and must receive energy) and which aren’t (and can therefore be sources of energy).

Networks of complex organisms (societies), however, are more similar to bacterial colonies than to a new organism. The only possible exception, colonies of insects, sits somewhere between the two: they are formed by extremely simple complex organisms, like ants, whose network creates a superorganism, like the ant colony. But the colony’s cognitive abilities are well below those of many complex organisms.

In this sense we say that the appearance of complex organisms is a revolution in the evolution of life. Complex organisms went on to develop cognitive abilities that no single cell can have. Networks of complex organisms, like those of ants or of social mammals and birds, on the contrary, did not show such a leap in intelligence.

The aim of this chapter is to show how the introduction of communication through electric signals by Homo sapiens pushed the cognitive abilities of our species: not those of the individual, but those of human society as a whole, which, with the introduction of the Internet, is starting to look more and more like a complex organism made of complex organisms.

This, in geological terms, is indeed a historic moment: the emergence of the first meta-organism of complex organisms, the human meta-organism. A new milestone in the evolution of life.

The Evolution of Communication in Homo sapiens

Hippocrates of Kos’ quote “Ὁ βίος βραχύς, ἡ δὲ τέχνη μακρή” was translated into Latin as Vita brevis, ars longa. The word τέχνη (téchne), from which the term technology comes, derives from the Proto-Indo-European root *teks-, initially “to construct putting together” and then “to weave”, from which the Italian word tela (cloth) derives. The origin of the Latin ars is similar: it derives from the Proto-Indo-European *h₂er-, “prepare, put together” (Mallory and Adams 1997). Taking liberties with the quote, one might paraphrase it as “the individual dies young, the network lives long”.

Hippocrates’ quote echoes the problem faced by the Sumerian mathematicians: one life is not enough to collect all the data needed to build a model. The solution adopted by the Sumerians was to introduce a form of language, mathematics, and a communication channel, writing. This, as described in the previous chapter, increased their ability to store and process information as a network of individuals. In short, it created an intelligent system that survived longer than its single components, and even longer than the system itself: while the Sumerians have disappeared, their mathematics lives on.

The invention of mathematics and writing can be compared to the introduction of synapses by neurons. Spoken language on its own, in fact, has some clear limitations, just as communication through the diffusion of chemicals did for early neurons. It’s difficult to imagine a Homo sapiens communicating verbally with 7000 or 20 million people, respectively the average and the highest number of synapses per neuron in our brain.

Homo sapiens, as Dunbar (1992) showed in his study on the relationship between the cortex and the size of communities, can’t connect to more than about 150 other individuals. If mammals, and Homo first and foremost, managed to leverage the network effect despite the small size of their networks, it is also, and especially, thanks to the impressive cognitive abilities of the individuals.

As mentioned above, the brain was originally a processor of information, and evolved into a communicator, in the same way as the neuron was originally a detector of edible material that evolved into a communicator cell.

In practice, social networks of mammals are mainly a multiplier of individual intelligence. They are not a new entity, a real meta-organism, from the word μετά: “after”, “beyond”.

More Communicans than Sapiens

In an information-energy context, it’s this thrust towards more intelligent cognitive networks, systems that can process more information in order to obtain more energy from the environment, that drove the Homo brain to become constantly more powerful for millions of years.

But as mentioned in the previous chapter, it’s a substantial investment for a mammal to keep its brain functioning. Increasing cognitive ability more than Homo sapiens has done would be risky and impractical in energy management terms.

According to Hofman (2014), the Homo sapiens brain could reach its maximum processing power, approximately 50% more than it has today, by increasing its volume by 130%. If so, it would weigh almost 3.5 kg, compared to the 1.5 kg it weighs today.

Leaving aside the question of whether our organism could physiologically sustain a greater brain mass, it’s obvious that 100,000 years ago a brain that was a bit more powerful, but in proportion required a lot more energy, would have been nothing more than a risky liability.

So it was a much better idea to develop communication abilities, creating the necessary specific instruments: doing the same thing as neurons, which created synapses and dendrites in order to communicate more effectively.

Put simply, the first instrument of communication to emerge after language was writing. The written word, at least in the Fertile Crescent, was probably developed to keep accounts in the taxation system: the oldest example of writing is a Sumerian income statement, not a poem (Lerner 2009).

Writing was invented to boost the cognitive abilities of some brains, those of the Sumerian civil servants-mathematicians, so they could create small cognitive networks.

As communication through writing developed, a change began: from storing and processing information as individuals to doing so as a network. This was already evident to some Homo sapiens thousands of years ago. In Plato’s Phaedrus, Socrates exclaimed:

…this discovery of yours [writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust the external written characters and not remember themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

It’s a concept that’s repeated over the centuries, every time a new instrument of communication emerges. In 1492 the German Renaissance polymath Johannes Trithemius published De laude scriptorum manualium (In praise of scribes). Trithemius considers manual writing to be a form of higher learning:

[The writer,] while he is writing on good subjects, is by the very act of writing introduced in a certain measure into the knowledge of the mysteries and greatly illuminated in his innermost soul.

But as Ziolkowski (2011) mentions, “Trithemius himself was no foe of printed books”, and ironically, the treatise has survived till today only as a printed work. The same is true for the words of Socrates, which we can read only because his pupil Plato transcribed them.

If we continue to follow evolution in instruments of communication, the telegraph first and then the telephone were received in the same way: they made it possible for a global society to emerge, but were also attacked.1

At the end of the nineteenth century the American writer C. Harris wrote:

At present our most dangerous pet is electricity – in the telegraph, the street lamp and the telephone … The telephone is the most dangerous of all because it enters into every dwelling. Its interminable network of wires is a perpetual menace to life and property. In its best performance it is only a convenience. It was never a necessity (Harris 1889).

Similar stands were taken against the radio, television and, obviously, against communication through interconnected networks managed by electronic calculators, the Internet, and the subsequent development of a graphical interface for non-professional use, the World Wide Web.

Perhaps, the first Homo who used a form of spoken language to settle controversies in a group was criticised by the older generation too. How much less “humanity” was there in communicating verbally, at a distance, compared to the physical contact of grooming, of reciprocally cleaning each other’s fur? We’ll never know, because the older Hominidae probably conveyed their disapproval with a few grunts of disgust, and that was the end of that.

Socrates, Trithemius and those who today complain about the externalisation of our memory from the brain “to Google” have a point though. Language, writing, and print have resulted in societies able to extract more energy from the environment, to such an extent that we can now sustain a population of several billion Homo, not necessarily any wiser.

As mentioned in Chap. 3, quoting Rousseau’s On the Social Contract, the more society acquires cognitive abilities, the more the individual becomes insignificant in relation to the rest.

The more communicans Homo becomes, the less he is worthy of the name sapiens.

Communications Technologies and Topologies

On the basis of the brief overview of communications technologies in the previous section, one might think there has been a constant improvement in means of communication, as communications channels have constantly increased. But it’s not that simple.

Although economic development policies today focus mostly on “bandwidth”, it’s also, if not above all, the topology of the network that determines the amount of information a network can process.

In this sense, the instruments of communication invented by humans, regardless of the capacity of the channel, i.e. the amount of information that can flow from one element to the next, can’t be considered a constant improvement in society’s information processing capacity.

The C. elegans neural network analysed in Chap. 3 is nothing special in terms of communication bandwidth between neurons, or number of nodes. But as we’ve seen, this type of network makes the processing of sensory input surprisingly accurate.

The first societies based on spoken language in a certain sense represent the equivalent of the C. elegans proto-brain: each person can exchange signals with any other member of the group.

The same thing can’t be said of the written word. On the one hand, writing is in itself a way to store information, with the obvious advantage that something written today can be read centuries later, as happened with Socrates’ words. It can therefore be used to create information networks that develop in time, and not just in space.

But on the other hand writing comes with some distinct disadvantages, including the externalisation of memory. It is for example quite an inflexible form of memory, a bit like DNA. Languages that aren’t officially used in a written form are more plastic than written ones (Hollenstein and Aepli 2014). Languages without a written tradition are not less expressive than official languages, quite the contrary. Consider the importance of dialect poetry in countries that have an official language such as Italy, or the acknowledgement that “the highly verbal” African American Vernacular English “is famous in the annals of anthropology for the value placed on linguistic virtuosity” (Pinker 2003).

The fact that in Switzerland 80% of the population uses a non-written language as their first language (this is the percentage of people who speak Swiss German or Italian dialects) may be due to the “early and long-lasting interest in pragmatism” in the country (Tröhler 2005).

The Latin saying verba volant, scripta manent can be interpreted in two ways: written words are of more certain interpretation than spoken ones, or spoken words let you fly away, written words keep your feet on the ground.2

Regardless of speculations concerning the connections between the nature of a people and the use of writing, what’s certain is that writing started a transformation of the human social network. Whereas before one person could have bidirectional contacts with about one hundred others, now one single written work created a one-way channel between writer and readers. Writing is a “one-to-many” communication system. Intellectual currents do form around written works, but societies cannot participate as a whole. First writing, and then printing, in the best-case scenario create small connected sub-networks (the intellectuals) whose ideas propagate until they eventually reach the rest of society.
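The difference between the two topologies can be made concrete with a back-of-the-envelope count (the function and the numbers below are illustrative, not from the text): a group of n speakers supports n(n−1)/2 bidirectional channels, whereas one writer broadcasting to n−1 readers creates only n−1 one-way links.

```python
def channels(n, topology):
    """Count communication channels in a group of n members.

    'peer': every pair can talk both ways (spoken language);
    'star': one writer/broadcaster reaches n - 1 readers, one way.
    """
    if topology == "peer":
        return n * (n - 1) // 2   # bidirectional pairs
    if topology == "star":
        return n - 1              # one-way links from the centre
    raise ValueError(topology)

# A village of 100 speakers vs. one author and 99 readers:
peer_channels = channels(100, "peer")   # 4950 two-way channels
star_channels = channels(100, "star")   # 99 one-way channels
```

The peer network thus carries fifty times as many channels for the same hundred people, and each of them is bidirectional, which is what allows mutual adjustment and, eventually, learning.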

The situation changed radically with the invention of the telegraph, for two reasons. The first is that, for the first time in the history of complex organisms, these organisms managed to communicate with each other at speeds close to that reached by the nervous system. The second is that the telegraph made it possible to create a network that was topologically similar to neural networks, in which each individual could connect to all other individuals.

“Could connect” because in reality the telegraph is an instrument that’s problematic to build and keep working over long distances, and it’s also complicated to use. But it was still an embryonic nervous system, as was obvious to Carl Friedrich Gauss almost two centuries ago.

Gauss, as well as being one of the most brilliant and prolific mathematicians and physicists of all time, never turned down an opportunity to experiment. In 1833 he wrote to the astronomer Heinrich Wilhelm Matthias Olbers: “I don’t remember any mention to you about an astonishing piece of mechanism we have devised” (Dunnington et al. 2004).

That piece of mechanism was the telegraph, which Gauss and his young colleague Wilhelm Eduard Weber had invented, built and installed so they could quickly communicate between the observatory and the institute of physics. Gauss and Weber published their results in German in the Göttingische Gelehrte Anzeigen (Dunnington et al. 2004). Despite the initial interest shown by politicians (it was presented to the Duke of Cambridge), the invention wasn’t a success, also because after the two inventors proved the feasibility of the project they didn’t have time to dedicate to its industrial development.

But the importance of the discovery was clear to Gauss from the start. In 1835, the scientist wrote to his ex-student Heinrich Christian Schumacher: “The telegraph has important applications, to the advantage of society and … exciting the wonder of the multitude”. In the same letter he also mentioned the estimated investment necessary to lay the required wires around the world: 100 million thalers.3 He concluded by mentioning that he had found it easy to teach his daughter to use the instrument (Dunnington et al. 2004).

At the time, no one shared Gauss’s enthusiasm, and the telegraph was reinvented and developed by others. It’s easy to understand Gauss’s contemporaries, as it must have been hard to imagine how two needles miles away that move in synchrony with each other could one day “excite the wonder of the multitude”.

What Gauss was rightly enthusiastic about was the possibility of communicating instantaneously between one place and another on the planet. To create, to all intents and purposes, a network of electrical synapses in which people are the neurons. It took humanity almost 200 years to evolve the telegraph into something that could arouse the interest of the masses. This was because the necessary investment was huge, and because the technology wasn’t easily scalable. A wire one mile long is one thing, a network of wires around the planet, a “world wide web”, a completely different concept.4

But all things considered, the result is the same: each element can communicate in a bidirectional way, almost instantaneously, with a very high number of other elements, which is what matters in terms of processing information in a network.

This means the telegraph and its spin-offs are perfect to create the “next level” of intelligent systems: they can transform a social network into a network that’s topologically similar to a network of neurons, with similar cognitive abilities.

Systems such as radio and television on the other hand, with their one-way star structure (a few transmitters and many receivers) are exactly the opposite. They’re used to connect one node to others and not vice versa.

If we consider this in terms of Hebbian learning, radio and television don’t let the network learn much: there’s one single central neuron, connected to peripheral neurons, that acts in a relatively independent way. As predicted by Hebb, connections will be created between peripheral neurons only because they’re in tune with the central neuron, which occurs often, considering its power.
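This dynamic can be sketched with a toy Hebbian simulation of a broadcast topology (everything here, the network size, the learning rate, the relay behaviour, is an illustrative assumption, not taken from the text): peripheral nodes that merely relay the hub’s signal end up with identical mutual weights, which encode nothing beyond the hub’s own activity.

```python
import random

def hebbian_star(steps=10000, eta=0.01, n=5, seed=0):
    """Toy Hebbian updates on a broadcast (star) topology.

    One central 'transmitter' fires at random; the n peripheral nodes
    simply repeat its signal (one-way channels, as with radio or TV).
    The Hebbian rule dw = eta * x_i * x_j then strengthens the weight
    between any two peripherals purely because both echo the hub.
    """
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]   # peripheral-to-peripheral weights
    for _ in range(steps):
        hub = 1 if rng.random() < 0.5 else 0   # central signal
        x = [hub] * n                          # peripherals relay the hub
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += eta * x[i] * x[j]
    return w

w = hebbian_star()
# All off-diagonal weights grow in lockstep: the network has "learned"
# only the hub's activity, not anything about its own members.
```

In a peer topology the x values would differ from node to node, and the weight matrix could store genuinely distributed information; here it cannot.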

So Hebbian learning, considering the topology of the network, was responsible for the fact that radio was a highly effective instrument of propaganda for early twentieth-century governments. As Joseph Goebbels said soon after the Nazi party came to power in Germany in 1933: “It would not have been possible for us to take power or to use it in the ways we have without the radio…” (Adena et al. 2015).

Internet Companies

When we talk about Artificial Intelligence we mostly think of companies like DeepMind, a subsidiary of Alphabet Inc. (formerly Google Inc.), with neural networks that can learn to play Go or video games, or of certain products like chatbots and speech-to-text, rather than of the algorithms currently used by Google, Facebook, Amazon and Netflix (the FAANGs excluding Apple, or the FANGs) to make a fortune.

Although the technology is very advanced, the algorithms, in terms of mathematical sophistication, are less so. But their ability to extract a great deal of information is notable, and this gives these companies a huge advantage over everyone, governments included.

One of the most obvious examples is Alphabet Inc., which continues to generate healthy revenue through the Google search engine. Google’s ability to find the most influential nodes in a network is based on a brilliant algorithm published by the two founders. This algorithm, called PageRank, calculates the centrality of nodes (Brin and Page 1998) in a very precise way, and could finally be implemented by Google’s engineers on a scalable and cheap infrastructure. It really was a silver bullet. But it wasn’t just PageRank that made Google one of the most successful companies in the history of finance.

PageRank and its implementation allowed Google to offer an excellent service, fulfilling its mission: “To organise the world’s information and make it universally accessible and useful”.
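The core of Brin and Page’s idea can be sketched as a short power-iteration routine (a minimal illustration: the damping factor d = 0.85 is the value suggested in the 1998 paper, while the function itself and the three-page example are hypothetical, not Google’s implementation):

```python
def pagerank(links, d=0.85, tol=1e-9):
    """Power-iteration PageRank.

    links maps each node to the list of nodes it links to;
    d is the damping factor from Brin and Page (1998).
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        # Rank held by dangling nodes (no outgoing links) is
        # redistributed uniformly over the whole network.
        dangling = sum(rank[u] for u in nodes if not links[u])
        new = {u: (1 - d) / n + d * dangling / n for u in nodes}
        for u in nodes:
            for v in links[u]:
                new[v] += d * rank[u] / len(links[u])
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new

# A tiny web: pages A and B link to each other, and both link to C.
ranks = pagerank({"A": ["B", "C"], "B": ["A", "C"], "C": []})
# C, which receives links from both other pages, ends up most central.
```

The elegance lies in the recursion: a page is important if important pages link to it, and the iteration converges to the stationary distribution of a random surfer who follows links with probability d and jumps to a random page otherwise.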

But what makes Google still today such a successful company is the application of simpler strategies, much less sophisticated than PageRank, used to extract energy (revenue) from the external environment. An environment which, in this case, consists of its users and of the companies who want to reach those users.

Today, Google has managed to make the dreams of some CERN researchers of the 1990s come true: to make people pay a fee for each site they browse.5

In its early years, Google effectively put its mission of organising information into practice. The compromise was that the service wasn’t monetised through payments (the founders’ initial idea), but by proposing sponsored links, always pertinent to the user’s search and clearly separated from the main results.

People looking for “Lawyers in Paris” would find various websites ranked by relevance, plus others, clearly highlighted as “sponsored links”, relegated to the right of the page. Sponsored links were ranked by relevance and by the amount of fees paid to Google.

The breakthrough arrived with the analysis of user behaviour. The company noticed that many of the search queries it received weren’t keywords, but names of web sites. In other words, people who wanted to visit the site of the company ACME didn’t type its address in their browser. They would google “ACME” and then click on the first link proposed, which infallibly (and easily) redirected them to the company’s site.

This is when Google had the label “Sponsored links” changed to a timid “Ad”, and started serving those paid links on top of the real results. In this way, ACME, in addition to optimising its content according to what Google wants, has to pay to be on top of the paid links.

ACME must pay, because if it does not pay enough, competitors might appear at the top.

How little importance Google gives to linking the user to the web site they actually wanted to reach is clear in Fig. 5.1. In this case, the first result in a search for “SK Traslochi” is the competitor “traslochi 24”. The link to the competitor is labeled “Ann.”, which has no meaning whatsoever in Italian, and surely not “Sponsored”.6 Users looking for “SK Traslochi”, accustomed to trusting Google, are easily tempted to click on the competitor’s link. That is evil, and far from Google’s original mission.
Fig. 5.1

Google proposing a competitor’s website as first (sponsored) link

Google has other revenue channels apart from its search engine, all based on analysing user behaviour. For example, thanks to a “free” tool used by websites to analyse web traffic, called Google Analytics, a huge number of websites7 send Google detailed information on who is reading what.

It is interesting to read, in the book written by Google’s first director of marketing, Douglas Edwards (2011), how the idea of inserting sponsored results was initially considered immoral by the company’s employees.

Now, however, Google gives companies paid-for visibility: if you want to appear as the first link when your customers google your company’s name, you have to pay. If your competitors have a good Search Engine Optimization expert, you might have to pay a lot.

The model described above is in line with the real mission of Google’s holding, obvious if we look at the name: Alpha-bet, a bet on alpha, the symbol used in finance as a measure of the excess return of an investment in relation to the market benchmark.

Alphabet’s raison d’être isn’t to provide the perfect website, but rather a website that will satisfy the user, making them pay for it, albeit in an indirect way. There is no such thing as a free lunch: users think they’re not paying, but companies have to pay for the user to find them, and as a consequence must increase their prices to compensate for these additional costs.

With Google Analytics, companies think they are getting a service for free. They do, however, have to pay Alphabet for their products and services to be displayed to their target users, something Google is well aware of. Google, in fact, collects non-aggregated browsing data, in other words it knows exactly which person visited the website, but it provides Google Analytics users only with aggregated data. What’s more, Google can cross-reference visit data with search data, to identify the user’s profile precisely.

Similar strategies are used by Facebook to circulate posts, and by Amazon after it introduced “sponsored products”.

On Regulating the Private Sector

As Yoshua Bengio says, when powerful algorithms are used exclusively for company profit, this creates dangerous situations:

Nowadays they [the big companies] can use AI [Artificial Intelligence] to target their message to people in a much more accurate way, and I think that's kind of scary, especially when it makes people do things that may be against their well-being (Ford 2018).

Bengio isn’t exaggerating, although it does not mean that the role of these companies has never been beneficial.

While the public sector was responsible for the impetus behind the creation of the World Wide Web, it was the private sector – companies like Google – that made the invention usable by the masses. Reading Edwards (2011), one sees Google during its first years of existence as a community of hackers,8 whose purpose was effectively to organise global information and, above all, solve the technological problems that made indexing billions of web sites more and more difficult.9

This vocation for problem solving, for “making the world a better place”, is one of the mantras of all technological start-ups, right up to the day they’re listed on the stock exchange. As Cringely (1996) explains, IT companies have to aim to be listed on the stock exchange not to acquire capital (compared to any other industry, IT is not very capital intensive), but to provide liquidity to workers, who are paid in company stock options. The workers have to be paid in stock options because the good old hackers were inclined to leave the company after solving the first intellectually stimulating problem they were given, to look for another problem. It’s only the mirage of millions of dollars that lets these companies keep the first wave of creative minds, the ones that can solve the most challenging problems, on their payroll for more than a couple of years.

When the company is listed on the stock exchange, the first wave of hackers jump ship, and the company organisation is set up. But the price of the shares on the market must always continue to grow, because if it doesn’t there’ll be an excessive brain drain and, as a consequence, a loss of capital. A vicious circle that would destroy the company.

In short, the company’s mission becomes: to increase revenue.

Google found itself in the right place at the right moment: it received major funding just before the new economy bubble burst in 2001 (Edwards 2011). This let the company recruit the finest computer scientists and solve problems that had until then been considered unsolvable.

If, as Tim Berners-Lee wrote, “…people say how their lives have been saved because they found out about the disease they had on the Web, and figured out how to cure it”,10 the credit goes to Google too. But when Edwards (2011) describes Google’s employees being told the company would be listed on the stock exchange, we see kids who have won the lottery rather than people who want to make the world a better place. The result is that, as Berners-Lee said on the 30th anniversary of his proposal for an information management system at CERN in Geneva, “user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation”.11

In the West, as well as the above-mentioned technology giants listed on the stock exchange, there are also new entries that have taken advantage of the power of the Internet to solve real problems, as Google did in the 90s.

One example is Airbnb. There are many advantages for the economy, as confirmed by independent studies (Quattrone et al. 2016) and, of course, by Airbnb itself. The services offered by Airbnb are in some cases more efficient than the services offered to landlords by the state. In Italy, for instance, unpaid rent amounts to 1.2 billion euros per year, and it’s hard to imagine recovering the debt you are owed in less than a year.12 With the law incapable of guaranteeing fulfilment of a contract, Airbnb is considered the only viable option for renting out a house.

Airbnb is in the same situation that Google was in around the year 2000, or Facebook a few years later, but if left unregulated there’s nothing to guarantee it won’t come to represent the same risks as Google and Facebook do today.

Facebook, for example, has developed a technology that’s ideal for someone who wants to make harmful information go viral. Let’s take vaccines for example. Citizens/users find themselves torn between two contrasting sources of information: on the one hand medicine, which wishes to assure them that the probability of infection is minimised; on the other, a user or organisation which, in good faith or not, spreads the word that vaccines are harmful.

A Facebook user, the father or mother of a child, who in their timeline sees a post entitled “Vaccines reduce the probability of infection in children” won’t give it much consideration. But “Vaccines cause autism” shocks the user, who hesitates while scrolling the timeline. The Facebook algorithm is not designed to minimise deaths from infection, but to maximise user engagement, and slowing down while scrolling means more engagement.

This is enough to display the post against vaccines to other users too, and hide those promoting vaccines: the time each user spends on the application, along with the number of users, is one of the most important parameters for investors. The number of deaths of children who weren’t vaccinated does not appear in investor relations.

Facebook acknowledges that some of its users create fake news for economic gain,13 but does not acknowledge that Facebook Inc. also benefits from fake news. Facebook makes a profit every time a user clicks on a sponsored link, regardless of the content. Posts containing disinformation are the perfect instrument for identifying the ideal target for those who want to sell something, first and foremost politicians: always on the lookout for gullible people.

Let’s take Italy, the country that in the last century was the testing ground for the rise and consolidation of fascism. In the bel paese, the two political parties that formed the Italian government in May 2018 (the Five Star Movement and Lega Salvini Premier) both promoted anti-vaccination policies while in power (Sole 2018), (Repubblica 2019).

Some Italian Five Star Movement members of parliament publicly uphold the existence of chemtrails and Judeo-Masonic conspiracies, not to mention the fake moon landing and mermaids.14,15

Populist parties feed off gullibility. There are no limits to the pre-electoral promises a “flat earth” voter will believe, or to the excuses given, without fail, after the elections.

Once gullible users have been identified, they can be targeted with the most unbelievable messages, from immigration being the cause of the economic crisis to the phantasmagorical profit to be made from exiting the European Union.

Not by chance, the Five Star Movement is the political arm of a communications company, one that understood before others how to use digital channels (Casaleggio 2008). Likewise, the Lega Salvini Premier makes use of a seasoned digital communications team (Espresso 2018).

The spectrum of disinformation is wide, and obviously Facebook and other platforms are not the only cause of disastrous political decisions. For example, a variety of factors made 17 million British citizens vote in favour of Brexit (Kaufmann 2016). But the 350 million pounds per week that Brexit was supposed to save the British economy (Rickard 2016) remains a masterful ruse, or a criminal use of digital communications channels, depending on how you look at it.

Facebook is unrivalled in its ability to find gullible user segments. The statistical analyses done by academics to find out which human profiles voted for Brexit (and therefore also their reasons) are certainly sophisticated, but they are based on ridiculously small samples when compared to the company’s “big data”. Two examples: Kaufmann (2016) concluded that “primarily values … motivated voters, not economic inequality” after analysing the results of a survey of 24,000 people. Swami et al. (2017) concluded that people who believe in Islamic conspiracies are more likely to vote for Brexit from an analysis of an opinion poll with 303 participants.

Let’s compare these figures with Facebook use in the UK at the time of the referendum: 37 million users,16 with up to 2 hours per day spent using the app.17

Technology giants have more data on habits, behaviour and opinions than any other human organisation. In 2017, “only around 43% of households contacted by the British government responded to the LFS [Labour Force Survey]”, a survey which is used to prepare important economic statistics in Great Britain.18

Facebook and Google, by contrast, know where users are, whom they are acquainted with, and what they are watching. Google users, through a simple search, tell Google their wishes and problems, things they probably haven’t told anyone else, without having to answer a single survey question.

The problem with big Internet companies is that their organisation and capabilities could almost be considered those of a global brain. Their behaviour, however, cannot be considered in the same light.

Behind the success of complex organisms’ brains there is always a cost-benefit balance. As mentioned in the previous chapter, the expensive tissue hypothesis proposes that the human brain, as it became more and more costly in terms of energy usage, made the organism sacrifice part of other essential organs, such as the digestive system or the locomotor apparatus. But this didn’t create problems for the organism, quite the contrary.

Even if we look at the brain as an independent system, the brain has always considered itself part of the organism, and has always identified the environment outside the organism as the source of the energy it needs to survive. The organism is an organisation of collaborating organs: the brain does not feed itself at the expense of the others.

Big Internet companies, on the other hand, see human society as the environment from which they extract energy. In the best-case scenario they can be compared to parasites, foreign organisms that feed off their host. In the worst-case scenario they are like tumours, sub-organisms that grow out of control and that, in order to maintain their level of low entropy, are willing to sacrifice the very life of the organism of which they themselves are part.

This might seem excessive, but the number of deaths among people who haven’t been vaccinated could be just the tip of the iceberg. This is the mechanism by which new nationalist, populist or openly fascist movements came to power, including Donald Trump in the US, the Five Star Movement and the Northern League in Italy, Narendra Modi in India (the first to use WhatsApp,19 a service owned by Facebook, in politics) and Jair Bolsonaro20 in Brazil. They all exploited so-called social networks.

Trumpeting about making the world a better place, the big tech companies have become similar to their parodies: in the TV series Silicon Valley, Gavin Belson, the CEO of the fictitious Hooli Inc., clearly based on Google, says: “I don’t want to live in a world where someone else makes the world a better place better than we do.”

In conclusion, on the one hand there are companies that provide services, from web indexing to renting homes and machines to hiring labour, in a much more efficient way than states.

On the other hand, there’s the problem that the ultimate aim of these companies is to increase revenue, whatever the cost. The aim is not to improve people’s lives.

Probably, and hopefully, in the future both the value of Internet companies and the need to regulate their actions will be acknowledged, in exactly the same way as is done for water and electricity21 today. In practice, this will force these systems to acknowledge their role as part of society.

The Evolution of Artificial Intelligence

While artificial and natural neural networks have some things in common, in the context of this book it doesn’t make much sense to ask oneself if one day artificial neural networks will be more intelligent than Homo sapiens.

Artificial intelligence may have been created by Homo sapiens, but it is no less natural than its creator, or than any mechanism that other forms of life, as intelligent systems, developed to extract energy and feed their cognitive abilities. It was natural evolution that led to the introduction of the C. elegans neural network, and to “artificial” neural networks a few hundred million years later.

What does make sense to ask is why Artificial Intelligence systems emerged, why in this form, and what role they will play in the evolution of life on earth. Or rather: how their role will evolve, considering that they already play an essential one in our society.

The reason why we can’t remain indifferent to artificial neural networks is that, unlike other Artificial Intelligence systems, they are incredibly autonomous. It is as if Homo sapiens had effectively created a sort of brain, and the brain then learnt on its own. Mathematicians create the structure and expose it to the environment. The structure, autonomously, not only learns: it adapts so that it can represent the environment.
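This autonomy can be illustrated with a minimal, hypothetical sketch (the data, hyperparameters and variable names below are invented for the example): a single artificial neuron whose structure is fixed by the programmer, but whose weights adapt on their own as the neuron is exposed to its “environment”, here the logical OR function.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The "environment": examples of the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
# The structure the mathematician creates: one neuron, two weights, one bias.
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

# Exposure to the environment: the weights adjust themselves to reduce error.
lr = 0.5
for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target          # gradient of the logistic loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)
```

Nobody programs the rule “output 1 if either input is 1”: the weights discover it, simply by repeatedly reducing the distance between prediction and observation.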

Considering how the cognitive ability of artificial neural networks can evolve autonomously, it’s essential we realise why we reached this point, and which direction we might take.

An important aspect of artificial neural networks is that Homo sapiens, scientist or not, has little say in how the system will behave. Today the data, and no longer the mathematical model making sense of the data, is the true mine of information. When you have enormous amounts of data, mathematical sophistication becomes less important.

This is something old-school scientists, like physicists, have had difficulty recognising, unlike computer scientists. In the aptly titled “The Unreasonable Effectiveness of Data”, three famous researchers (Alon Halevy, Peter Norvig and Fernando Pereira (2009), all Google employees) refer to Wigner’s article (1960) mentioned in the previous chapter.

The article was written when the academic world, and not only it, had accepted a return to neural networks after the “long winter” of the 1980s and 1990s. Although the article doesn’t mention neural networks, it predicts exactly what they would be capable of in the near future: thanks to the analysis of significant amounts of data, machines would be able to perform tasks that had been unimaginable until then. The sophistication of the mathematical model that explains a phenomenon is less important than being able to predict it.

At the end of the day, that’s all intelligent systems have to do. Creating mathematical models is characteristic of no complex organism except a very few Homo sapiens. Most intelligent systems reduce uncertainty using mechanisms similar to neural networks, i.e. without trying to understand why. It’s not surprising, therefore, that in artificial neural networks we’ve developed something that works, in cognitive terms, quite similarly to other intelligent systems, like the C. elegans22 brain. For now, though, with big limits.

The algorithms used today are mostly classifiers. In other words, attempting to maximise a gain function, the algorithm puts an input into a probability box: label “A” has a certain probability of being true, label “B” another probability, and so on.
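The “probability box” idea can be sketched with the softmax function, the step most modern classifiers use to turn their internal scores into probabilities (the labels and scores below are invented for the example):

```python
import math

def softmax(scores):
    # Turn arbitrary scores into one probability per label ("box").
    # Subtracting the maximum keeps exp() numerically stable.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained network might assign to three labels.
labels = ["A", "B", "C"]
scores = [2.0, 1.0, 0.1]
probs = softmax(scores)

# The classifier's answer is simply the best-filled box.
best = labels[probs.index(max(probs))]
print(best, probs)
```

Whatever the network computed internally, the final output is a probability for each label; the probabilities sum to one, and the prediction is just the most probable box.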

In a game of chess, the gain function is winning. After having analysed hundreds of millions of games, a neural network can predict that a certain move will increase the chances of winning. But we’re still a long way from being able to say machines are sapiens.

In 1950, Alan Turing (1950) developed what today is called the “Turing test”: a machine exhibits human cognitive abilities if a human can interact with it (by chatting, for example) without being able to tell they are talking to a machine. As some people say, either the machine is very competent, or the person is not. In any case, it does not seem to be a definitive test for measuring how “human” an algorithm is. And typically, algorithms in Artificial Intelligence are not built to behave as humans do. In Artificial Intelligence we look for other capacities, ones that humans don’t possess.

If Ke Jie, the Go champion who lost to AlphaGo (Chao, 2018), had played against a remote machine, he probably wouldn’t have known his opponent was not human. But this doesn’t change the fact that AlphaGo does not learn or think like a human being: a person learns to play Go in just a few minutes, by simply reading the rules. If Ke had had to teach the machine to play Go, he would have immediately realised there was not a person on the other end of the line. The machine will never be able to think like a Homo sapiens, for the simple reason that this is not its purpose.

For scientists, entrepreneurs and, above all, investors, it would not make sense to invest time and money in a synthetic brain that is the same as a human one: you might as well hire a person. With ever more data produced around the globe, what does make sense is investing in something that can extract information from huge amounts of data and use it. Something humans find very hard to do.

As Berners-Lee wrote with Hendler and Lassila in 2001, “The Semantic Web will enable machines to comprehend semantic documents and data, not human speech and writings.” Computers will be able to take bookings, but they won’t understand what a hotel is.

The fact that a piece of software uses our own language does not necessarily mean it also has the same internal representation of reality. We have a human representation of reality, based on our history and evolution; computers don’t. The advantage of neural networks is that they learn to do “the right thing” without needing an operator to program the logic, but this is a weakness too. A computer for which the right thing is making paper clips won’t stop until everything is paper clips (Bostrom 2003). A computer trained to win at Go doesn’t have the sensibility needed to teach the game to a child. A computer trained to generate revenue won’t stop if a few people die of infection, or if, after thousands of years of war and just one century of peace, a continent, Europe, risks falling into nationalist chaos. Once again.

Earth today, as Gauss imagined, is almost completely wired and connected. Soon, probably, there won’t be any isolated area left: every thing, and every person, everywhere, will be continuously connected to the Internet thanks to a network of satellites.23 The amount of information managed by the Internet is already beyond the scope of Homo sapiens, and it will soon explode.

There is not only the statistical certainty, considering the evolution of life in the past, that the cognitive abilities of this meta-organism will exceed those of every individual, but also a logical one: this book was written precisely to explain why intelligent systems must, at a certain point, aggregate into a system able to process more information.

If the human organism has reached its information-processing limit, the only thing it can do to survive is create information networks of human organisms, and start processing information “outside” the single element. Homo sapiens started doing this a few million years ago, when some kind of language was introduced, but the process has now become more extreme, with the introduction of a global neural network.

The emergence of the human meta-organism gives us sapiens the feeling, as Harari (Atlantic 2018) rightly says, that we are part of society but don’t have a real role. Like the neurons of a brain, we are increasingly part of a meta-system, which protects and feeds us, but we are less free to learn and process information, and insignificant in relation to everything else. Just as neurons learnt to communicate with only a few signals, some Homo sapiens have recently abandoned a language of complex syntax to start using smiley hearts and hashtags.

Our ability to adapt lets us transform ourselves into organisms that can communicate quickly but are less able to process information; we become cells of this meta-organism, working diligently for its survival without knowing why. In this meta-organism, as for the cells of a complex organism, safety increases while freedom becomes a thing of the past.

More and more, the medium is the message: the ability of software to associate words with events in our lives, to use our language as a medium, is mistaken for intelligence. This does not mean, however, that conversational user interfaces, so-called chatbots, are not going to be “the next big thing” after websites, blogs (web 2.0) and social networks. They most probably will be.

The risk is that sapiens-to-sapiens communication will become a thing of the past, and machine-to-machine communication will become the driving force of the meta-organism. But those machines will still have absolutely no real understanding of our reality. Can we really put our lives in the hands of such systems, even more so if their only mission is increasing revenue?

Until today, scientists, with all their weaknesses but thanks to their obsessive quest for rationality, have helped Homo become the dominant species on earth. Whoever started mastering fire (“The Greatest Ape-Man of the Pleistocene”, as the title of Roy Lewis’ (1960) masterpiece was translated into Italian), Aristotle, Galileo, Newton, Enrico Fermi or Tim Berners-Lee, you name it: their first goal has never been ruling the world or making a fortune.

But they produced tools whose power could easily be exploited by second-class scientists: almost all the founders of today’s big tech companies started their careers as scientists. Bertrand Russell (1971) used to say that “no transcendent ability is required in order to make useful discoveries in science”. But, he added, the person “of real genius is the person who invents a new method”. Inventing a new medium of communication, the World Wide Web, is genius. Exploiting it through PageRank is brilliant.

Indeed, it is relatively easy to make an atomic bomb too. But because governments recognise the danger of atomic bombs, access to the materials that can be used to make such weapons is controlled at a global level, to minimise the risk of a nuclear catastrophe. Perhaps it might soon be wise to control the development of Artificial Intelligence too.

So-called Artificial Intelligence might actually help us live a better life. For this, though, we need scientists who are more architects than engineers; artists, or sociologists, rather than technicians obsessed with the search for optimisation.

If things continue the way they are going, the global neural network will probably turn out to be very similar to a new form of nuclear energy: a technology which can improve the quality of life by producing cheap electricity with a low environmental impact, but which was initially used to kill millions.

The difference compared to nuclear energy, though, is that those who invented Artificial Intelligence have less and less control over it, and the technology is becoming a necessity, much more than nuclear power ever was. It can’t simply be swept under the carpet and forgotten. The use of Artificial Intelligence, as the emerging fascism reminds us, risks being an autoimmune disease, the defence systems attacking the organism itself rather than its enemies, which could lead the human race to unleash the last fatal attack: against itself.


  1. Thanks to David Malki for his blog, where all the examples quoted were found.

  2. I was told the second interpretation by my late father, who loved Latin and Ancient Greek.

  3. Considering the observatory’s budget was 150 thalers (which Gauss was known to complain about), the investment would be about 100 billion dollars today, a more than reasonable estimate.

  4. The telegraph in topological terms can be considered a network similar to that of the brain, but in terms of physical connections it is different, and this made development difficult: if a telegraph can connect to 10,000 other telegraphs, it can’t have 10,000 outgoing connecting wires, like the synapses of a neuron. Hubs are required to route the communication between two elements: this was done for the telephone by switchboard operators, until a few decades ago, and is now automated using specific computers called routers.

  5. At the time, I heard many such complaints from my colleagues, who didn’t realise that, if the Web won over all other solutions (does anyone remember Gopher?), it was precisely because it was free of any commercial license, see (Berners-Lee and Fischetti 1999).

  6. Verified 7 April 2019.

  7. Google does not provide any figures on the number of websites using its product.

  8. “Hackers solve problems and build things, and they believe in freedom and voluntary mutual help.” Eric Raymond in How To Become A Hacker, verified 12 April 2019.

  9. “The Friendship That Made Google Huge” by James Somers, The New Yorker, December 3, 2018.

  10. Tim Berners-Lee, “Answers for Young People”, checked on 12 April 2019.

  11. Tim Berners-Lee, “30 years on, what’s next #ForTheWeb?”, verified 12 April 2019.

  12. La Stampa (Italian newspaper), 14 August 2018, on a portal for recovering unpaid rent from tenants in arrears, verified 12 April 2019.

  13. “We’re getting rid of the financial incentives for spammers to create fake news -- much of which is economically motivated.” Mark Zuckerberg, Second Quarter 2018 Results Conference Call.

  14. “Carlo Sibilia, the Five Star Movement’s conspirationist, the new Interior Ministry Undersecretary”, La Repubblica, 13 June 2019.

  15. “Secrets and chemtrails: the long list of Five Star conspiracies”, Espresso, 26 September 2014, verified 9 April 2019.

  16. Forecast of Facebook user numbers in the United Kingdom (UK) from 2015 to 2022, checked 9 April 2019.

  17. Average daily usage time of Facebook in the United Kingdom (UK) 2014, by age and gender, checked 9 April 2019.

  18. The Economist, 24 May 2018, “Plunging response rates to household surveys worry policymakers”, verified 9 April 2019.

  19. “India, the WhatsApp election”, The Financial Times, May 5, 2019.

  20. “How social media exposed the fractures in Brazilian democracy”, The Financial Times, September 27, 2018.

  21. The Economist, 23 September 2017, “What if large tech firms were regulated like sewage companies?”, verified 12 April 2019.

  22. One of the reasons why it was difficult for artificial neural networks to emerge is that a system incorporating the programmer’s logic (an “expert system”) functions immediately, without the need for training. But such systems find it more difficult to learn new strategies when the environment changes.

  23. “Satellites may connect the entire world to the internet”, The Economist, December 8, 2018.


  1. Adena, M., Enikolopov, R., Petrova, M., Santarosa, V., Zhuravskaya, E. (2015). Radio and the Rise of the Nazis in Prewar Germany. The Quarterly Journal of Economics, 130(4), 1885–1939.
  2. Berners-Lee, T., Fischetti, M. (1999). Weaving the Web: The original design and ultimate destiny of the World Wide Web by its inventor. DIANE Publishing Company.
  3. Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, 277–284.
  4. Brin, S., Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1–7), 107–117.
  5. Casaleggio, D. (2008). Tu sei rete. La Rivoluzione del business, del marketing e della politica attraverso le reti sociali.
  6. Cringely, R. X. (1996). Accidental empires. New York: HarperBusiness.
  7. Dunbar, R. I. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469–493.
  8. Dunnington, G. W., Gray, J., Dohse, F. E. (2004). Carl Friedrich Gauss: Titan of science. MAA.
  9. Edwards, D. (2011). I’m feeling lucky: The confessions of Google employee number 59. HMH.
  10. Ford, M. (2018). Architects of Intelligence: The Truth about AI from the People Building It. Packt Publishing.
  11. Harris, W. C. (1889). Nature, Vol. 1: A Weekly Journal for the Gentleman Sportsman, Tourist and Naturalist.
  12. Hofman, M. A. (2014). Evolution of the human brain: when bigger is better. Frontiers in Neuroanatomy, 8, 15.
  13. Hollenstein, N., Aepli, N. (2014). Compilation of a Swiss German dialect corpus and its application to pos tagging. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (pp. 85–94).
  14. Kaufmann, E. (2016). It’s NOT the economy, stupid: Brexit as a story of personal values. British Politics and Policy at LSE.
  15. Lerner, F. (2009). The story of libraries: From the invention of writing to the computer age. Bloomsbury Publishing.
  16. Lewis, R. (1960). The Evolution Man, or, How I Ate My Father. Pantheon. Translated in Italian as “Il più grande uomo scimmia del Pleistocene”. Adelphi.
  17. Mallory, J. P., Adams, D. Q. (Eds.). (1997). Encyclopedia of Indo-European Culture. Taylor & Francis.
  18. Pinker, S. (2003). The language instinct: How the mind creates language. Penguin UK.
  19. Quattrone, G., Proserpio, D., Quercia, D., Capra, L., Musolesi, M. (2016). Who benefits from the sharing economy of Airbnb? In Proceedings of the 25th International Conference on World Wide Web (pp. 1385–1394). International World Wide Web Conferences Steering Committee.
  20. Rickard, S. J. (2016). Populism and the Brexit vote. Comparative Politics Newsletter, 26, 120–22.
  21. Russell, B. (1971). Mysticism and Logic, and Other Essays. Barnes & Noble.
  22. Tröhler, D. (2005). Langue as homeland: The Genevan reception of pragmatism. In Inventing the Modern Self and John Dewey (pp. 61–83). Palgrave Macmillan, New York.
  23. Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
  24. Ziolkowski, J. M. (2011). De laude scriptorum manualium and De laude editorum: From Script to Print, From Print to Bytes. Ars edendi Lecture Series, 1, 25–58.

Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Mario Alemi, Elegans Foundation, London, UK
