
Machine Intelligence: Blessing or Curse? It Depends on Us!

Chapter in Towards Digital Enlightenment

Abstract

Artificial Intelligence (AI) can help us in many ways. Particularly when combined with robotics, AI can make our everyday lives more comfortable (e.g. by cleaning our homes). It can perform hard, dangerous and boring work for us. It can help us save lives and cope with disasters more successfully. It can support patients and elderly people, assist us in our everyday activities, and make our lives more interesting. I believe that most of us would like to benefit from these unprecedented opportunities. So far, however, every technology has come with side effects and risks. As I will show, if we do not pay attention, people may lose self-determination and democracy, companies may lose control, and nations may lose their sovereignty. In the following, I describe a worst-case and a best-case scenario to illustrate that our society is at a crossroads. It is crucial now to take the right path.

This article was written by Dirk Helbing for Deutsche Telekom’s Digital Responsibility initiative and first published under the URL https://www.telekom.com/en/company/digital-responsibility/details/machine-intelligence--blessing-or-curse--it-depends-on-us--429070


Notes

  1. J. Brockman (ed.) What to Think About Machines that Think (Harper Perennial, 2015).

  2. See http://www.computerworld.com/article/2901679/steve-wozniak-on-ai-will-we-be-pets-or-mere-ants-to-besquashed-our-robot-overlords.html, accessed on January 24, 2016.

  3. See https://www.tensorflow.org/, accessed on January 24, 2016.

  4. See http://foreignpolicy.com/2014/07/29/the-social-laboratory/ and http://www.internationalinnovation.com/predicting-how-people-think-and-behave/, accessed on January 24, 2016.

  5. See, for example, http://www.wsj.com/articles/furor-erupts-over-facebook-experiment-on-users-1404085840 and http://www.pnas.org/content/111/24/8788.full, accessed on January 24, 2016.

  6. E. Pariser, Filter Bubble: What the Internet Is Hiding from You (Penguin, 2011).

  7. See https://www.youtube.com/watch?v=KlWeuK46_nA and https://www.youtube.com/watch?v=pplhyw-vEWg, accessed on January 24, 2016.

  8. http://www.spektrum.de/news/big-nudging-zur-problemloesung-wenig-geeignet/1375930, accessed on January 24, 2016.

  9. http://www.theguardian.com/commentisfree/2011/jul/19/nudge-is-not-enough-behaviour-change, accessed on January 24, 2016.

  10. http://www.computerworld.com/article/2990203/security/aclu-orwellian-citizen-score-chinas-credit-score-system-isa-warning-for-americans.html, accessed on January 24, 2016.

  11. Moreover, as the Stanford Prison Experiment has shown, any system that creates too much difference in power between those who decide and those who have to obey will sooner or later turn bad and get out of control.

  12. http://www.pnas.org/content/112/33/E4512.abstract, accessed on January 25, 2016.

  13. https://netzpolitik.org/2016/un-sonderberichterstatter-kritisieren-frankreichs-flaechendeckende-ueberwachung/ and http://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=16966&LangID=E, accessed on January 25, 2016.

  14. http://www.focus.de/politik/ausland/kontaktverbot-und-umsiedlung-terror-notstand-geheimer-notfallplan-koennteorban-zu-ungeahnter-macht-verhelfen_id_5234034.html, accessed on January 25, 2016.

  15. D. Helbing and E. Pournaras, Build Digital Democracy, Nature 527, 33–34 (2015): http://www.nature.com/news/society-build-digital-democracy-1.18690

  16. H.H. Nax and A.B. Schorr, Democracy-growth dynamics for richer and poorer countries, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2698287, accessed on January 24, 2016.

  17. http://www.zeit.de/2015/25/singapur-image-innovation-unterwelt, accessed January 24, 2016.

  18. http://m.spiegel.de/wirtschaft/a-1072576.html, accessed on January 24, 2016, and http://www.forbes.com/forbes/welcome/#version:realtime, accessed on January 18, 2016.

  19. http://futureoflife.org/open-letter-autonomous-weapons/, accessed on January 24, 2016.

  20. D. Helbing and E. Pournaras, Build Digital Democracy, Nature 527, 33–34 (2015): http://www.nature.com/news/society-build-digital-democracy-1.18690; http://futurict.blogspot.ch/2014/09/creating-making-planetary-nervous.html; http://futurict.blogspot.ch/2015/08/smart-data-running-internet-of-things.html; http://futurict.blogspot.ch/2016/01/nervousnet-towards-open-and.html; all accessed January 24, 2016.

  21. For a detailed discussion see: D. Helbing, Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies, Jusletter IT (2015), see http://papers.ssrn.com/soL3/papers.cfm?abstract_id=2594352; D. Helbing, B.S. Frey, G. Gigerenzer, E. Hafen, M. Hagner, Y. Hofstetter, J. van den Hoven, R.V. Zicari and A. Zwitter, Digitale Demokratie statt Datendiktatur, Spektrum der Wissenschaft 1/2016, see http://www.spektrum.de/pdf/digital-manifest/1376682


Author information

Correspondence to Dirk Helbing.

Appendix: Some Common Pitfalls of Data-Driven Technologies

In the past couple of years, the concept of Big-Data-driven and Artificial-Intelligence-based Smart Nations has spread around the globe. Without any doubt, these technologies offer interesting potential to improve political decision-making and the state of the world. However, there are also a number of issues that need to be considered (see Note 21):

4.1.1 Big Data Analytics

  • In classification problems, errors of the first and second kind (false positives and false negatives) will occur, which implies unfairness if decisions cannot be challenged and corrected. Current algorithms to identify terrorists are actually quite bad: they produce lists of suspects that are far too long, and "one no longer sees the trees for the forest."

  • Using more data is not necessarily better: it may lead to over-fitting. Large datasets always contain some patterns and correlations that arise by coincidence. In many cases these patterns are meaningless, or they do not imply causality. This can lead to wrong conclusions if statistical significance and causality are not ensured (as is often the case today).

  • Some data-driven findings may lead to decisions that discriminate against people and thereby violate the constitution or the law. Suppose we let people pay different health insurance rates depending on what they eat. Then we will almost certainly end up with different rates for women and men, and for Christians, Muslims, and Jews. Such implicit discrimination must be avoided, but common Big Data methods do not guard against it.

4.1.2 Artificial Intelligence (AI)

Such systems can handle huge amounts of information, but:

  • errors may still occur due to irrelevant, inconsistent, incomplete, ambiguous, or context-dependent information;

  • the goal function may be improperly specified, and modifying the goal function will often produce completely different results as a consequence of "parameter sensitivity"; this makes results subjective, i.e. dependent on the person who controls the AI system;

  • if AI systems are not programmed as tools but are able to learn and evolve, they may start to make unpredictable decisions and behave maliciously;

  • if people are involved in defining the training data, they may intentionally or unintentionally introduce biases that are not accounted for, as we currently lack suitable institutional checks and balances for such training. If people are not directly involved in selecting the training data, machine intelligence may run into problems similar to those of children who have not received proper moral education or coaching by adults.
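The "parameter sensitivity" point above can be illustrated with a toy goal function; the options, their scores, and the weights are made up purely for illustration:

```python
# Sketch: the same data, scored under two slightly different goal functions,
# can rank the options in opposite order. All values are hypothetical.
options = {
    "plan_a": {"benefit": 10.0, "risk": 4.0},
    "plan_b": {"benefit": 7.0, "risk": 1.0},
}

def score(option, risk_weight):
    # Goal function: maximize benefit minus weighted risk.
    return option["benefit"] - risk_weight * option["risk"]

for w in (0.5, 2.0):
    best = max(options, key=lambda k: score(options[k], w))
    print(f"risk_weight={w}: best option is {best}")
```

Changing only the weight given to risk flips the ranking of the two options, so whoever sets the goal function's parameters effectively controls the outcome.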

4.1.3 Big Nudging

"Big nudging" combines Big Data about a population, AI, and methods from behavioral economics (such as "nudging") to manipulate people's decision-making and behavior.

  • These systems can be used to induce people to make costly mistakes (e.g. to spend money on things they don't need, or to undermine the security of IT systems).

  • They can be used to manipulate public opinion and democratic elections by means of an almost unnoticeable kind of propaganda and censorship, employing principles from attention economics.

  • They amplify the power of those who are allowed to use the system to an extent that is hardly controllable. For example, they can be used for a digital power grab, i.e. to establish and/or stabilize autocratic regimes, which can then exploit the data asymmetry to weaken the rule of law or democratic arrangements.

Altogether, the problem with the above three approaches is that their validity is over-rated. They give very few people an extreme amount of power, yet are very hard to control. In principle, they can be misused as a "weapon" against a state's own population. Using "big methods" implies the likelihood of making big mistakes; it is just a matter of time until they happen.


Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Helbing, D. (2019). Machine Intelligence: Blessing or Curse? It Depends on Us! In: Helbing, D. (eds) Towards Digital Enlightenment. Springer, Cham. https://doi.org/10.1007/978-3-319-90869-4_4


  • DOI: https://doi.org/10.1007/978-3-319-90869-4_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-90868-7

  • Online ISBN: 978-3-319-90869-4

  • eBook Packages: Social Sciences; Social Sciences (R0)
