
Blended Automation: The Language-Game of Psychoanalytic Automatism and Cybernetic Automata

  • Vassilis Galanos
Chapter
Part of the Springer Series in Cognitive and Neural Systems book series (SSCNS, volume 12)

Abstract

Situated within the context of the emerging concept of blended cognition between human and artificial entities, this brief paper suggests that emphasis should be placed on the concept of automation in humans and (artificially intelligent) machines in order to understand clearly how unclear the opposition between the two categories is. After a short introduction on the relevance of artificial intelligence at various levels of the social fabric, such as policymaking and business strategy, and a discussion of the ill-defined conceptualization of several related terms such as “artificial,” “intelligence,” and “consciousness,” the paper investigates “automation,” “automatism,” “automaton,” and “autism” from psychoanalytic, cybernetic, control-systems-theoretical, and psychological perspectives, in the context of a Wittgensteinian language-game, drawing comparisons which in turn result in a number of remarks on the relationship between human and artificial agents. First, it appears that human consciousness is deeply associated with non-automatic behavior while the human unconscious is related to automatic expressions, whereas machine behavior is deeply associated with automatic function, with unconsciousness being its only form of expression. Suggesting, however, a flatter ontology based on the concept of the infosphere, treating both humans and machines as inforgs, the paper recommends thinking about the unexplained part left in the model of the following quadrant: (a) human non-automatic conscious, (b) human automatic unconscious, (c) machine automatic “conscious,” and (d) X. The paper concludes that borrowing the algebraic method of cross-multiplication, multiplying (a) by (c) and dividing the result by (b), that is, investigating empirically the everyday interactions of conscious humans and unconscious machines and then analyzing the results with respect to unconscious human behaviors, will help create a clearer view of what constitutes X, that is, what is actually feared in the expression of erratic machine behavior sometimes mistakenly understood as conscious.
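Stated compactly, the quadrant and the cross-multiplication described above can be sketched as follows. This is a minimal restatement of the abstract’s own analogy: the arrangement of the four cells into rows and columns is an assumption made here for illustration only, and the formula simply records the stated operation of multiplying (a) by (c) and dividing by (b).

\[
\begin{array}{c|cc}
 & \text{human} & \text{machine} \\ \hline
\text{conscious} & (a)\ \text{non-automatic} & (c)\ \text{automatic ``conscious''} \\
\text{unconscious} & (b)\ \text{automatic} & (d)\ X
\end{array}
\qquad\Longrightarrow\qquad
X = \frac{(a)\times(c)}{(b)}
\]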

Keywords

Artificial intelligence · Blended automation · Blended cognition · Cybernetics · Philosophy of information · Psychoanalysis


Copyright information

© Springer Nature Switzerland AG (outside the USA) 2019

Authors and Affiliations

  1. Science, Technology and Innovation Studies Subject Group, School of Social and Political Science, University of Edinburgh, Edinburgh, UK
