Interperforming in AI: question of ‘natural’ in machine learning and recurrent neural networks

Abstract

This article offers a critical inquiry into contemporary neural network models as an instance of machine learning, from an interdisciplinary perspective joining AI studies and performativity. It shows how the architecture of these networks is limited by its misemployment of 'natural' performance, and it proposes, from a performative approach, treating 'context' as a variable rather than a constant. The article begins with a brief review of machine learning-based natural language processing systems and then concentrates on the relevant model of recurrent neural networks, which is applied in much commercial research, such as that of Facebook AI Research (FAIR). It demonstrates that recurrent nets do not take into account the logic of performativity, an integral part of human performance and languaging, and it argues that recurrent network models in particular fail to grasp human performativity. This logic works similarly to the theory of performativity articulated by Jacques Derrida in his critique of John L. Austin's concept of the performative. Drawing on Derrida's work on performativity, and on linguistic traces as the spatially organized entities that make this notion of performance possible, the article argues that recurrent nets fall into the trap of taking 'context' as a constant, treating human performance as a 'natural' given to be encoded rather than as performative. Lastly, the article applies this proposal more concretely to the case of Facebook AI Research's Alice and Bob.
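
To make the technical object concrete, the following minimal sketch (in Python with NumPy) shows the recurrence at the core of a vanilla recurrent network. It is an illustration under assumptions, not code from any system discussed here: the sizes, weights and names (hidden_size, rnn_step, and so on) are hypothetical. The point it makes is the one argued above: whatever 'context' the network carries is compressed into a single fixed-size state vector, updated at every step by the same constant parameters.

    # Illustrative vanilla RNN cell (hypothetical; not FAIR's code).
    import numpy as np

    rng = np.random.default_rng(0)
    hidden_size, input_size = 8, 4

    # Parameters are learned once and then held constant across all contexts.
    W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))
    b = np.zeros(hidden_size)

    def rnn_step(h, x):
        """One recurrence: the entire prior 'context' must fit inside h."""
        return np.tanh(W_h @ h + W_x @ x + b)

    h = np.zeros(hidden_size)                   # empty 'context'
    for x in rng.normal(size=(5, input_size)):  # five toy word vectors
        h = rnn_step(h, x)

    print(h)  # the model's only memory of everything 'said' so far

On this sketch, 'context' exists for the model only as the numerical state h together with the fixed weights: this is the precise sense in which recurrent nets can be said to treat context as a constant rather than as a variable of performance.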

Fig. 1
Fig. 2 (Source: Gao et al. 2018)
Fig. 3 (Credit: Facebook AI Research)
Fig. 4 (Credit: Facebook AI Research)

Change history

  • 18 October 2019

    The original version of this article was revised due to a retrospective Open Access cancellation.

Notes

  1. FAIR published its early MTD research findings on the Facebook Code Blog, available at https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-bots-to-negotiate/.

References

  1. Allen JF (2006) Natural language processing. In: Encyclopedia of cognitive science

  2. Austin JL (1962) How to do things with words. The William James lectures delivered at Harvard University in 1955. Clarendon Press, Oxford

  3. Bennett IM, Babu BR, Morkhandikar K, Gururaj P (2003) US Patent no. 6,665,640. US Patent and Trademark Office, Washington, DC

  4. Chowdhury GG (2003) Natural language processing. Ann Rev Inf Sci Technol 37(1):51–89

  5. Conneau A, Schwenk H, Barrault L, LeCun Y (2016) Very deep convolutional networks for natural language processing. arXiv preprint

  6. Conneau A, Kiela D, Schwenk H, Barrault L, Bordes A (2017) Supervised learning of universal sentence representations from natural language inference data. arXiv preprint. arXiv:1705.02364

  7. Danaher J (2018) Toward an ethics of AI assistants: an initial framework. Philos Technol 31(4):629–653

  8. Derrida J (1988) Signature event context. Limited Inc. Northwestern University Press, Evanston

  9. Gao M, Shi G, Li S (2018) Online prediction of ship behavior with automatic identification system sensor data using bidirectional long short-term memory recurrent neural network. Sensors 18:4211. https://doi.org/10.3390/s18124211

  10. IBM (2018) The new AI innovation equation. IBM Blog. https://ibm.com/watson/advantage-reports/future-of-artificial-intelligence/ai-innovation-equation.html

  11. Kelly K, IBM (2018) What's next for AI? Q&A with Wired co-founder Kevin Kelly. IBM Blog. https://ibm.com/watson/advantage-reports/future-of-artificial-intelligence/kevin-kelly.html

  12. Leviathan Y, Matias Y (2018) Google Duplex: an AI system for accomplishing real-world tasks over the phone. Google AI Blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html

  13. Lewis M, Yarats D, Dauphin YN, Parikh D, Batra D (2017) Deal or no deal? Training AI bots to negotiate. Facebook Code. https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-bots-to-negotiate/

  14. Michaely AH, Zhang X, Simko G, Parada C, Aleksic P (2017) Keyword spotting for Google Assistant using contextual speech recognition. In: 2017 IEEE automatic speech recognition and understanding workshop (ASRU). IEEE, pp 272–278

  15. Mikolov T, Karafiát M, Burget L, Černocký J, Khudanpur S (2010) Recurrent neural network based language model. In: Proceedings of Interspeech, vol 2, p 3

  16. van den Oord A, Li Y, Babuschkin I, Simonyan K, Vinyals O, Kavukcuoglu K, Casagrande N (2017) Parallel WaveNet: fast high-fidelity speech synthesis. arXiv preprint. arXiv:1711.10433

  17. Russell S, Norvig P (2016) Artificial intelligence: a modern approach (global 3rd edition). Pearson, Essex

  18. Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois Press, Urbana

  19. Tang D, Qin B, Liu T (2015) Document modeling with gated recurrent neural network for sentiment classification. In: Proceedings of the conference on empirical methods in natural language processing (EMNLP), pp 1422–1432

  20. Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460

  21. Yang Z, Zhang S, Urbanek J, Feng W, Miller AH, Szlam A, Weston J (2017) Mastering the Dungeon: grounded language learning by Mechanical Turker Descent. arXiv preprint. arXiv:1711.07950

Acknowledgements

I would like to express my gratitude to my beloved one, who both visibly and invisibly interperformed with me in numerous spaces before, during and after the development of this manuscript. I should also thank my professor Denise Albanese (George Mason University) for her invaluable help during the initial revisions of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial or non-profit sectors.

Author information

Corresponding author

Correspondence to Tolga Yalur.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original version of this article was revised due to a retrospective Open Access Cancellation.

About this article

Cite this article

Yalur, T. Interperforming in AI: question of ‘natural’ in machine learning and recurrent neural networks. AI & Soc 35, 737–745 (2020). https://doi.org/10.1007/s00146-019-00910-1

Keywords

  • Performativity
  • Machine learning
  • Natural language processing
  • Recurrent neural networks
  • Derrida
  • Facebook