
Interperforming in AI: question of ‘natural’ in machine learning and recurrent neural networks

  • Tolga Yalur
Student Section

Abstract

This article offers a critical inquiry into contemporary neural network models as an instance of machine learning, from an interdisciplinary perspective joining AI studies and performativity. It shows the architectural limits of these network systems that follow from their misemployment of 'natural' performance, and it proposes, from a performative approach, treating 'context' as a variable rather than a constant. The article begins with a brief review of machine learning-based natural language processing systems and then concentrates on recurrent neural networks, the model applied in much commercial research, such as that of Facebook AI Research. It demonstrates that recurrent nets do not take into account the logic of performativity, an integral part of human performance and languaging, and it argues that recurrent network models in particular fail to grasp human performativity. This logic follows the theory of performativity articulated by Jacques Derrida in his critique of John L. Austin's concept of the performative. Drawing on Derrida's account of performativity, and of linguistic traces as spatially organized entities that make such performance possible, the article argues that recurrent nets fall into the trap of taking 'context' as a constant, treating human performance as a 'natural' fixture to be encoded rather than as performative. Lastly, the article applies this proposal more concretely to the case of Facebook AI Research's negotiating bots, Alice and Bob.
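To make the architectural claim above concrete: in a recurrent net, whatever 'context' persists across a sequence is compressed into a single hidden-state vector of fixed size, updated by the same weights at every step. Below is a minimal sketch of such a cell in NumPy; the class name SimpleRNNCell and all parameters are illustrative assumptions, not code from the article or from Facebook AI Research.

  # Minimal Elman-style recurrent cell (illustrative sketch, not the paper's code).
  # The point: 'context' is one fixed-size vector h, updated by the same
  # weights at every time step.
  import numpy as np

  class SimpleRNNCell:
      def __init__(self, input_size, hidden_size, seed=0):
          rng = np.random.default_rng(seed)
          # One fixed set of parameters, reused at every step of every sequence.
          self.W_xh = rng.normal(0.0, 0.1, (hidden_size, input_size))
          self.W_hh = rng.normal(0.0, 0.1, (hidden_size, hidden_size))
          self.b_h = np.zeros(hidden_size)

      def step(self, x, h):
          # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b): all prior 'context' must
          # fit into h, whose size never changes.
          return np.tanh(self.W_xh @ x + self.W_hh @ h + self.b_h)

      def run(self, inputs):
          h = np.zeros(self.W_hh.shape[0])  # context starts empty
          for x in inputs:                  # same update applied at each step
              h = self.step(x, h)
          return h                          # fixed-size summary of the sequence

  # Usage: five 4-dimensional tokens collapse into one 8-dimensional context.
  cell = SimpleRNNCell(input_size=4, hidden_size=8)
  sequence = np.random.default_rng(1).normal(size=(5, 4))
  print(cell.run(sequence).shape)  # (8,) regardless of sequence length

However long the input, its entire history must pass through the fixed-size vector h; the dimensionality and update rule of the 'context' are set in advance, which is one way to read the article's claim that recurrent nets treat context as a constant rather than as a performative variable.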

Keywords

Performativity · Machine learning · Natural language processing · Recurrent neural networks · Derrida · Facebook

Notes

Acknowledgements

I would like to express my gratitude to my beloved one, who both visibly and invisibly interperformed with me in numerous spaces before, during and after the development of this manuscript. I also thank my professor Denise Albanese (George Mason University) for her invaluable help with the initial revisions of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial or non-profit sectors.

References

  1. Allen JF (2006) Natural language processing. In: Encyclopedia of cognitive science
  2. Austin JL (1962) How to do things with words. The William James lectures delivered at Harvard University in 1955. Clarendon Press, Oxford
  3. Bennett IM, Babu BR, Morkhandikar K, Gururaj P (2003) US Patent no. 6,665,640. US Patent and Trademark Office, Washington, DC
  4. Chowdhury GG (2003) Natural language processing. Annu Rev Inf Sci Technol 37(1):51–89
  5. Conneau A, Schwenk H, Barrault L, Lecun Y (2016) Very deep convolutional networks for natural language processing. arXiv preprint
  6. Conneau A, Kiela D, Schwenk H, Barrault L, Bordes A (2017) Supervised learning of universal sentence representations from natural language inference data. arXiv preprint. arXiv:1705.02364
  7. Danaher J (2018) Toward an ethics of AI assistants: an initial framework. Philos Technol 31(4):629–653
  8. Derrida J (1988) Signature event context. In: Limited Inc. Northwestern University Press, Evanston
  9. Gao M, Shi G, Li S (2018) Online prediction of ship behavior with automatic identification system sensor data using bidirectional long short-term memory recurrent neural network. Sensors 18:4211. https://doi.org/10.3390/s18124211
  10. Kelly K, IBM (2018) What's next for AI? Q&A with the co-founder of Wired Kevin Kelly. IBM Blog. https://ibm.com/watson/advantage-reports/future-of-artificial-intelligence/kevin-kelly.html
  11. Leviathan Y, Matias Y (2018) Google Duplex: an AI system for accomplishing real-world tasks over the phone. Google AI Blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html
  12. Lewis M, Yarats D, Dauphin YN, Parikh D, Batra D (2017) Deal or no deal? Training AI bots to negotiate. Facebook Code. https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-bots-to-negotiate/
  13. Michaely AH, Zhang X, Simko G, Parada C, Aleksic P (2017) Keyword spotting for Google Assistant using contextual speech recognition. In: 2017 IEEE automatic speech recognition and understanding workshop (ASRU). IEEE, pp 272–278
  14. Mikolov T, Karafiát M, Burget L, Černocký J, Khudanpur S (2010) Recurrent neural network based language model. In: Proceedings of Interspeech 2010, pp 1045–1048
  15. Oord AVD, Li Y, Babuschkin I, Simonyan K, Vinyals O, Kavukcuoglu K, Casagrande N (2017) Parallel WaveNet: fast high-fidelity speech synthesis. arXiv preprint. arXiv:1711.10433
  16. Russell S, Norvig P (2016) Artificial intelligence: a modern approach, global 3rd edn. Pearson, Essex
  17. Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois Press, Urbana, IL
  18. Tang D, Qin B, Liu T (2015) Document modeling with gated recurrent neural network for sentiment classification. In: Proceedings of the 2015 conference on empirical methods in natural language processing (EMNLP), pp 1422–1432
  19. Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460
  20. Yang Z, Zhang S, Urbanek J, Feng W, Miller AH, Szlam A, Weston J (2017) Mastering the Dungeon: grounded language learning by Mechanical Turker Descent. arXiv preprint. arXiv:1711.07950

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Cultural Studies, George Mason University, Fairfax, USA
