
Neural Network Approach for Semantic Coding of Words

  • Conference paper
  • First Online:
Lecture Notes in Computational Intelligence and Decision Making (ISDMCI 2019)

Abstract

This article examines and analyzes the use of the word2vec method for solving semantic coding problems. The task of semantic coding has acquired particular importance with the development of search systems; the relevance of such technologies stems primarily from the need to search large-volume databases. Based on the practical results obtained, a prototype search system has been developed that uses the extracted semantic information to perform relevant search over a database of documents. Two main scenarios for implementing such a search are proposed. The training set was prepared from the English-language version of Wikipedia and includes more than 100,000 original articles. The resulting set was used in the experimental part of the work to test the effectiveness of the developed prototype search system.
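The core idea in the abstract — representing words as dense vectors (as word2vec does) and ranking documents by vector similarity to a query — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the word vectors below are invented toy values, whereas in the paper's setting they would come from a word2vec model trained on the ~100,000-article Wikipedia corpus.

```python
import math

# Toy 3-dimensional word vectors standing in for learned word2vec
# embeddings; real embeddings are typically 100-300 dimensions.
WORD_VECTORS = {
    "neural":   [0.9, 0.1, 0.0],
    "network":  [0.8, 0.2, 0.1],
    "semantic": [0.1, 0.9, 0.2],
    "coding":   [0.2, 0.8, 0.1],
    "banana":   [0.0, 0.1, 0.9],
}

def embed(text):
    """Embed a text as the average of its known words' vectors."""
    vecs = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query, documents):
    """Rank documents by cosine similarity of embeddings to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = ["semantic coding", "neural network", "banana"]
print(search("semantic word coding", docs))
```

Because similarity is computed in the embedding space rather than by exact term matching, a query can retrieve documents that share no literal keywords with it, provided their words have nearby vectors — which is the basis of the relevant-search scenarios described in the paper.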



Corresponding author

Correspondence to Myroslav Komar.

Copyright information

© 2020 Springer Nature Switzerland AG


Cite this paper

Golovko, V., Kroshchanka, A., Komar, M., Sachenko, A. (2020). Neural Network Approach for Semantic Coding of Words. In: Lytvynenko, V., Babichev, S., Wójcik, W., Vynokurova, O., Vyshemyrskaya, S., Radetskaya, S. (eds) Lecture Notes in Computational Intelligence and Decision Making. ISDMCI 2019. Advances in Intelligent Systems and Computing, vol 1020. Springer, Cham. https://doi.org/10.1007/978-3-030-26474-1_45
