Abstract
This paper presents part of an architecture developed by Huawei that proposes the first Christmas tree endowed with artificial intelligence. The system identifies facial expressions in images acquired through a mobile application and recognizes the subject's sentiment; based on the prevailing sentiment, the tree then lights itself up with different special effects. Our task in the project was to test the performance of the neural networks employed in the mobile application for facial emotion recognition. We used a convolutional neural network model and created a purpose-built dataset of images to evaluate recognition performance.
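The pipeline the abstract describes (face image in, emotion label out via a convolutional network) can be illustrated with a minimal sketch. This is not the authors' network: the layer shapes, the 48x48 grayscale input, the seven-class emotion set, and the random weights are all illustrative assumptions, showing only the forward pass of a toy one-layer CNN classifier.

```python
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def conv2d(img, kernel):
    # "valid" convolution of a 2-D grayscale image with a square kernel
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    # non-overlapping max pooling, cropping any remainder
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_emotion(face, kernels, weights, bias):
    # one conv layer -> ReLU -> max-pool -> flatten -> dense softmax head
    feats = np.concatenate(
        [max_pool(relu(conv2d(face, k))).ravel() for k in kernels]
    )
    return softmax(feats @ weights + bias)

rng = np.random.default_rng(0)
face = rng.random((48, 48))                 # hypothetical 48x48 face crop
kernels = rng.standard_normal((4, 3, 3)) * 0.1
feat_dim = 4 * 23 * 23                      # 46x46 conv maps pooled to 23x23
weights = rng.standard_normal((feat_dim, len(EMOTIONS))) * 0.01
bias = np.zeros(len(EMOTIONS))

probs = predict_emotion(face, kernels, weights, bias)
print(EMOTIONS[int(np.argmax(probs))])
```

In a real deployment the kernels and dense weights would be learned from a labeled facial-expression dataset; the argmax over the softmax output selects the prevailing emotion that would drive the tree's lighting effects.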
© 2019 Springer Nature Switzerland AG
La Porta, S., Marconi, F., Lazzini, I. (2019). Collecting Retail Data Using a Deep Learning Identification Experience. In: Cristani, M., Prati, A., Lanz, O., Messelodi, S., Sebe, N. (eds) New Trends in Image Analysis and Processing – ICIAP 2019. ICIAP 2019. Lecture Notes in Computer Science(), vol 11808. Springer, Cham. https://doi.org/10.1007/978-3-030-30754-7_28