A Recommender System for Complex Real-World Applications with Nonlinear Dependencies and Knowledge Graph Context
Most latent feature methods for recommender systems learn to encode user preferences and item characteristics from past user-item interactions. While such approaches work well for standalone items (e.g., books, movies), they are less well suited to composite systems. For example, in the context of industrial purchasing systems for engineering solutions, items can no longer be considered standalone. Instead, the latent representations need to encode the functionality and technical features of the engineering solutions that result from combining the individual components. Capturing these dependencies requires expressive, context-aware recommender systems. In this paper, we propose NECTR, a novel recommender system built from two components: a tensor factorization model and an autoencoder-like neural network. In the tensor factorization component, context information about the items is structured in a multi-relational knowledge base, encoded as a tensor, and latent item representations are extracted via tensor factorization. Simultaneously, the autoencoder-like component captures the nonlinear interactions among configured items. We couple both components so that the model can be trained end-to-end. To demonstrate the real-world applicability of NECTR, we conduct extensive experiments on an industrial dataset of automation solutions. The results show that NECTR outperforms state-of-the-art methods by approximately 50% on a set of standard performance metrics.
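The coupling described above can be illustrated with a minimal numpy sketch: a RESCAL-style tensor factorization reconstructs each relation slice of the knowledge-graph tensor from shared item embeddings, while an autoencoder over multi-hot solution configurations decodes through those same embeddings, tying the two losses together. All sizes, variable names, and the trade-off weight are hypothetical, and the forward pass below is an illustration of the general idea, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_rel, d = 6, 2, 4                 # hypothetical toy sizes
A = rng.normal(size=(n_items, d))           # latent item embeddings (shared)
R = rng.normal(size=(n_rel, d, d))          # per-relation core matrices
X = rng.integers(0, 2, size=(n_rel, n_items, n_items)).astype(float)  # KG tensor

# RESCAL-style reconstruction: each slice X_k is approximated by A @ R_k @ A.T
tensor_loss = sum(
    np.sum((X[k] - A @ R[k] @ A.T) ** 2) for k in range(n_rel)
)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Autoencoder over past solution configurations (multi-hot item vectors).
# The decoder reuses the item embeddings A, which couples both objectives.
W_enc = rng.normal(size=(n_items, d))       # hypothetical encoder weights
configs = rng.integers(0, 2, size=(5, n_items)).astype(float)  # 5 past solutions
H = np.tanh(configs @ W_enc)                # nonlinear encoding of each solution
recon = sigmoid(H @ A.T)                    # decode via the shared embeddings
ae_loss = -np.mean(
    configs * np.log(recon + 1e-9) + (1 - configs) * np.log(1 - recon + 1e-9)
)

alpha = 0.1                                 # hypothetical trade-off weight
joint_loss = ae_loss + alpha * tensor_loss  # single objective, trainable end-to-end
print(float(joint_loss) > 0)
```

Because both terms share `A`, gradient updates on the joint loss would push the item embeddings to be consistent with the knowledge-graph context and with the nonlinear co-occurrence patterns in configured solutions at the same time.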
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.