Abstract
Group activity recognition aims to understand the activity performed by multiple people in a video. However, most existing methods require predefined individual labels during training or testing, which is impractical, and they consider only visual features while ignoring the corresponding semantic information. To address these issues, a Semantic Content Guiding Teacher-Student (SCGTS) network is developed. SCGTS depends neither on predefined individual labels nor on any detection method. It employs a large-scale language model as the teacher network to extract semantic content features from textual descriptions of the labels; these features then supervise the training of the baseline network, which serves as the student. In this way, the student network is encouraged to mimic the teacher and extract visual features that carry semantic information. Experiments on two challenging benchmarks, Volleyball and NBA, show that SCGTS outperforms the baseline network and achieves leading performance.
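The abstract describes supervising a visual student network with semantic content features that a frozen language-model teacher produces from the textual label descriptions. Below is a minimal sketch of that idea, assuming PyTorch-style modules; the class names, the projection head, the cosine-alignment term, and the weighting factor `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of the semantic-content-guiding idea (hypothetical names/shapes).
# The teacher (a frozen language model) is assumed to have produced one text
# embedding per activity label offline; the student is any video backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCGTSStudent(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, text_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                    # student video feature extractor
        self.proj = nn.Linear(feat_dim, text_dim)   # maps visual features into the text space
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):
        feats = self.backbone(clips)                # (B, feat_dim) clip-level visual features
        return self.classifier(feats), self.proj(feats)

def scgts_loss(logits, proj_feats, labels, label_text_emb, alpha=1.0):
    """Cross-entropy on activity labels plus an alignment term that pulls the
    student's projected visual features toward the teacher's text embedding of
    the ground-truth label (a stand-in for the paper's semantic supervision)."""
    ce = F.cross_entropy(logits, labels)
    target = label_text_emb[labels]                 # (B, text_dim) frozen teacher features
    align = 1.0 - F.cosine_similarity(proj_feats, target, dim=-1).mean()
    return ce + alpha * align
```

In a training loop, `label_text_emb` would hold the frozen teacher outputs (e.g., language-model embeddings of the label descriptions), so gradients flow only through the student; this mirrors the teacher-student supervision the abstract describes without requiring individual labels or a person detector.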
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grants 62236010, 61976010, 62106011, 62106010, and 62176011.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Xi, Z., Shi, G., Wu, L., Li, X. (2023). SCGTS: Semantic Content Guiding Teacher-Student Network for Group Activity Recognition. In: Yongtian, W., Lifang, W. (eds) Image and Graphics Technologies and Applications. IGTA 2023. Communications in Computer and Information Science, vol 1910. Springer, Singapore. https://doi.org/10.1007/978-981-99-7549-5_10
DOI: https://doi.org/10.1007/978-981-99-7549-5_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7548-8
Online ISBN: 978-981-99-7549-5