Abstract
End-to-end deep neural networks have been widely used in the literature to model 2D correlations in the audio signal. Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks have shown improvements across a wide variety of speech recognition tasks. In particular, CNNs effectively exploit local temporal and spectral correlations to gain translation invariance. However, CNNs in existing work treat each channel's feature map as independent of the others, which may not fully utilize and combine information across input features. Moreover, most CNNs in the literature are shallow and may not be deep enough to capture all the information in the human speech signal. In this paper, we propose a novel neural network, denoted GRCNN-CTC, which integrates group residual convolutional blocks and recurrent layers paired with Connectionist Temporal Classification (CTC) loss. Experimental results show that our proposed GRCNN-CTC achieves 1.11% Word Error Rate (WER) and 0.48% Character Error Rate (CER) improvements on a subset of the LibriSpeech dataset compared to the baseline automatic speech recognition (ASR) system. In addition, our model greatly reduces computational overhead and converges faster, making it practical to scale to deeper architectures.
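For readers unfamiliar with the combination the abstract describes, the following is a minimal sketch in PyTorch (an assumption; the paper does not state its framework) of a grouped residual convolutional block feeding a bidirectional LSTM and a CTC head. All hyperparameters (channel and group counts, block depth, hidden size, label set size) are illustrative and are not the paper's actual configuration.

import torch
import torch.nn as nn

class GroupResidualBlock(nn.Module):
    """Residual block built from grouped 3x3 convolutions (ResNeXt-style groups)."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut around the grouped convs

class GRCNNCTC(nn.Module):
    """Hypothetical GRCNN-CTC sketch: grouped residual conv stack -> BiLSTM -> CTC head."""
    def __init__(self, n_mels: int = 80, channels: int = 32,
                 n_blocks: int = 4, hidden: int = 256, n_classes: int = 29):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[GroupResidualBlock(channels)
                                      for _ in range(n_blocks)])
        self.rnn = nn.LSTM(channels * n_mels, hidden,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # n_classes includes the CTC blank

    def forward(self, spec):  # spec: (batch, 1, n_mels, time)
        x = self.blocks(self.stem(spec))
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (batch, time, features)
        x, _ = self.rnn(x)
        return self.fc(x).log_softmax(-1)  # per-frame log-probs for CTC

# Toy usage with nn.CTCLoss (random data, illustrative only)
model = GRCNNCTC()
spec = torch.randn(2, 1, 80, 120)            # fake log-mel spectrogram batch
log_probs = model(spec).transpose(0, 1)      # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, 29, (2, 30))      # labels 1..28; 0 is reserved for blank
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((2,), 120),
                           target_lengths=torch.full((2,), 30))
print(loss.item())

The convolutions here preserve the time dimension (stride 1, padding 1), so the input lengths passed to CTCLoss equal the spectrogram length; a real system would typically downsample in time and rescale the lengths accordingly.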
Acknowledgements
This work was supported by the Fundamental Research Funds for the Central Universities under grants NS2018057 and NJ20160028.
Cite this paper
Wang, K., Guan, D., Li, B. (2018). Deep Group Residual Convolutional CTC Networks for Speech Recognition. In: Gan, G., Li, B., Li, X., Wang, S. (eds) Advanced Data Mining and Applications. ADMA 2018. Lecture Notes in Computer Science(), vol 11323. Springer, Cham. https://doi.org/10.1007/978-3-030-05090-0_27