Deep Group Residual Convolutional CTC Networks for Speech Recognition

  • Kai Wang
  • Donghai Guan
  • Bohan Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11323)

Abstract

End-to-end deep neural networks have been widely used in the literature to model 2D correlations in the audio signal. Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks have shown improvements across a wide variety of speech recognition tasks. In particular, CNNs effectively exploit temporal and spectral local correlations to gain translation invariance. However, the CNNs used in existing work treat each channel's feature map as independent of the others, which may not fully utilize and combine the information in the input features. Meanwhile, most CNNs in the literature use shallow layers and may not be deep enough to capture all the information in the human speech signal. In this paper, we propose a novel neural network, denoted GRCNN-CTC, which integrates group residual convolutional blocks and recurrent layers trained with the Connectionist Temporal Classification (CTC) loss. Experimental results show that our proposed GRCNN-CTC achieves a 1.11% Word Error Rate (WER) and a 0.48% Character Error Rate (CER) improvement on a subset of the LibriSpeech dataset compared to the baseline automatic speech recognition (ASR) system. In addition, our model greatly reduces computational overhead and converges faster, making it easier to scale up to deeper architectures.
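
The paper's exact architecture is not reproduced on this page, but the idea the abstract describes can be sketched concretely. Below is a minimal PyTorch sketch, not the authors' implementation: a residual block whose convolutions are grouped (each group of channels is convolved independently, with the identity shortcut combining the result), stacked in front of bidirectional GRU layers and trained with the CTC loss. All names (GroupResidualBlock, GRCNNCTC) and hyperparameters (channel count, group count, block depth, hidden size, 29 output classes) are illustrative assumptions, not the paper's settings.

    import torch
    import torch.nn as nn

    class GroupResidualBlock(nn.Module):
        # Residual block whose 3x3 convolutions use grouped channels, so each
        # group of feature maps is convolved independently (ResNeXt-style).
        def __init__(self, channels, groups=4):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=groups, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=groups, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # identity shortcut merges the groups

    class GRCNNCTC(nn.Module):
        # Group-residual CNN front end -> bidirectional GRU -> linear CTC head.
        def __init__(self, n_mels=64, channels=32, hidden=256,
                     n_classes=29, n_blocks=4, groups=4):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            self.blocks = nn.Sequential(
                *[GroupResidualBlock(channels, groups) for _ in range(n_blocks)])
            self.rnn = nn.GRU(channels * n_mels, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * hidden, n_classes)  # index 0 = CTC blank

        def forward(self, x):                  # x: (batch, 1, n_mels, frames)
            f = self.blocks(self.stem(x))      # (B, C, F, T)
            b, c, freq, t = f.shape
            f = f.permute(0, 3, 1, 2).reshape(b, t, c * freq)  # per-frame features
            h, _ = self.rnn(f)                 # (B, T, 2*hidden)
            return self.fc(h).log_softmax(-1)  # per-frame log-probabilities

    # One training step against the CTC loss, on dummy data.
    model = GRCNNCTC()
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    x = torch.randn(2, 1, 64, 200)             # two fake 64-mel spectrograms
    log_probs = model(x).transpose(0, 1)       # nn.CTCLoss expects (T, B, C)
    targets = torch.randint(1, 29, (2, 30))    # fake character labels
    loss = ctc(log_probs, targets,
               input_lengths=torch.full((2,), 200, dtype=torch.long),
               target_lengths=torch.full((2,), 30, dtype=torch.long))
    loss.backward()

Grouped convolutions are one plausible source of the reduced computational overhead the abstract mentions: a 3x3 convolution split into G groups uses roughly 1/G of the multiply-accumulates of its dense counterpart, which makes stacking more residual blocks affordable.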

Keywords

Residual neural network · Group convolution · Gated recurrent unit · Connectionist temporal classification · Speech recognition

Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities (grants NS2018057 and NJ20160028).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  2. Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, China
  3. Jiangsu Easymap Geographic Information Technology Corp., Ltd., Yangzhou, China