Estimation of Gaussian overlapping nuclear pulse parameters based on a deep learning LSTM model

  • Xing-Ke Ma
  • Hong-Quan Huang (corresponding author)
  • Qian-Cheng Wang
  • Jing Zhao
  • Fei Yang
  • Kai-Ming Jiang
  • Wei-Cheng Ding
  • Wei Zhou


A long short-term memory (LSTM) neural network has excellent learning ability and is well suited to time series such as nuclear pulse signals. It can accurately estimate parameters such as amplitude and time in digitally shaped nuclear pulse signals, especially signals from overlapping pulses. By learning the mapping between Gaussian overlapping pulses after digital shaping and the exponential pulses before shaping, an LSTM model can estimate the shaping parameters of overlapping exponential nuclear pulses. First, the Gaussian overlapping nuclear pulses (ONPs) whose parameters were to be estimated were produced by superposing multiple exponential nuclear pulses and then applying Gaussian digital shaping. Second, a dataset of many samples was generated; each sample contained the sequence of sampled values of a Gaussian ONP after digital shaping together with the set of shaping parameters of the exponential pulses before shaping. Third, the training set was used to train the LSTM model: the sampled values of the Gaussian ONPs served as the input, and the pulse parameters estimated by the current model were computed by forward propagation. Next, a loss function was used to calculate the loss between the network-estimated pulse parameters and the actual pulse parameters. A gradient-based optimization algorithm then fed the loss value and the gradient of the loss function back to the network to update the weights of the LSTM model, thereby training it. Finally, the sampled values of a Gaussian ONP whose shaping parameters were to be estimated were fed to the trained LSTM model, which output the required set of nuclear pulse parameters.
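The dataset-generation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian shaping here is a simple convolution with a Gaussian kernel as a stand-in for the paper's S–K digital shaper, and all pulse parameters (start times, amplitudes, decay constant, shaping width) are illustrative placeholders.

```python
import numpy as np

def exp_pulse(t, t0, amplitude, tau):
    """Single exponential-decay nuclear pulse starting at t0."""
    p = np.zeros_like(t)
    mask = t >= t0
    p[mask] = amplitude * np.exp(-(t[mask] - t0) / tau)
    return p

def gaussian_shape(signal, sigma, dt):
    """Approximate Gaussian digital shaping by convolving with a
    normalized Gaussian kernel (a stand-in for the S-K shaper)."""
    k_t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-k_t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

# Build one training sample: overlapping exponential pulses -> shaped ONP
dt = 0.01
t = np.arange(0.0, 10.0, dt)
params = [(1.0, 2.0, 0.5), (1.6, 1.2, 0.5)]  # (t0, amplitude, tau) per pulse
raw = sum(exp_pulse(t, t0, a, tau) for t0, a, tau in params)
shaped = gaussian_shape(raw, sigma=0.3, dt=dt)

x = shaped                                      # LSTM input sequence
y = np.array([p[:2] for p in params]).ravel()   # targets: (t0, amplitude) pairs
```

Repeating this with randomized pulse parameters yields the many (input sequence, parameter set) pairs that make up the training and test sets.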
In summary, the experimental results showed that the proposed method overcomes the local-convergence defect of traditional methods and can accurately extract parameters from multiple, severely overlapping Gaussian pulses, achieving globally optimal estimation of the nuclear pulse parameters. These results support the conclusion that this is an effective method for estimating nuclear pulse parameters.
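The training procedure described in the abstract — forward propagation, loss computation, and gradient-based weight updates — can be sketched with a small PyTorch model. The architecture, layer sizes, and the random toy batch below are assumptions for illustration only, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class PulseParamLSTM(nn.Module):
    """Maps a shaped-pulse sample sequence to a fixed set of pulse parameters."""
    def __init__(self, hidden_size=32, n_params=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_params)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last time step

model = PulseParamLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy batch: random sequences and parameter targets (placeholders for real data)
x = torch.randn(8, 200, 1)
y = torch.randn(8, 4)

for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)           # forward propagation + loss value
    loss.backward()                       # gradients via backpropagation through time
    opt.step()                            # gradient-based weight update
```

At inference time, the sampled values of a new Gaussian ONP are passed through the trained model to obtain the estimated parameter set directly.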


Nuclear pulses · S–K digital shaping · Deep learning · LSTM



Copyright information

© China Science Publishing & Media Ltd. (Science Press), Shanghai Institute of Applied Physics, the Chinese Academy of Sciences, Chinese Nuclear Society and Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Xing-Ke Ma (1)
  • Hong-Quan Huang (1) (corresponding author)
  • Qian-Cheng Wang (1)
  • Jing Zhao (1)
  • Fei Yang (1)
  • Kai-Ming Jiang (1)
  • Wei-Cheng Ding (1)
  • Wei Zhou (1)
  1. College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Chengdu, China
