Acoustic Monitoring – A Deep LSTM Approach for a Material Transport Process

  • Adnan Husaković (corresponding author)
  • Anna Mayrhofer
  • Eugen Pfann
  • Mario Huemer
  • Andreas Gaich
  • Thomas Kühas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12014)

Abstract

Robust classification strongly depends on the combination of properly chosen features and the classification algorithm. This paper investigates an autoencoder for feature fusion together with recurrent neural networks, in particular Long Short-Term Memory (LSTM) networks in different configurations, applied to a dataset from a material transport process. As an important outcome, the investigations show that using features taken from the autoencoder bottleneck layer in combination with a bidirectional LSTM significantly improves classification performance while requiring fewer features than standard machine learning algorithms.

Keywords

Autoencoder · Deep learning · Feature fusion · LSTM · Signal processing

Acknowledgment

This work has been supported by the COMET-K2 “Center for Symbiotic Mechatronics” of the Linz Center of Mechatronics (LCM) funded by the Austrian federal government and the federal state of Upper Austria.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Adnan Husaković¹ (corresponding author)
  • Anna Mayrhofer¹
  • Eugen Pfann²
  • Mario Huemer²
  • Andreas Gaich²
  • Thomas Kühas¹
  1. Primetals Technologies Austria GmbH, Linz, Austria
  2. Institute of Signal Processing, Johannes Kepler University, Linz, Austria
