
Recurrent Auto-Encoder Model for Large-Scale Industrial Sensor Signal Analysis

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 893)

Abstract

The recurrent auto-encoder model summarises sequential data through an encoder structure into a fixed-length vector and then reconstructs the original sequence through the decoder structure. The summarised vector can be used to represent time-series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction; the fixed-length vector therefore represents features in the selected dimensions only. In addition, we propose using a rolling fixed-window approach to generate training samples from unbounded time-series data. The change of time-series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed using additional visualisation and unsupervised clustering techniques. The proposed method can be applied to sensor signal analysis in large-scale industrial processes, where clusters of the vector representations can reflect the operating states of the industrial system.
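A minimal NumPy sketch of the rolling fixed-window sampling described in the abstract. The window length, step size, sensor count, and the choice of which sensor dimensions form the partial-reconstruction target are all illustrative assumptions, not values from the paper:

```python
import numpy as np

def rolling_windows(series, window, step=1):
    """Slice an unbounded multivariate time series of shape (T, D)
    into fixed-length training samples of shape (n_samples, window, D)."""
    return np.stack([series[i:i + window]
                     for i in range(0, len(series) - window + 1, step)])

# Hypothetical sensor stream: 100 time steps from 5 sensors.
stream = np.random.default_rng(0).normal(size=(100, 5))

# Encoder input: every sample sees all 5 sensor dimensions.
X = rolling_windows(stream, window=20, step=5)

# Partial reconstruction: the decoder target keeps only a subset
# of dimensions (here the first two sensors, chosen arbitrarily),
# so the context vector summarises features of those sensors only.
Y = X[:, :, :2]

print(X.shape, Y.shape)  # (17, 20, 5) (17, 20, 2)
```

With a step smaller than the window, consecutive samples overlap, so successive context vectors change gradually, which is what allows the sequence of vectors to trace a smooth trajectory over time.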

Supported by Centrica plc. Registered office: Millstream, Maidenhead Road, Windsor SL4 5GD, United Kingdom.


Notes

  1. A simplified process diagram of the compression train can be found in Fig. 6 in the appendix.

  2. A list of sensors is available in the appendix.


Author information

Correspondence to Timothy Wong.


Appendices

Appendix A

The rotary components are driven by an industrial RB-211 jet turbine on a single shaft through a gearbox. Incoming natural gas first passes through the low-pressure (LP) stage, which brings it to an intermediate pressure level; it then passes through the high-pressure (HP) stage and reaches the pre-set desired pressure level. The purpose of the suction scrubber is to remove any remaining condensate from the gas before it is fed through the centrifugal compressors. Once the hot compressed gas is discharged from the compressor, its temperature is lowered by the intercoolers (Fig. 7).

Fig. 6. A simplified process diagram of the two-stage centrifugal compression train, which is located at a natural gas terminal.

Fig. 7. Locations of key components around the centrifugal compressor.

Appendix B

The sensor measurements used in the analysis are listed in a figure in the original publication.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Wong, T., Luo, Z. (2018). Recurrent Auto-Encoder Model for Large-Scale Industrial Sensor Signal Analysis. In: Pimenidis, E., Jayne, C. (eds) Engineering Applications of Neural Networks. EANN 2018. Communications in Computer and Information Science, vol 893. Springer, Cham. https://doi.org/10.1007/978-3-319-98204-5_17


  • DOI: https://doi.org/10.1007/978-3-319-98204-5_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98203-8

  • Online ISBN: 978-3-319-98204-5

  • eBook Packages: Computer Science (R0)
