Using Convolutional Neural Networks to Automatically Detect Eye-Blink Artifacts in Magnetoencephalography Without Resorting to Electrooculography

  • Prabhat Garg
  • Elizabeth Davenport
  • Gowtham Murugesan
  • Ben Wagner
  • Christopher Whitlow
  • Joseph Maldjian
  • Albert Montillo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)

Abstract

Magnetoencephalography (MEG) is a functional neuroimaging tool that records the magnetic fields induced by neuronal activity; however, signal from muscle activity often corrupts the data. Eye-blinks are one of the most common muscle artifacts. They can be recorded by affixing electrodes near the eyes, as in electrooculography (EOG); however, this complicates patient preparation and decreases comfort. Moreover, it can induce further muscular artifacts from facial twitching. We propose an EOG-free, data-driven approach. We begin with Independent Component Analysis (ICA), a well-known preprocessing approach that factors the observed signal into statistically independent components. When applied to MEG, ICA can help separate neuronal components from non-neuronal ones; however, the components are returned in arbitrary order. We therefore develop a method to assign one of two labels, non-eye-blink or eye-blink, to each component.
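The ICA step can be illustrated with a minimal sketch. This is not the authors' pipeline: it uses synthetic channel data and scikit-learn's FastICA in place of the Infomax variants commonly used for MEG, and the simulated "blink" source is purely hypothetical. The point it shows is that the decomposition recovers independent sources in an arbitrary order, which is exactly why a downstream labeling step is needed.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)                 # 8 s of simulated recording
neural = np.sin(2 * np.pi * 10 * t)         # 10 Hz oscillation (neuronal-like)
blink = (np.abs(np.sin(2 * np.pi * 0.25 * t)) > 0.99).astype(float)  # sparse spikes
noise = 0.1 * rng.standard_normal(t.size)
sources = np.c_[neural, blink, noise]       # (n_samples, 3) independent sources

# Mix the sources into five simulated "sensors": X = S @ A.T
mixing = rng.standard_normal((5, 3))
X = sources @ mixing.T

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)           # recovered sources, order is arbitrary
print(components.shape)                     # (2000, 3)
```

Because the recovered component order (and sign) is arbitrary, any automated pipeline must inspect each component and decide whether it is an artifact, which is the classification problem the paper addresses.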

Our contributions are two-fold. First, we develop a 10-layer Convolutional Neural Network (CNN) that directly labels eye-blink artifacts. Second, we visualize the learned spatial features using attention mapping, both to reveal what the network has learned and to bolster confidence in its ability to generalize to unseen data. We acquired 8-min, eyes-open, resting-state MEG from 44 subjects. We trained our method on the ICA spatial maps of 14 randomly selected subjects with expertly labeled ground truth, then tested on the remaining 30 subjects. Our approach achieves a test classification accuracy of 99.67%, sensitivity of 97.62%, specificity of 99.77%, and ROC AUC of 98.69%. We also show that the learned spatial features correspond to those human experts typically use, which corroborates our model's validity. This work (1) facilitates the creation of fully automated MEG processing pipelines that must remove eye-blink artifacts, and (2) potentially obviates the use of additional EOG electrodes for recording eye-blinks in MEG studies.
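The classification idea can be sketched in miniature. The toy below is not the paper's 10-layer architecture: it is a single convolution with hypothetical random weights, followed by ReLU, global average pooling, and a 2-way softmax, just to show how a CNN maps an ICA spatial topography to probabilities for {non-eye-blink, eye-blink}.

```python
import numpy as np

def conv2d(img, kernels):
    """Valid 2-D convolution of an (H, W) map with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def classify_map(spatial_map, kernels, weights, bias):
    """Conv -> ReLU -> global average pool -> linear -> softmax over 2 classes."""
    feat = np.maximum(conv2d(spatial_map, kernels), 0.0)  # ReLU feature maps
    pooled = feat.mean(axis=(1, 2))                       # one scalar per kernel
    logits = pooled @ weights + bias                      # (2,) class scores
    exp = np.exp(logits - logits.max())                   # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(1)
spatial_map = rng.standard_normal((32, 32))   # stand-in for an ICA topography
kernels = rng.standard_normal((4, 3, 3)) * 0.1
weights = rng.standard_normal((4, 2)) * 0.1
bias = np.zeros(2)
probs = classify_map(spatial_map, kernels, weights, bias)
print(probs)  # probabilities for {non-eye-blink, eye-blink}
```

In practice the weights would be learned from the expertly labeled spatial maps, and a deep stack of such layers (with batch normalization and dropout, as in the paper) extracts the frontal eye-proximal pattern that characterizes blink components.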

Keywords

MEG · Eye-blink artifact · EOG · Automatic · Deep learning · CNN

Notes

Acknowledgements

The authors would like to thank Jillian Urban, Mireille Kelley, Derek Jones, and Joel Stitzel for their assistance in providing recruitment and study oversight. Support for this research was provided by NIH grants R01NS082453 (JAM, JDS), R03NS088125 (JAM), and R01NS091602 (CW, JAM, JDS).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Prabhat Garg (1)
  • Elizabeth Davenport (1)
  • Gowtham Murugesan (1)
  • Ben Wagner (1)
  • Christopher Whitlow (2)
  • Joseph Maldjian (1)
  • Albert Montillo (1)

  1. UT Southwestern Medical Center, Dallas, USA
  2. Wake Forest School of Medicine, Winston-Salem, USA