Full-Field Mode Shape Analysis, Alignment and Averaging Across Measurements

  • Wesley Scott
  • Matthew Adams
  • Yongchao Yang
  • David Mascareñas (corresponding author)
Conference paper
Part of the Conference Proceedings of the Society for Experimental Mechanics Series (CPSEMS) book series

Abstract

Non-contact methods of experimentally acquiring mode shapes and their associated natural frequencies eliminate errors induced by the mass loading of the structure by attached sensors. Traditional data acquisition methods require costly and delicate sensors, such as accelerometers and strain gauges, that are time-consuming to set up on each structure to be analyzed. Other non-contact data acquisition tools, such as Laser Doppler Vibrometers (LDVs) and Digital Image Correlation (DIC), require expensive equipment and the placement of speckle patterns or high-contrast markers on the structure. Digital video cameras provide a relatively low-cost and portable means of measuring a structure with high spatial resolution without modifying it. Previous work identified a novel variation on Operational Modal Analysis (OMA) that extracts full-field mode shapes from video data. This work develops the algorithm’s robustness by investigating the effects of camera motion, the type of structural excitation, and background intensity gradients. Camera motion and modal over-specification are shown to cause the identification of modes that do not correspond to physical deformations of the structure. Video stabilization algorithms have previously been used to remove camera motion from video data; they eliminate most of that motion, but residual motion remains and appears as additional, spurious mode shapes. When the camera motion is oscillatory, these shapes can be correlated in the frequency domain with the spectrum measured by an accelerometer mounted on the camera itself. Averaging techniques are implemented to improve mode shape quality and to distinguish structural and camera modes from the spurious modes introduced by modal over-specification. When robustly understood, identification of full-field mode shapes and modal properties can cheaply and efficiently advance structural health monitoring, model verification and updating, change detection, load identification, and other areas of structural dynamics.
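
The frequency-domain check described above, comparing a candidate mode's temporal coordinate against the record of an accelerometer mounted on the camera, can be illustrated with a minimal Python sketch. This is not the authors' code: the function names, array layout, and the 0.8 correlation threshold are assumptions made for this example only.

import numpy as np


def magnitude_spectrum(signal, fs):
    """One-sided magnitude spectrum of a zero-meaned time series."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs, spectrum


def flag_camera_modes(modal_coords, camera_accel, fs, threshold=0.8):
    """Return indices of modal coordinates whose magnitude spectra correlate
    strongly with the camera accelerometer spectrum, i.e. modes that likely
    reflect residual camera motion rather than structural deformation.

    modal_coords : (n_modes, n_samples) modal time histories recovered by
                   blind source separation of the video data (assumed layout)
    camera_accel : (n_samples,) accelerometer record from the camera body
    fs           : common sampling rate in Hz (assumed equal to the frame rate)
    """
    _, accel_spec = magnitude_spectrum(camera_accel, fs)
    flagged = []
    for i, coord in enumerate(np.atleast_2d(modal_coords)):
        _, mode_spec = magnitude_spectrum(coord, fs)
        n = min(mode_spec.size, accel_spec.size)
        # Pearson correlation between the two one-sided magnitude spectra
        rho = np.corrcoef(mode_spec[:n], accel_spec[:n])[0, 1]
        if rho > threshold:
            flagged.append(i)
    return flagged

In practice the threshold would be tuned to the measurement setup, and the comparison could be restricted to the frequency bands in which the camera accelerometer shows significant energy.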

Keywords

Full-field · Blind-source separation · Video · Registration

Acknowledgments

Los Alamos National Laboratory is operated by Los Alamos National Security LLC, for the National Nuclear Security Administration of the U.S. Department of Energy, under DOE Contract DE-AC52-06NA25396.

Copyright information

© Society for Experimental Mechanics, Inc. 2020

Authors and Affiliations

  • Wesley Scott (1)
  • Matthew Adams (1)
  • Yongchao Yang (2)
  • David Mascareñas (1), corresponding author
  1. Engineering Institute, Los Alamos National Laboratory, Los Alamos, USA
  2. Argonne National Laboratory, Lemont, USA
