Acoustic Beamforming Algorithms and Their Applications in Environmental Noise

Abstract

Purpose of Review

Rather than broadly surveying the beamforming field, the present work analyzes the most common algorithms through both a theoretical presentation and a report of their most recent applications to real cases. The intent is to take a step towards the harmonization of the sector with a combined approach that could prove useful both for academics wishing to understand the theory and for technicians needing to choose the best algorithm for different measurement conditions.

Recent Findings

In recent years, the sector has seen a growth in published studies on beamforming techniques and their applications to real cases. Unfortunately, different authors and research groups have developed so many different algorithms that a literature review is essential to increase awareness and to avoid confusion for both scientists and technicians.

Summary

Nowadays, acoustic cameras have proven to be powerful instruments that combine video acquisition with a microphone array to obtain real-time information on the location of noise sources. Different beamforming techniques can be applied to the sound signals, allowing their visualization or the separation of the contributions of multiple emitters. This capability, historically used in different branches of acoustics, is now spreading into environmental noise protection, especially where it is necessary to locate emitters or to better study sources that have not yet been characterized. Acoustic cameras can also be used to identify the source responsible for noise limit exceedances at receivers when traditional microphone measurements are not sufficient, or to identify potential leaks that occurred during the installation of noise abatement measures.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance

  1. Billingsley J, Kinns R. The acoustic telescope. J Sound Vib. 1976;48(4):485–510. https://doi.org/10.1016/0022-460X(76)90552-6.

  2. Michel U. History of acoustic beamforming. In: Proceedings of the Berlin Beamforming Conference. 2006. p. 1–17.

  3. Acoular. Acoustic testing and source mapping software. http://acoular.org/. (Last accessed date 26/05/2023).

  4. Licitra G, Fredianelli L, Kanka S, Artuso F, Fidecaro F. Acoustic comfort in yachts: Measurements with acoustic camera. In: Proceedings of the 28th International Congress on Sound and Vibration. Singapore; 2022. p. 24–8.

  5. Kanka S, Fredianelli L, Artuso F, Fidecaro F, Licitra G. Evaluation of acoustic comfort and sound energy transmission in a yacht. Energies. 2023;16(2):808. https://doi.org/10.3390/en16020808.

  6. Castellini P, Sassaroli A. Acoustic source localization in a reverberant environment by average beamforming. Mech Syst Signal Process. 2010;24(3):796–808. https://doi.org/10.1016/j.ymssp.2009.10.021.

  7. Noh H-M, Choi J-W. Identification of low-frequency noise sources in high-speed train via resolution improvement. J Mech Sci Technol. 2015;29:3609–15. https://doi.org/10.1007/s12206-015-0804-8.

  8. Ballesteros JA, Sarradj E, Fernandez MD, Geyer TF, Ballesteros MJ. Noise source identification with beamforming in the pass-by of a car. Appl Acoust. 2015;93:106–19. https://doi.org/10.1016/j.apacoust.2015.01.019.

  9. Bourgeois J, Minker W. Time-domain beamforming and blind source separation: Speech input in the car environment, vol. 3. 2009. https://doi.org/10.1007/978-0-387-68836-7.

  10. Huanxian B, Huang X, Zhang X. An overview of testing methods for aeroengine fan noise. Prog Aerosp Sci. 2021;124. https://doi.org/10.1016/j.paerosci.2021.100722.

  11. Joshi A, Rahman MM, Hickey J-P. Recent advances in passive acoustic localization methods via aircraft and wake vortex aeroacoustics. Fluids. 2022;7(7). https://doi.org/10.3390/fluids7070218.

  12. Bu H, Huang X, Zhang X. An overview of testing methods for aeroengine fan noise. Prog Aerosp Sci. 2021;124:100722. https://doi.org/10.1016/j.paerosci.2021.100722.

  13. Martin G, Simon F, Biron D. Detection of acoustic radiating areas of a generic helicopter cabin by beamforming. J Acoust Soc Am. 2008;123(5):3310–3310.

  14. Sun S, Wang T, Yang H, Chu F. Damage identification of wind turbine blades using an adaptive method for compressive beamforming based on the generalized minimax-concave penalty function. Renew Energy. 2021;181. https://doi.org/10.1016/j.renene.2021.09.024.

  15. Wang W, Xue Y, He C, Zhao Y. Review of the typical damage and damage-detection methods of large wind turbine blades. Energies. 2022;15(15). https://doi.org/10.3390/en15155672.

  16. Malgoezar A, Vieira A, Snellen M, Simons D, Veldhuis L. Experimental characterization of noise radiation from a ducted propeller of an unmanned aerial vehicle. Int J Aeroacoust. 2019;18. https://doi.org/10.1177/1475472X19852952.

  17. Sahu S, Kumar K, Majumdar A, Kumar AA, Chandra MG. Acoustic-based machine anomaly detection using beamforming and sequential transform learning. IEEE Sens Lett. 2023;7(2). https://doi.org/10.1109/LSENS.2023.3235049.

  18. Benedek T, Tóth P. Beamforming measurements of an axial fan in an industrial environment. Period Polytech Mech Eng. 2013;57(2):37–46. https://doi.org/10.3311/PPme.7043.

  19. Lanslots J, Deblauwe F, Janssens K. Selecting sound source localization techniques for industrial applications. Sound & Vibration. 2010;44(6):6–10.

  20. Amoiridis O, Zarri A, Zamponi R, Pasco Y, Yakhina G, Moreau S, Christophe J, Schram C. Sound localization and quantification analysis of an automotive engine cooling module. J Sound Vib. 2021. https://doi.org/10.1016/j.jsv.2021.116534.

  21. Bocanegra JA, Borelli D, Gaggero T, Rizzuto E, Schenone C. A novel approach to port noise characterization using an acoustic camera. Sci Total Environ. 2022;808:151903. https://doi.org/10.1016/j.scitotenv.2021.151903.

  22. Fredianelli L, Bernardini M, Tonetti F, Artuso F, Fidecaro F, Licitra G. Acoustic source localization in ports with different beamforming algorithms. In: Proceedings of 51st INTER-NOISE Congress. Glasgow; 2022. p. 21–4.

  23. Wijnings PWA, Stuijk S, Vries BD, Corporaal H. Robust Bayesian beamforming for sources at different distances with applications in urban monitoring. In: ICASSP 2019 - IEEE International Conference on Acoustics, Speech and Signal Processing. 2019. p. 4325–9. https://doi.org/10.1109/ICASSP.2019.8682835.

  24. Leiba R, Ollivier F, Marchal J, Misdariis N, Marchiano R. Large array of microphones for the automatic recognition of acoustic sources in urban environment, vol. 2017. 2017.

  25. Wajid M, Alam F, Yadav S, Khan MA, Usman M. Support vector regression based direction of arrival estimation of an acoustic source. 2020. https://doi.org/10.1109/3ICT51146.2020.9311948.

  26. Jin J, Pan N, Chen J, Benesty J, Yang Y. A binaural heterophasic adaptive beamformer and its deep learning assisted implementation. Pattern Recognit Lett. 2023;168:24–30. https://doi.org/10.1016/j.patrec.2023.02.025.

  27. Feng L, Zan M, Huang L, Xu Z. A double-step grid-free method for sound source identification using deep learning. Appl Acoust. 2022;201. https://doi.org/10.1016/j.apacoust.2022.109099.

  28. Šarić Z, Subotić M, Bilibajkić R, Barjaktarović M, Stojanović J. Supervised speech separation combined with adaptive beamforming. Comput Speech Lang. 2022;76. https://doi.org/10.1016/j.csl.2022.101409.

  29. • Leclère Q, Pereira A, Bailly C, Antoni J, Picard C. A unified formalism for acoustic imaging based on microphone array measurements. Int J Aeroacoust. 2017;16:431–56. https://doi.org/10.1177/1475472X17718883. This reference stands out among other works in literature because it attempts to provide a unified formalism of the different imaging techniques. This harmonization attempt can represent a decisive step forward in the development of this research field.

  30. • Chiariotti P, Martarelli M, Castellini P. Acoustic beamforming for noise source localization - reviews, methodology and applications. Mech Syst Signal Process. 2019;120:422–48. https://doi.org/10.1016/j.ymssp.2018.09.019. This reference is marked as important because it effectively introduces the beamforming topic, moving from the basic concepts to the most advanced algorithms and passing through related topics that are useful for the complete construction of the framework.

  31. • Merino-Martínez R, Sijtsma P, Snellen M, Ahlefeldt T, Antoni J, Bahr CJ, Blacodon D, Ernst D, Finez A, Funke S, Geyer TF, Haxter S, Herold G, Huang X, Humphreys WM, Leclère Q, Malgoezar A, Michel U, Padois T, Pereira A, Picard C, Sarradj E, Siller HA, Simons DG, Spehr C. A review of acoustic imaging methods using phased microphone arrays. CEAS Aeronaut J. 2019;10:197–230. https://doi.org/10.1007/S13272-019-00383-4. This review deserves to be emphasized because it attempts to point out the suitability of the different techniques according to the different on-field scenarios. These suggestions are based on both a theoretical and an experimental analysis of the current state of the art.

  32. Allen CS, Blake WK, Dougherty RP, Lynch D, Soderman PT, Underbrink JR. Aeroacoustic measurements. 2002.

  33. Merino-Martinez R, Snellen M, Simons DG. Functional beamforming applied to full scale landing aircraft. In: 6th Berlin Beamforming Conference. 2016.

  34. Dougherty R. Functional beamforming for aeroacoustic source distributions. In: 20th AIAA/CEAS Aeroacoustics Conference. 2014. https://doi.org/10.2514/6.2014-3066.

  35. Stoica P, Wang Z, Li J. Robust capon beamforming. IEEE Signal Process Lett. 2003;10(6):172–5. https://doi.org/10.1109/LSP.2003.811637.

  36. Sijtsma P. Clean based on spatial source coherence. Int J Aeroacoust. 2007;6(4):357–74. https://doi.org/10.1260/147547207783359459.

  37. Brooks TF, Humphreys WM. A deconvolution approach for the mapping of acoustic sources (DAMAS) determined from phased microphone arrays. J Sound Vib. 2006;294(4–5):856–79. https://doi.org/10.1016/j.jsv.2005.12.046.

  38. Gupta P, Kar SP. MUSIC and improved MUSIC algorithm to estimate direction of arrival. In: 2015 International Conference on Communications and Signal Processing (ICCSP). Melmaruvathur, India; 2015. p. 757–61. https://doi.org/10.1109/ICCSP.2015.7322593.

  39. Hald J. Basic theory and properties of statistically optimized near-field acoustical holography. J Acoust Soc Am. 2009;125(4):2105–20. https://doi.org/10.1121/1.3079773.

  40. Sijtsma P. Experimental techniques for identification and characterization of noise sources. 2004.

  41. Petrica L. An evaluation of low-power microphone array sound source localization for deforestation detection. Appl Acoust. 2016;113:162–9. https://doi.org/10.1016/j.apacoust.2016.06.022.

  42. Ramos ALL, Holm S, Gudvangen S, Otterlei R. Delay-and-sum beamforming for direction of arrival estimation applied to gunshot acoustics. In: Proceedings of SPIE - The International Society for Optical Engineering. 2011:8019. https://doi.org/10.1117/12.886833.

  43. Moradshahi P, Chatrzarrin H, Goubran R. Cough sound discrimination in noisy environments using microphone array. Conf Rec - IEEE Instrument Measure Technol Conf. 2013;431–4. https://doi.org/10.1109/I2MTC.2013.6555454.

  44. Modir Shanechi M, Aarabi P. Structural analysis of multisensor arrays for speech separation applications. Proc SPIE - Int Soc Opt Eng. 2003;5099:327–34. https://doi.org/10.1117/12.488093.

  45. Wajid M, Kumar B, Goel A, Kumar A, Bahl R. Direction of arrival estimation with uniform linear array based on recurrent neural network. Proc IEEE Int Conf Signal Process Comput Control. 2019;361–5. https://doi.org/10.1109/ISPCC48220.2019.8988399.

  46. Gur B. Particle velocity gradient based acoustic mode beamforming for short linear vector sensor arrays. J Acoust Soc Am. 2014;135(6):3463–73. https://doi.org/10.1121/1.4876180.

  47. Koop L, Ehrenfried K. Microphone-array processing for wind-tunnel measurements with strong background noise. In: 14th AIAA/CEAS Aeroacoustics Conference (29th AIAA Aeroacoustics Conference). 2008. https://doi.org/10.2514/6.2008-2907.

  48. Ocker C, Pannert W. Imaging of broadband noise from rotating sources in uniform axial flow. In: 22nd AIAA/CEAS Aeroacoustics Conference, 2016. 2016. https://doi.org/10.2514/6.2016-2899.

  49. Kim S-M, Byun S-H, Kim K, Choi H-T, Lee C-M. Development and performance test of an underwater sound transmission system for an ROV. In: 2017 IEEE OES International Symposium on Underwater Technology, UT 2017. 2017. https://doi.org/10.1109/UT.2017.7890295.

  50. De Araujo FH, Pinto FADNC. Comparison between the spherical harmonics beamforming and the delay-and-sum beamforming. In: Proceedings of the INTER-NOISE 2016 - 45th International Congress and Exposition on Noise Control Engineering: Towards a Quieter Future. 2016. p. 277–87.

  51. Tiana-Roig E, Jacobsen F, Fernandez-Grande E. Beamforming with a circular array of microphones mounted on a rigid sphere (l). J Acoust Soc Am. 2011;130(3):1095–8. https://doi.org/10.1121/1.3621294.

  52. Kerscher M, Heilmann G, Puhle C, Krause R, Friebe C. Sound source localization on a fast rotating fan using rotational beamforming. In: INTER-NOISE 2017 - 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet. 2017.

  53. Yang Y, Chu Z, Shen L, Xu Z. Functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays. J Sound Vib. 2016;373:340–59. https://doi.org/10.1016/j.jsv.2016.03.024.

  54. Li Y, Ho KC, Popescu M. Efficient source separation algorithms for acoustic fall detection using a microsoft kinect. IEEE Trans Biomed Eng. 2014;61(3):745–55. https://doi.org/10.1109/TBME.2013.2288783.

    Article  Google Scholar 

  55. Talmon R, Cohen I, Gannot S. Multichannel speech enhancement using convolutive transfer function approximation in reverberant environments. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. 2009. p. 3885–8. https://doi.org/10.1109/ICASSP.2009.4960476.

  56. Xia H-J, Ma Y-L, Liu Y-X. Analysis of the symmetry of the ambient noise and study of the noise reduction. Wuli Xuebao/Acta Phys Sin. 2016;65(14). https://doi.org/10.7498/aps.65.144302.

  57. Salom I, Celebic V, Milanovic M, Todorovic D, Prezelj J. An implementation of beamforming algorithm on FPGA platform with digital microphone array. 138th Audio Eng Soc Conv. 2015;2:995–1004.

  58. Bai L, Huang X. Observer-based beamforming algorithm for acoustic array signal processing. J Acoust Soc Am. 2011;130(6):3803–11. https://doi.org/10.1121/1.3658448.

  59. Lashi D, Quévy Q, Lemeire J. Optimizing microphone arrays for delay-and-sum beamforming using genetic algorithms. In: 2018 4th International Conference on Cloud Computing Technologies and Applications, Cloudtech. 2018. https://doi.org/10.1109/CloudTech.2018.8713331.

  60. Lauterbach A, Ehrenfried K, Koop L, Loose S. Procedure for the accurate phase calibration of a microphone array. In: 15th AIAA/CEAS Aeroacoustics Conference (30th AIAA Aeroacoustics Conference). 2009. https://doi.org/10.2514/6.2009-3122.

  61. Kates JM. Evaluation of hearing-aid array processing. IEEE ASSP Workshop Appl Signal Process Audio Acoust. 1995;4. https://doi.org/10.1109/ASPAA.1995.482902.

  62. Yardibi T, Bahr C, Zawodny N, Liu F, Cattafesta LN III, Li J. Uncertainty analysis of the standard delay-and-sum beamformer and array calibration. J Sound Vib. 2010;329(13):2654–82. https://doi.org/10.1016/j.jsv.2010.01.014.

  63. Malgoezar A, Snellen M, Simons D, Sijtsma P. Using global optimization methods for acoustic source localization. In: ICSV 2016 - 23rd International Congress on Sound and Vibration: From Ancient to Modern Acoustics. 2016.

  64. Chu Z, Yang Y, Shen L. Resolution and quantification accuracy enhancement of functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays. Mech Syst Signal Process. 2017;88:274–89. https://doi.org/10.1016/j.ymssp.2016.11.027.

  65. Juricka M. Acoustic camera scanning as a detection of noise sources on small aircraft. Acta Avionica J. 2020;12–20. https://doi.org/10.35116/aa.2020.0002.

  66. Howell GP, Bradley AJ, McCormick MA, Brown JD. De-dopplerization and acoustic imaging of aircraft flyover noise measurements. J Sound Vib. 1986;105(1):151–67. https://doi.org/10.1016/0022-460X(86)90227-0.

  67. Bi Y, Feng X, Zhang Y. Optimized sonar broadband focused beamforming algorithm. Algorithms. 2019;12(2). https://doi.org/10.3390/a12020033.

  68. Bi Y, Wang Y-M, Wang Q. Research on dual optimized broadband beamforming algorithm. Binggong Xuebao/Acta Armamentarii. 2017;38(8):1563–71. https://doi.org/10.3969/j.issn.1000-1093.2017.08.014.

  69. Bao C, Jia L, Pan J. Use of robust capon beamformer for extracting audio signals. In: Acoustics 2019, Sound Decisions: Moving Forward with Acoustics - Proceedings of the Annual Conference of the Australian Acoustical Society. 2020.

  70. Somasundaram SD, Parsons NH. Evaluation of robust capon beamforming for passive sonar. IEEE J Ocean Eng. 2011;36(4):686–95. https://doi.org/10.1109/JOE.2011.2167374.

  71. Li J, Stoica P, Wang Z. On robust capon beamforming and diagonal loading. IEEE Trans Signal Process. 2003;51(7):1702–15. https://doi.org/10.1109/TSP.2003.812831.

  72. Bao C. Performance of time domain and time-frequency domain adaptive beamformers with moving sound sources. INTERNOISE 2014 - 43rd International Congress on Noise Control Engineering: Improving the World Through Noise Control. 2014.

  73. Frost OL. An algorithm for linearly constrained adaptive array processing. Proc IEEE. 1972;60(8):926–35. https://doi.org/10.1109/PROC.1972.8817.

  74. Azimi-Sadjadi MR, Pezeshki A, Scharf LL, Hohil ME. Wideband DOA estimation algorithms for multiple target detection and tracking using unattended acoustic sensors. 2004.

  75. Camargo HE, Burdisso RA, Ravetta PA, Smith AK. A comparison of beamforming processing techniques for low frequency noise source identification in mining equipment. ASME Int Mech Eng Congr Exposition Proc. 2010;15:205–11. https://doi.org/10.1115/IMECE2009-12194.

  76. Rindal OMH, Austeng A, Fatemi A, Rodriguez-Molares A. The effect of dynamic range alterations in the estimation of contrast. IEEE Trans Ultrason Ferroelectr Freq Control. 2019;66(7):1198–208. https://doi.org/10.1109/TUFFC.2019.2911267.

  77. He Y, Dong G, Zhang T, Wang B, Shen Z. A study on the correlation between vehicles interior noise and exterior aerodynamic noise sources. Qiche Gongcheng/Automot Eng. 2017;39(10):1192–7. https://doi.org/10.19562/j.chinasae.qcgc.2017.10.015.

  78. Chu Z, Zhao S, Yang Y, Yang Y. Deconvolution using clean-sc for acoustic source identification with spherical microphone arrays. J Sound Vib. 2019;440:161–73. https://doi.org/10.1016/j.jsv.2018.10.030.

  79. Wang Y, Yang C, Wang Y, Hu D. Fast deconvolution algorithm based on compressed focus grid points. Zhendong yu Chongji/J Vib Shock. 2022;41(6):250–5. https://doi.org/10.13465/j.cnki.jvs.2022.06.032.

  80. Legg M, Bradley S. Automatic 3D scanning surface generation for microphone array acoustic imaging. Appl Acoust. 2014;76:230–7. https://doi.org/10.1016/j.apacoust.2013.08.008.

  81. Baali H, Bouzerdoum A, Khelif A. Sparsity and nonnegativity constrained Krylov approach for direction of arrival estimation. ICASSP IEEE Int Conf Acoust Speech Signal Process - Proc. 2021;2021:4400–4. https://doi.org/10.1109/ICASSP39728.2021.9415040.

  82. Wu Y, He Y, Shen Z, Yang Z. Application of improved beamforming algorithm in sound source identification at wind tunnel. Tongji Daxue Xuebao/J Tongji Univ. 2019;47:20–5. https://doi.org/10.11908/j.issn.0253-374x.19707.

  83. Ma W, Liu X. Compression computational grid based on functional beamforming for acoustic source localization. Appl Acoust. 2018;134:75–87. https://doi.org/10.1016/j.apacoust.2018.01.006.

  84. Ravetta P, Burdisso R. Noise source localization and optimization of phased array results (LORE). AIAA J. 2006;47. https://doi.org/10.2514/6.2006-2713.

  85. Ravetta P, Burdisso R, Ng W, Sijtsma P, Stoker R, Underbrink J, Dougherty R, Khorrami M. Noise source localization and optimization of phased-array results. AIAA J. 2009;47:2520–33. https://doi.org/10.2514/1.38073.

  86. Qayyum H, Ashraf M. Performance comparison of direction-of-arrival estimation algorithms for towed array sonar system. Commun Comput Inf Sci. 2011;189(Part 2):509–17. https://doi.org/10.1007/978-3-642-22410-2_44.

  87. Benesty J, Chen J, Huang Y. A generalized MVDR spectrum. Signal Process Lett IEEE. 2006;12:827–30. https://doi.org/10.1109/LSP.2005.859517.

  88. DeFatta DJ, Lucas JG, Hodgkiss WS. Digital signal processing: a system design approach. 1st ed. Wiley; 1988.

  89. Moallemi N, ShahbazPanahi S. Immersion ultrasonic array imaging using a new array spatial signature in different imaging algorithms. Conf Rec - Asilomar Conf Signals Syst Comput. 2014;2015:1558–61. https://doi.org/10.1109/ACSSC.2014.7094726.

  90. Sun JC, Shin CW, Ju HJ, Paik SK, Kang YJ. Measurement of the normal acoustic impedance using beamforming method. J Mech Sci Technol. 2009;23(8):2169–78. https://doi.org/10.1007/s12206-009-0435-z.

  91. Swingler DN, Walker RS. Line-array beamforming using linear prediction for aperture interpolation and extrapolation. IEEE Trans Acoust Speech Signal Process. 1989;37(1):16–30. https://doi.org/10.1109/29.17497.

  92. Liu C, Lv Y, Miao J, Shang H. Research on high resolution algorithm of sound source localization based on microphone array. In: ICSIDP 2019 - IEEE International Conference on Signal, Information and Data Processing. 2019. https://doi.org/10.1109/ICSIDP47821.2019.9173224.

  93. Zhang Y, Chen J, Zhou N, Luo L, Sheng G. Joint acoustic source localization algorithm based on summation and music algorithm for power equipment in substations. Proc - 2020 5th Asia Conf Power Electr Eng, ACPEE 2020. 2020. p. 26–31. https://doi.org/10.1109/ACPEE48638.2020.9136575.

  94. Fan W, Zhang X, Jiang B. A new passive sonar bearing estimation algorithm combined with blind source separation. In: 3rd International Joint Conference on Computational Sciences and Optimization, CSO 2010: Theoretical Development and Engineering Practice, vol. 1, 2010. p. 15–8. https://doi.org/10.1109/CSO.2010.201.

  95. Kassis C, Picheral J. Wideband zero-forcing music for aeroacoustic sources localization. Eur Signal Process Conf. 2012;2283–7.

  96. Xiao H, Shao H-Z, Peng Q-C. A robust sound source localization approach for microphone array with model errors. IEICE Trans Fundamentals Electron Commun Comput Sci. 2008;E91–A(8):2062–7. https://doi.org/10.1093/ietfec/e91-a.8.2062.

  97. Sheikh MA, Kumar L, Beg MT. Circular microphone array based stethoscope for radial filtering of body sounds. In: 2019 International Conference on Power Electronics, Control and Automation, ICPECA 2019 - Proceedings. 2019. https://doi.org/10.1109/ICPECA47973.2019.8975663.

  98. Bai MR, Lee J. Industrial noise source identification by using an acoustic beamforming system. J Vib Acoust Trans ASME. 1998;120(2):426–33. https://doi.org/10.1115/1.2893847.

Funding

This work was supported by funding from the PhD Food and Sustainable Development, F.A.I. Lab Project (Candidate ID: 10460), dedicated to the study of the vibro-acoustic impact of agricultural machinery on the environment and on humans.

Author information

Contributions

Conceptualization, L.F.; data curation, F.A. and M.B.; formal analysis, F.A., A.M., M.B., L.F.; funding acquisition, F.F. and G.L.; investigation, F.A. and L.F.; methodology, L.F.; project administration, F.F. and G.L.; resources, F.F. and G.L.; supervision, L.F., F.F., and G.L.; validation, L.F.; visualization, G.L.; writing—original draft, F.A., A.M., M.B., and L.F.; writing—review and editing, L.F., F.A., A.M., F.F., and G.L. All authors have read and agreed to the published version of the manuscript.

Corresponding authors

Correspondence to Gaetano Licitra or Luca Fredianelli.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix. Basic Concepts of Beamforming

A.1 Introduction

In acoustics, the computation of the acoustic pressure at some points in space, knowing the position and strength of the sources, is called the direct problem. Beamforming deals with the inverse problem, namely the estimation of the position and strength of the sources starting from measurements taken at some control points in space. An operator approaching this kind of problem might be inclined to use a single microphone, as is done in other kinds of acoustic measurements. However, a sound measurement performed with a single microphone does not allow the spatial localization of the emitter, as a single microphone can only record the sound pressure field. The response of a microphone with respect to the arrival angle of the sound depends on the directivity of the transducer under analysis. The main directivity patterns are omnidirectional (the response is equal for every arrival angle of the sound), bidirectional (0° and 180° are favored), unidirectional (only the frontal direction is favored), and superdirectional (the favored direction is frontal, with a narrower opening angle than the unidirectional pattern). A single microphone usually shows a very broad polar response. The only exception is the superdirectional microphone, used to identify the exact location of the source. Even so, the high directivity index shown by this type of microphone is not sufficient to build functional and reliable methods for source localization: it would be necessary to look for the maximum response angle and then move the microphone according to this information, and the resulting technique would clearly be slow and inefficient. This leads to the idea of building an array of microphones in order to exploit the delays between microphones and thereby find the arrival angle.

Consider the simplest formulation of the problem before moving on to a more detailed treatment. Assuming the sources are at a great distance from the array, the polar angle can be neglected and only the azimuthal one considered, which reduces the problem to a linear array. The implied assumption is that the source is distant enough from the array that the wave front can be considered a plane wave front. The one-dimensional treatment of this problem is shown in Fig. 3.

Fig. 3 Scheme of the interaction between a plane wave and a linear microphone array

The plane wave front reaches the different microphones at different times because of its inclination with respect to the array. In particular, when the wave front reaches the first microphone, the sound perturbation is still at a distance l from the second microphone. Calling \(\alpha\) the angle between the wave front and the microphone array, which is the unknown quantity of the problem, and d the known distance between two consecutive microphones of the array, the distance is \(l = d \sin \alpha\). Furthermore, the distance l is traveled by the wave front in a time \(\Delta t = l/c\), where c is the speed of sound. Therefore, the delay between the first and the second microphone can be computed from known quantities as:

$$\begin{aligned} \Delta t = \frac{d \sin \alpha }{c} \end{aligned}$$
(70)

Straightforwardly, the delay between the first microphone and those from the third onwards is a multiple of the time interval computed in Eq. (70). In this way, by measuring the delay \(\Delta t\) with which the wave front is registered at two consecutive microphones of the array, it is possible to compute the azimuthal arrival angle using the inverse of Eq. (70)

$$\begin{aligned} \alpha = \arcsin \bigg ( \frac{c \Delta t}{d} \bigg ) \end{aligned}$$
(71)
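
As an illustration of Eqs. (70) and (71), the short sketch below estimates the arrival angle of a plane wave from the delay measured between two consecutive microphones. It is only a minimal example under assumed conditions: the spacing, the sampling frequency, and the use of a cross-correlation peak to estimate \(\Delta t\) are illustrative choices, not prescriptions of the text.

```python
import numpy as np

def estimate_azimuth(sig1, sig2, fs, d, c=343.0):
    """Estimate the arrival angle of Eq. (71) from the delay between two
    consecutive microphones of a linear array (sig1 is reached first)."""
    # Estimate Delta t from the peak of the cross-correlation; this is one
    # common way of measuring the delay, not the only possible one.
    corr = np.correlate(sig2, sig1, mode="full")
    lag = np.argmax(corr) - (len(sig1) - 1)   # delay of sig2 w.r.t. sig1, in samples
    delta_t = lag / fs
    # Eq. (71): alpha = arcsin(c * Delta t / d); the clip guards against noise
    return np.degrees(np.arcsin(np.clip(c * delta_t / d, -1.0, 1.0)))

# Synthetic check: a 1 kHz plane wave arriving at 30 degrees on two microphones 0.1 m apart
fs, d, c = 48_000, 0.10, 343.0
t = np.arange(0, 0.05, 1 / fs)
delay = d * np.sin(np.radians(30.0)) / c      # Eq. (70)
mic1 = np.sin(2 * np.pi * 1000 * t)
mic2 = np.sin(2 * np.pi * 1000 * (t - delay))
print(estimate_azimuth(mic1, mic2, fs, d, c))  # approximately 30 degrees
```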

Up to this point, the treatment has been restricted to a single angle and a single frequency, simply to explain the task in a straightforward way. For closer sources, the azimuthal angle is not sufficient and the polar angle must be introduced, switching to a two-dimensional planar array. These few lines introduce the problem and explain why a two-dimensional array is required for an optimal resolution of the beamforming task. Another remarkable detail concerns the spatial arrangement of the microphones. Indeed, two-dimensional microphone arrays commonly have an irregular shape. The reason is the need to avoid spatial aliasing, that is, a sampling error due to inadequate microphone spacing for some wavelengths; it is the same effect that occurs in the time domain when the sampling frequency is inadequate. Severe sampling problems have been found to occur when periodic arrays are employed, and these problems can be solved using arrays with irregular shape [30•].
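
As a quantitative point of reference, the analogy with temporal sampling translates into the standard half-wavelength criterion (a well-known rule consistent with, but not stated in, the text): a uniform spacing d samples the sound field without spatial aliasing only up to the frequency whose wavelength equals 2d. Assuming, for illustration, d = 0.1 m and c = 343 m/s,

$$\begin{aligned} d \le \frac{\lambda _{min}}{2} \quad \Rightarrow \quad f_{max} = \frac{c}{2d} = \frac{343 \; \mathrm {m/s}}{2 \cdot 0.1 \; \mathrm {m}} \approx 1.7 \; \mathrm {kHz} \;. \end{aligned}$$

Above this limit a periodic layout produces grating lobes, which is one reason why irregular arrangements are preferred.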

A.2 Mathematical Formulation of Beamforming

After this brief and simple introduction to the problem, it is possible to move to a more advanced treatment of the topic, which introduces the basic concepts of beamforming. An exhaustive treatment of the subject can be found in Allen et al. [32]. Suppose an array composed of N microphones, with a generic microphone identified by the index n, and a grid of points in space to be investigated because they may contain the sources, with a generic grid point identified by \(x_s\). As input, beamforming algorithms take the sound pressure signals acquired by the array of microphones. Each signal acquired by a microphone is thought of as the real signal emitted by a source, modified by the path traveled. To reflect this idea, for each point in space, the signal acquired by a microphone is modeled as the product between a propagation factor \(C_n(x_s)\) and a source function \(S(x_s,t)\). The propagation factor is equal to

$$\begin{aligned} C_n(x_s) = \frac{e^{-i \omega \sigma (x_n,x_s)}}{D(x_n,x_s)} \end{aligned}$$
(72)

where the numerator accounts for the delay and the denominator for the attenuation of the signal along the path. The source function \(S(x_s,t)\), instead, represents the distribution of sources in space. The basic assumption is that sources associated with different points in space are uncorrelated, namely

$$\begin{aligned} \langle S^*(x'_s,t)S(x_s,t) \rangle = q(x_s)\delta (x'_s-x_s) \end{aligned}$$
(73)

where \(q(x_s)\) is the mean square source strength at \(x_s\) and \(\langle \rangle\) is the time average defined by

$$\begin{aligned} \langle \,f(t) \rangle = \frac{1}{T} \int _0^T f(t) dt \;. \end{aligned}$$
(74)

For an extended source, the total signal \(u_n(t)\) at each microphone is modeled as

$$\begin{aligned} u_n(t) = \int C_n(x_s) S(x_s,t) d^3x_s + E_n(t) \end{aligned}$$
(75)

where \(E_n(t)\) is the background noise. In the inverse problem framework, the signal \(u_n(t)\) acquired by each microphone is the known quantity, while the source function \(S(x_s,t)\) is the unknown to be determined. The signal acquired by each microphone is thus represented both by measured data and by the theoretical model (75), and the goal of beamforming is to estimate the function \(S(x_s,t)\) from (75). Collecting all the \(u_n(t)\), \(C_n(x_s)\), and \(E_n(t)\) in vectors of dimension N, \(\vec {u}(t)\), \(\vec {C}(x_s)\), and \(\vec {E}(t)\), it is possible to write Eq. (75) in vector notation as

$$\begin{aligned} \vec {u}(t) = \int \vec {C}(x_s) S(x_s,t) d^3x_s + \vec {E}(t) \; . \end{aligned}$$
(76)

At this point, a very important tool is introduced: the set of microphone weight vectors \(\vec {w}(x_b)\), defined for every potential source position of interest. The task assigned to these vectors is to delay the signal of each microphone of the array in order to “steer” the array towards the supposed direction of arrival of the signal. For this reason, they are also called “steering vectors.” By definition, they are normalized, i.e.,

$$\begin{aligned} \vec {w}^{\dag }(x_b)\vec {w}(x_b)=1 \;. \end{aligned}$$
(77)

A “beamforming value” related to the measured signal \(\vec {u}(t)\) and the point \(x_b\) is computed using the steering vectors. Namely, the scalar product between the signal \(\vec {u}(t)\) and the steering vector \(\vec {w}(x_b)\) is computed, in order to obtain the version of \(\vec {u}(t)\) “steered” towards the direction \(x_b\). From this product, the beamforming value of interest is obtained and plotted for every potential point. In mathematical terms,

$$\begin{aligned} b(x_b) = \langle |\vec {w}^{\dag }(x_b) \vec {u}(t)|^2 \rangle \end{aligned}$$
(78)

Exploiting Eqs. (73) and (77), it can be shown that, from a theoretical point of view, for a direction \(x_b\) containing an actual source, \(b(x_b)\) is equal to the time-averaged square source strength q multiplied by the sum of the squares of the propagation factors, namely

$$\begin{aligned} b(x_b) = q \; \vec {C}^{\dag }(x_b)\vec {C}(x_b) = q \sum _{n=1}^N |C_n(x_b)|^2 \end{aligned}$$
(79)

The expression \(q \; |C_n(x_b)|^2\) can be thought of as the time-averaged square pressure experienced by a generic microphone n due to the signal produced by the source at \(x_b\), because it is equal to the time-averaged square source strength q modified by the path traveled before reaching microphone n. Therefore, Eq. (79) shows that, for an actual source at \(x_b\), \(b(x_b)\) is equal to the sum over the array of these time-averaged square pressures due to the presence of the source at \(x_b\). Equation (78) can be expanded, obtaining

$$\begin{aligned} b(x_b) = \langle |\vec {w}^{\dag }(x_b) \vec {u}(t)|^2 \rangle = \langle \vec {w}^{\dag }(x_b) \vec {u}(t) \vec {u}^{\dag }(t) \vec {w}(x_b) \rangle \end{aligned}$$
(80)

The product \(\langle \vec {u}(t) \vec {u}^{\dag }(t) \rangle\) is the cross-spectral matrix (CSM), which is, in the frequency domain, the analogue of the cross-correlation in the time domain. One finally obtains

$$\begin{aligned} b(x_b) = \vec {w}^{\dag }(x_b) \; CSM \; \vec {w}(x_b) \end{aligned}$$
(81)

which is the operative equation of beamforming. The goal of the simplest version of beamforming is to find the set of weight vectors \(\vec {w}(x_b)\) that yields the maximum value of \(b(x_b)\) for a specific direction \(x_b\), because this would mean that an actual source is present at \(x_b\). In other words, it is necessary to find the set of \(\vec {w}(x_b)\) able to “steer” the array towards the correct direction. As stated before, the CSM is obtained by time-averaging the product between the data vector collected by the microphones and its conjugate transpose. Therefore, the last unknowns left to compute Eq. (81) are the steering vectors \(\vec {w}(x_b)\), for which an explicit form has not yet been given. This is the last missing piece of the theoretical formulation of the problem. Actually, from Eqs. (76) and (77), the choice of the explicit form of the steering vectors is straightforward. Indeed, since maximizing \(b(x_b)\) amounts to maximizing the expression

$$\begin{aligned} \vec {w}^{\dag }(x_b) \vec {u}(t) = \int \vec {w}^{\dag }(x_b) \vec {C}(x_s) S(x_s,t) d^3x_s + \vec {w}^{\dag }(x_b) \vec {E}(t) \end{aligned}$$
(82)

it follows that the scalar product \(\vec {w}^{\dag }(x_b) \vec {C}(x_s)\) must be maximized, and this happens when \(\vec {w}(x_b)\) and \(\vec {C}(x_s)\) are parallel. Given this condition and the normalization constraint of Eq. (77), it is straightforward to choose the following form for the steering vectors

$$\begin{aligned} \vec {w}(x_b) = \frac{\vec {C}(x_b)}{||\vec {C}(x_b)||} \end{aligned}$$
(83)

Given this expression for the steering vectors, the task of the conventional beamforming procedure is to compute the CSM from the data acquired by the microphones and then to find the direction \(x_b\) that maximizes \(b(x_b)\) in Eq. (81). Of course, the functioning of the most advanced algorithms extends far beyond this simple mathematical formulation, as fully shown in the “Mathematical Formulations of Beamforming Algorithms” section. However, this basic treatment is useful both because it is the basis of the more advanced algorithms and because it explains the fundamental idea behind beamforming, from which the current research field originated.
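
A minimal numerical sketch of this conventional procedure is reported below. It assumes a free-field monopole model for the propagation factor of Eq. (72), estimates the CSM at a single frequency from a set of snapshots, and scans a planar grid with the steering vectors of Eq. (83); the array geometry, analysis frequency, and source position are illustrative assumptions, not values taken from the text, and dedicated packages such as Acoular [3] implement the same steps in a far more complete form.

```python
import numpy as np

rng = np.random.default_rng(0)
c, f = 343.0, 2000.0                       # speed of sound [m/s], analysis frequency [Hz]
k = 2 * np.pi * f / c                      # wavenumber

# Illustrative geometry: irregular planar array at z = 0, scan grid at z = 1 m
N = 32
mics = np.column_stack([rng.uniform(-0.3, 0.3, N),
                        rng.uniform(-0.3, 0.3, N),
                        np.zeros(N)])
gx = np.linspace(-0.5, 0.5, 41)
grid = np.array([[x, y, 1.0] for y in gx for x in gx])            # scan points x_s

def propagation(points):
    """Propagation factors C_n(x_s) of Eq. (72), here an assumed free-field
    monopole: phase delay exp(-i k D) and 1/D attenuation over distance D."""
    D = np.linalg.norm(points[:, None, :] - mics[None, :, :], axis=2)   # (n_points, N)
    return np.exp(-1j * k * D) / D

# Synthetic measurement: one source at (0.2, -0.1, 1.0) m plus background noise E_n
C_src = propagation(np.array([[0.2, -0.1, 1.0]]))[0]              # (N,)
S = rng.normal(size=200) + 1j * rng.normal(size=200)              # source snapshots
U = S[:, None] * C_src[None, :]                                   # u_n, Eq. (76)
U += 0.05 * (rng.normal(size=U.shape) + 1j * rng.normal(size=U.shape))

# Cross-spectral matrix: snapshot average of u u^dagger
CSM = U.T @ U.conj() / U.shape[0]                                 # (N, N)

# Steering vectors of Eq. (83), normalized as in Eq. (77), and map of Eq. (81)
C_grid = propagation(grid)                                        # (n_points, N)
W = C_grid / np.linalg.norm(C_grid, axis=1, keepdims=True)
b = np.einsum("gn,nm,gm->g", W.conj(), CSM, W).real               # b(x_b) on the grid

print("estimated source position:", grid[np.argmax(b)])           # ~ (0.2, -0.1, 1.0)
```

The position of the maximum of \(b(x_b)\) over the grid gives the estimated source location, which in this simple synthetic case coincides with the simulated source position.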

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Licitra, G., Artuso, F., Bernardini, M. et al. Acoustic Beamforming Algorithms and Their Applications in Environmental Noise. Curr Pollution Rep 9, 486–509 (2023). https://doi.org/10.1007/s40726-023-00264-9
