Detection and analysis of objects in a frame or a sequence of frames (video) can be used to solve a number of problems in various fields, including fire behaviour and risk. A quantitative understanding of short-distance spotting dynamics, namely the firebrand density distribution within a distance from the fire front and how distinct fires coalesce in a highly turbulent environment, is still lacking. To address this, custom software was developed to detect the location and number of flying firebrands in a thermal image and then to determine the temperature and size of each firebrand. The software consists of two modules, a detector and a tracker. The detector determines the locations of the firebrands in a frame, and the tracker compares firebrands across frames and assigns an identification number to each firebrand. Comparison of the calculated results with data obtained by independent experts and with experimental data showed that the maximum relative error does not exceed 12% for low and medium numbers of firebrands in the frame (fewer than 30), and the software agrees well with experimental observations for firebrands larger than 20 × 10−5 m. It was found that fireline intensity below 12,590 kW m−1 does not significantly change the 2D firebrand flux for firebrands larger than 20 × 10−5 m, while occasional crowning can increase the firebrand flux several-fold. The developed software allowed us to analyse the thermograms obtained during the field experiments and to measure the velocities, sizes and temperatures of the firebrands. It will help to better understand how firebrands can ignite surrounding fuel beds and could be an important tool in investigating fire propagation in communities.
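The detector/tracker pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the thermal frame is a 2D temperature array, detects firebrands as connected regions above a fixed temperature threshold, and assigns identification numbers by greedy nearest-neighbour matching of centroids between consecutive frames. The function names, the threshold, and the matching radius `max_dist` are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def detect_firebrands(thermal, threshold):
    """Detector sketch: return centroids (row, col) and pixel areas of
    connected regions hotter than `threshold` in a 2D thermal frame."""
    mask = thermal > threshold
    labels, n = ndimage.label(mask)
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(mask, labels, idx)
    areas = ndimage.sum(mask, labels, idx)
    return list(centroids), list(areas)


def track_firebrands(prev, centroids, max_dist, next_id):
    """Tracker sketch: match current-frame centroids to previous-frame
    IDs by nearest neighbour within `max_dist` pixels; unmatched
    detections get a fresh ID. Returns ({id: centroid}, next_id)."""
    curr, unmatched = {}, dict(prev)
    for c in centroids:
        best, best_d = None, max_dist
        for i, p in unmatched.items():
            d = np.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            curr[best] = c
            del unmatched[best]
        else:
            curr[next_id] = c
            next_id += 1
    return curr, next_id
```

A real pipeline would also convert blob area to physical size via the camera geometry and use the per-pixel radiometric temperature rather than a single global threshold; this sketch only captures the detect-then-associate structure the abstract describes.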
Keywords: Wildland and structural firebrands · Firebrand detection · Firebrand tracking
This work was supported by the Russian Foundation for Basic Research (Project #18-07-00548), the Tomsk State University Academic D.I. Mendeleev Fund Program and the Bushfire and Natural Hazard Cooperative Research Centre.