Automatic façade recovery from single nighttime image
Nighttime images are difficult to process because of insufficient brightness, heavy noise, and a lack of detail, so they are usually discarded from time-lapse image analysis. Yet nighttime images exhibit unique building features: robust, salient lighting cues produced by human activity. Lighting variation reflects both statistical and individual habitation patterns, and it inherits the man-made repetitive structure described by architectural theory. Inspired by this, we propose an automatic nighttime façade recovery method that exploits the lattice structure of window lighting. First, a simple but efficient classification method determines the salient bright regions, which are likely lit windows. We then group the windows into multiple per-façade lattice proposals by patch matching, and greedily remove overlapping lattices. Using the horizon constraint, we resolve ambiguous proposals and obtain the correct façade orientation. Finally, we complete the generated façades by filling in the missing windows. The method is well suited to urban environments; its results can serve as a single-view complement to daytime images and as semantic input to learning-based 3D reconstruction techniques. Experiments show that our method performs well on nighttime image datasets, achieving a lattice detection rate of 82.1% on 82 challenging images with a mean orientation error of 12.1 ± 4.5 degrees.
Keywords: façade recovery, nighttime images, lattice detection
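The first step of the pipeline above, classifying salient bright regions that may be lit windows, can be sketched as follows. This is an illustrative stand-in, not the paper's actual classifier: it uses a simple global threshold plus connected-component filtering by area, and the `thresh`, `min_area`, and `max_area` values are assumptions chosen for the example.

```python
import numpy as np

def bright_window_candidates(gray, thresh=0.8, min_area=4, max_area=400):
    """Label salient bright regions in a [0, 1] grayscale nighttime image.

    A simplified stand-in for the paper's bright-region classification:
    global thresholding followed by 4-connected component labeling, then
    filtering components whose pixel area is plausible for a lit window.
    All parameter values here are illustrative assumptions.
    """
    mask = gray >= thresh
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    # Label 4-connected components with an explicit stack (no SciPy needed).
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                labels[i, j] = current
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    # Keep only components whose area is plausible for a lit window,
    # rejecting single-pixel noise and very large bright blobs.
    keep = [k for k in range(1, current + 1)
            if min_area <= (labels == k).sum() <= max_area]
    return labels, keep
```

In the full method, the surviving candidates would then be grouped into lattice proposals by patch matching; here the sketch stops at candidate extraction.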
We gratefully acknowledge the proofreading of Dr. Yuehua Wang. This work was supported by the National High-tech R&D Program (2015AA016403), the National Natural Science Foundation of China (Grant Nos. 61572061, 61472020, 61502020), and the China Postdoctoral Science Foundation (2013M540039).