
A decentralised approach to scene completion using distributed feature hashgram

  • R. Talat
  • M. Muzammal
  • R. Shan

Abstract

Scene completion is the automated reconstruction of missing image regions in a visually plausible way. Typically, semantically valid images are retrieved by pair-wise comparison and a completion candidate is then selected. The primary challenge in scene completion is the computational cost of pair-wise comparisons, which grows quadratically with the number of images. A further challenge is the large number of incoming completion requests that must otherwise be handled by a centralised server. In this work, we propose a decentralised scene completion system using a distributed feature hashgram. The system comprises two principal components: (i) a deep signature-based decentralised image retrieval component that retrieves semantically valid images by way of signature comparison, and (ii) a fog computing enabled scene completion algorithm that finds optimal patches from the most suitable retrieved image to fill in the missing parts using a graph-cut technique. A detailed experimental study on the LabelMe dataset is performed to evaluate the quality of the solution. A final challenge in scene completion is the absence of ground truth; we therefore propose a method to evaluate image completion without it. The results demonstrate the novelty of the system and the applicability of the solution to large image data repositories.
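To illustrate why signature comparison sidesteps the quadratic cost of all-pairs matching, the following sketch shows a generic random-hyperplane signature index: each image's feature vector is hashed to a short bit signature, and retrieval only inspects the matching bucket. This is a minimal illustration under assumed inputs (dense feature vectors such as GIST descriptors), not the paper's deep signature or hashgram construction; all names here (`SignatureIndex`, `signature`) are hypothetical.

```python
import random
from collections import defaultdict

def make_hyperplanes(dim, bits, seed=0):
    # Random Gaussian hyperplanes; each one contributes one signature bit.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def signature(features, planes):
    # One bit per hyperplane: which side of the plane the vector falls on.
    bits = 0
    for plane in planes:
        dot = sum(f * p for f, p in zip(features, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

class SignatureIndex:
    """Bucket images by bit signature. A query touches only one bucket,
    so retrieval cost no longer grows with all pairwise comparisons."""

    def __init__(self, dim, bits=16, seed=0):
        self.planes = make_hyperplanes(dim, bits, seed)
        self.buckets = defaultdict(list)

    def add(self, image_id, features):
        self.buckets[signature(features, self.planes)].append(image_id)

    def candidates(self, features):
        # Semantically similar vectors tend to share a signature.
        return self.buckets.get(signature(features, self.planes), [])
```

For example, indexing a vector and a positively scaled copy of it places both in the same bucket (scaling does not change the side of any hyperplane), while the negated vector lands elsewhere; in the decentralised setting, each node would hold only its own buckets and answer signature lookups locally.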

Keywords

Decentralised scene completion · Fog computing · Image synthesis


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science, Bahria University, Islamabad, Pakistan
  2. Department of Computer Science, COMSATS University, Abbottabad Campus, Pakistan
