
LabNet: An Image Repository for Virtual Science Laboratories

  • Ifeoluwatayo A. Ige
  • Bolanle F. Oladejo
Conference paper
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 558)

Abstract

There has been considerable recent research on image and shape storage and retrieval, and several large image/shape repositories and databases exist in the literature. However, their content is largely generic: most hold English-labelled images of the everyday world and are not populated with any particular field of interest in mind, so they are unlikely to provide sufficient coverage of images and shapes from specific domains such as high school science. Hence, we develop 'LabNet', an image repository containing images for high school science subjects and laboratory courses. We use the Canny algorithm to detect object edges in crawled images and then apply morphological operations to segment and extract the object images. The extracted object images have no background and can therefore be used for scene modelling and synthesis. LabNet can also support high school science-oriented research and serve as an educational tool for elementary science classes and laboratory exercises.
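
The extraction pipeline summarised above (Canny edge detection followed by morphological operations to cut an object from its background) can be illustrated with a minimal Python/OpenCV sketch. The Canny thresholds, the 5×5 closing kernel, and the use of the largest contour as the object mask are assumptions made for illustration, not details taken from the paper.

    import cv2
    import numpy as np

    def extract_object(image_path, low_threshold=100, high_threshold=200):
        # Read the crawled image and convert it to greyscale for edge detection.
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Canny edge detection (threshold values are illustrative assumptions).
        edges = cv2.Canny(gray, low_threshold, high_threshold)

        # Morphological closing (dilation then erosion) joins broken edge
        # fragments into a solid outline of the object.
        kernel = np.ones((5, 5), np.uint8)
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

        # Take the largest external contour as the object mask
        # (assumption: one dominant object per crawled image).
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(gray)
        cv2.drawContours(mask, [max(contours, key=cv2.contourArea)], -1, 255, cv2.FILLED)

        # Use the mask as an alpha channel so the background becomes transparent.
        rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
        rgba[:, :, 3] = mask
        return rgba

    # Example: extract the object and save it with a transparent background.
    cv2.imwrite("object.png", extract_object("crawled_image.jpg"))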

Keywords

Canny edge detection · Morphological operations · Image segmentation · Shape repository · Science laboratory

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. Computer Science Department, University of Ibadan, Ibadan, Nigeria