The Journal of Supercomputing, Volume 75, Issue 3, pp 1026–1037

Efficient image-based analysis of fruit surfaces using CCD cameras and smartphones

  • J. A. Álvarez-Bermejo
  • D. P. Morales-Santos
  • E. Castillo-Morales
  • L. Parrilla
  • J. A. López-Ramos


Today’s smartphones make a broad variety of sensors (gyroscope, magnetometer, camera, accelerometer, GPS, etc.) readily available and easily accessible through different APIs, favouring the acquisition of data. An everyday use is the measurement of physical parameters such as sound or acceleration. Advances in the level of integration, the reduced power consumption of embedded devices, and the wide adoption of systems on chip and, more recently, multiprocessor systems on chip mean that a new class of applications can be addressed: applications backed by powerful computing devices that depend on batteries. Using these resource-limited devices and their parallelism efficiently is a challenging task. To fully exploit the potential of this hardware, parallelism has to be carefully applied to the most resource-demanding parts of the application. This paper proposes an efficient image composition method for analyzing fruit surfaces using CCD cameras and smartphones. The image composition is done by capturing video from which redundant frames are discarded using a data-parallel local feature detector. Relevant frames are then stitched using direct methods. The proposal was tested on the case of calculating the damaged surface of cherries.
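The redundant-frame filtering step described above can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's implementation: the paper uses a data-parallel local feature detector, whereas here a simple mean-absolute-difference score between grayscale frames stands in for that detector, and the `threshold` value is an arbitrary assumption.

```python
def frame_difference(a, b):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    total = sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    pixels = len(a) * len(a[0])
    return total / pixels

def select_keyframes(frames, threshold=10.0):
    """Keep the first frame, then keep only frames that differ enough
    from the last kept frame; near-duplicate frames are discarded."""
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        if frame_difference(kept[-1], frame) >= threshold:
            kept.append(frame)
    return kept

# Toy frames (2x2 grayscale): two duplicates, one tiny change, one large change.
f0 = [[0, 0], [0, 0]]
f1 = [[0, 0], [0, 0]]        # identical -> discarded
f2 = [[0, 1], [0, 0]]        # barely changed -> discarded
f3 = [[50, 50], [50, 50]]    # large change -> kept for stitching
print(len(select_keyframes([f0, f1, f2, f3])))  # 2
```

Only the surviving keyframes would then be passed to the stitching stage, which in the paper aligns them with direct (intensity-based) methods rather than feature matching.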


Image processing · Homography · Direct methods



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Informatics, Universidad de Almería, Almería, Spain
  2. Department of Electronics, Universidad de Granada, Granada, Spain
  3. Department of Mathematics, Universidad de Almería, Almería, Spain
