An Abstraction for Correspondence Search Using Task-Based Controls

  • Conference paper
  • Part of: Computer Vision - ACCV 2014 Workshops (ACCV 2014)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9009)

Abstract

The correspondence problem (finding matching regions in images) is a fundamental task in computer vision. While the concept is simple, feature detectors and descriptors have grown more complex as they provide more efficient and higher-quality correspondences. This complexity is a barrier to developers and system designers who wish to use computer vision correspondence techniques within their applications. We have designed a novel abstraction layer that uses a task-based description (covering the conditions of the problem) to let a user communicate their requirements for the correspondence search. The description is based mainly on the idea of variances, which express how sets of images vary in blur, intensity, angle, etc. Our framework interprets the description and chooses, from a set of algorithms, those that satisfy it. Our proof-of-concept implementation demonstrates the link between the description set by the user and the result returned. The abstraction also operates at a high enough level to hide implementation and device details, allowing simple use of hardware acceleration.
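
To make the task-based control concrete, the following is a minimal sketch, assuming a hypothetical C++ interface: the names VarianceLevel, TaskDescription and selectAlgorithm are invented here for illustration and are not the framework's actual API. The user states the expected variances between the images, and a selection routine maps that description to a candidate correspondence algorithm.

    // Minimal sketch of a task-based control for correspondence search.
    // All identifiers are hypothetical; the algorithm mapping is illustrative only.
    #include <iostream>
    #include <string>

    // Coarse, user-facing statement of how the images are expected to vary.
    enum class VarianceLevel { None, Low, High };

    // The task description: the user states the conditions of the problem
    // rather than naming a particular detector or descriptor.
    struct TaskDescription {
        VarianceLevel blur      = VarianceLevel::None;
        VarianceLevel intensity = VarianceLevel::None;
        VarianceLevel angle     = VarianceLevel::None;  // viewpoint / rotation change
        VarianceLevel scale     = VarianceLevel::None;
    };

    // The framework interprets the description and selects, from its set of
    // algorithms, one that satisfies it (this mapping is not a definitive ranking).
    std::string selectAlgorithm(const TaskDescription& task) {
        if (task.angle == VarianceLevel::High || task.scale == VarianceLevel::High)
            return "SIFT";   // tolerant of large rotation and scale differences
        if (task.blur == VarianceLevel::High)
            return "SURF";   // example trade-off for blurred inputs
        return "ORB";        // fast choice when the stated variances are modest
    }

    int main() {
        TaskDescription task;
        task.angle = VarianceLevel::High;  // e.g. images taken from different viewpoints
        std::cout << "Selected algorithm: " << selectAlgorithm(task) << '\n';
        return 0;
    }

The key property of such an abstraction is that the user only edits the task description; which detector and descriptor actually run underneath, and whether they are hardware accelerated, remains a decision of the framework.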

Notes

  1. http://www.khronos.org/vision
  2. http://openvidia.sourceforge.net
  3. https://developer.apple.com/technologies/mac/graphics-and-animation.html
  4. http://www.shapelogic.org
  5. http://www.mathworks.com/products/computer-vision
  6. http://gandalf-library.sourceforge.net
  7. http://computer-vision-talks.com/articles/2011-08-19-feature-descriptor-comparison-report/
  8. http://lear.inrialpes.fr/people/mikolajczyk/Database/det_eval.html

Acknowledgements

We would like to gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Graphics, Animation and New Media Network of Centres of Excellence (GRAND NCE).

Author information

Corresponding author

Correspondence to Gregor Miller.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Miller, G., Fels, S. (2015). An Abstraction for Correspondence Search Using Task-Based Controls. In: Jawahar, C., Shan, S. (eds) Computer Vision - ACCV 2014 Workshops. ACCV 2014. Lecture Notes in Computer Science, vol 9009. Springer, Cham. https://doi.org/10.1007/978-3-319-16631-5_17

  • DOI: https://doi.org/10.1007/978-3-319-16631-5_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16630-8

  • Online ISBN: 978-3-319-16631-5

  • eBook Packages: Computer Science, Computer Science (R0)
