Abstract
This report investigates how to track an object by matching points between pairs of consecutive video frames. The approach is especially relevant to object tracking in close-view video shots, as encountered for example in the Pan-Tilt-Zoom (PTZ) camera autotracking problem. In contrast to many earlier related works, we consider that the matching metric of a point should be adapted to the signal observed in its spatial neighborhood, and we introduce a cost-benefit framework to control this adaptation with respect to the global target displacement estimation objective. The proposed framework thus explicitly handles the trade-off between the complexity of the point-level matching metric and the contribution this metric brings to solving the target tracking problem. Consequently, and in contrast with the common assumption that only specific points of interest should be investigated, our framework makes no a priori assumption about which points the tracking process should consider or ignore. Instead, it states that any point may help in estimating the target displacement, provided that its matching metric is well adapted. Measuring the contribution reliability of a point as the probability that it leads to a correct matching decision, we define a global criterion for successful target matching. It is then possible to minimize the probability of incorrect matching over the set of possible (point, metric) combinations and to find the optimal aggregation strategy. Our preliminary results demonstrate both the effectiveness and the efficiency of our approach.
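The abstract's cost-benefit selection over (point, metric) combinations can be illustrated with a minimal sketch. The following is a hypothetical greedy, knapsack-style allocation under a computation budget, not the authors' actual algorithm: each candidate pairs a point with a matching metric, carrying an assumed cost and reliability (probability of a correct match), and candidates are picked by reliability-per-cost ratio, at most one metric per point. All names, metrics, and numbers are illustrative assumptions.

```python
# Hypothetical sketch of the cost-benefit (point, metric) selection idea:
# pick the candidates with the best reliability-per-cost ratio within a
# computation budget, keeping at most one metric per point.
from dataclasses import dataclass


@dataclass
class Candidate:
    point_id: int       # index of the image point
    metric: str         # matching metric adapted to the local signal
    cost: float         # computational cost of evaluating this metric
    reliability: float  # estimated probability of a correct match


def select_candidates(candidates, budget):
    """Greedy knapsack-style selection: one metric per point, within budget."""
    chosen, used_points, spent = [], set(), 0.0
    # Visit candidates in order of reliability gained per unit of cost.
    for c in sorted(candidates, key=lambda c: c.reliability / c.cost, reverse=True):
        if c.point_id not in used_points and spent + c.cost <= budget:
            chosen.append(c)
            used_points.add(c.point_id)
            spent += c.cost
    return chosen


if __name__ == "__main__":
    pool = [
        Candidate(0, "sad_3x3", cost=1.0, reliability=0.6),
        Candidate(0, "ncc_7x7", cost=4.0, reliability=0.9),
        Candidate(1, "sad_3x3", cost=1.0, reliability=0.8),
        Candidate(2, "ncc_7x7", cost=4.0, reliability=0.7),
    ]
    picked = select_candidates(pool, budget=5.0)
    print([(c.point_id, c.metric) for c in picked])
```

A greedy ratio rule is only a heuristic for this knapsack-like problem; the paper instead derives the selection from a global criterion minimizing the probability of incorrect target matching.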
Part of this work has been funded by the European FP7 SV3D project and by the Belgian NSF.
The original version of this chapter was revised: The copyright line was incorrect. This has been corrected. The Erratum to this chapter is available at DOI: 10.1007/978-3-319-02895-8_64
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
De Neyer, Q., De Vleeschouwer, C. (2013). A Resource Allocation Framework for Adaptive Selection of Point Matching Strategies. In: Blanc-Talon, J., Kasinski, A., Philips, W., Popescu, D., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2013. Lecture Notes in Computer Science, vol 8192. Springer, Cham. https://doi.org/10.1007/978-3-319-02895-8_33
Print ISBN: 978-3-319-02894-1
Online ISBN: 978-3-319-02895-8