Abstract
In the previous chapter, we developed a model for the separation, or decoupling, of m additive images from a temporal sequence of the sum of these images, where the images translate with distinct velocities (including zero velocity). We presented a solution to the problem for the case m = 2, i.e. the figure/ground scenario, and demonstrated it on a number of images. In this chapter, we adapt the additive model to handle situations where the images are not additive but are instead formed by the superposition of one or more occluding objects on an occluded background. That is, the approach is modified to effect a model-free segmentation of objects undergoing translatory fronto-parallel motion in dynamic image sequences. As in the additive case, object velocities of one pixel per frame are sufficient to achieve segmentation.
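To make the additive starting point concrete, the following is a minimal NumPy sketch of the m = 2 separation described above, under simplifying assumptions not taken from the text: the two component velocities are known integers and translation is circular (periodic). In the Fourier domain each frame is a sum of the two component spectra multiplied by velocity-dependent phase factors, so two frames give a 2x2 linear system per spatial frequency. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
a = rng.random((N, N))   # "figure" image (hypothetical test data)
b = rng.random((N, N))   # "ground" image
va, vb = (1, 0), (0, 0)  # one pixel per frame vs. stationary

def shift(img, v, t):
    # circular translation of img by velocity v over t frames
    return np.roll(img, (v[0] * t, v[1] * t), axis=(0, 1))

# two frames of the observed summed sequence
frames = [shift(a, va, t) + shift(b, vb, t) for t in (0, 1)]
F0, F1 = np.fft.fft2(frames[0]), np.fft.fft2(frames[1])

# per-frequency phase factors induced by each translation:
# S_t(k) = A(k) * pa(k)**t + B(k) * pb(k)**t
ky = np.fft.fftfreq(N)[:, None]
kx = np.fft.fftfreq(N)[None, :]
pa = np.exp(-2j * np.pi * (ky * va[0] + kx * va[1]))
pb = np.exp(-2j * np.pi * (ky * vb[0] + kx * vb[1]))

# solve F0 = A + B, F1 = pa*A + pb*B at each frequency;
# frequencies where pa == pb are unresolvable and left at zero
den = pb - pa
ok = np.abs(den) > 1e-9
den_safe = np.where(ok, den, 1.0)
A = np.where(ok, (pb * F0 - F1) / den_safe, 0.0)
B = np.where(ok, (F1 - pa * F0) / den_safe, 0.0)
```

At every resolvable frequency, A and B match the spectra of the original components exactly; the degenerate frequencies (here, those with zero vertical frequency, where the two phase factors coincide) cannot be apportioned from two frames alone.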
Copyright information
© 2001 Springer Science+Business Media New York
Cite this chapter
Vernon, D. (2001). Monocular Vision — Segmentation in Occluding Images. In: Fourier Vision. The Springer International Series in Engineering and Computer Science, vol 623. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-1413-8_4
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-5541-0
Online ISBN: 978-1-4615-1413-8