
Task Scheduling in Large Camera Networks

  • Ser-Nam Lim
  • Larry Davis
  • Anurag Mittal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)

Abstract

Camera networks are increasingly being deployed for security. In most of these networks, video sequences are captured, transmitted, and archived continuously from all cameras, placing enormous stress on available transmission bandwidth, storage space, and computing facilities. We describe an intelligent control system for scheduling Pan-Tilt-Zoom cameras to capture video only when task-specific requirements can be satisfied. These videos are collected in real time during predicted temporal “windows of opportunity”. We present a scalable algorithm that constructs schedules in which multiple tasks can be satisfied simultaneously by a given camera. We describe two scheduling algorithms: a greedy algorithm and one based on Dynamic Programming (DP). We analyze their approximation factors and present simulations showing that the DP method achieves better task coverage for large camera networks. Results from a prototype real-time active camera system, however, reveal that the greedy algorithm runs faster than the DP algorithm, making it more suitable for a real-time system. The prototype system, built using existing low-level vision algorithms, also illustrates the applicability of our algorithms.
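The abstract contrasts a greedy scheduler with a DP scheduler over predicted temporal windows. The paper's actual formulation (multi-camera, multi-task schedules with approximation guarantees) is not reproduced here; the sketch below only illustrates, for a single camera, the kind of interval-scheduling trade-off being described: a greedy pass that commits to the earliest-finishing window versus a DP pass that maximizes total task value. The Task fields and both function names are assumptions made for this illustration, not the authors' notation.

    from bisect import bisect_right
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Task:
        # Hypothetical stand-in for a surveillance task with a predicted
        # "window of opportunity" [start, end) on one camera and a coverage value.
        name: str
        start: float
        end: float
        value: float = 1.0

    def greedy_schedule(tasks: List[Task]) -> List[Task]:
        # Greedy sketch: repeatedly take the task whose window ends earliest
        # among those that do not overlap the previously chosen window.
        chosen: List[Task] = []
        last_end = float("-inf")
        for t in sorted(tasks, key=lambda t: t.end):
            if t.start >= last_end:
                chosen.append(t)
                last_end = t.end
        return chosen

    def dp_schedule(tasks: List[Task]) -> float:
        # DP sketch (weighted interval scheduling): best[i] is the maximum total
        # value achievable using the first i tasks in order of increasing end time.
        tasks = sorted(tasks, key=lambda t: t.end)
        ends = [t.end for t in tasks]
        best = [0.0] * (len(tasks) + 1)
        for i, t in enumerate(tasks, start=1):
            j = bisect_right(ends, t.start, 0, i - 1)  # last window compatible with t
            best[i] = max(best[i - 1], best[j] + t.value)
        return best[-1]

    if __name__ == "__main__":
        windows = [Task("face-capture", 0.0, 4.0, 2.0),
                   Task("gait-capture", 3.0, 5.0, 1.0),
                   Task("plate-read", 5.0, 7.0, 1.0)]
        print([t.name for t in greedy_schedule(windows)])  # ['face-capture', 'plate-read']
        print(dp_schedule(windows))                        # 3.0

The paper's comparison is richer (multiple cameras, simultaneous task satisfaction, approximation factors), but the same coverage-versus-speed tension is already visible in these two routines: the DP pass weighs all compatible combinations, while the greedy pass commits immediately.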

Keywords

Scheduling Problem, Dynamic Programming, Source Node, Greedy Algorithm, Directed Acyclic Graph



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ser-Nam Lim (1)
  • Larry Davis (2)
  • Anurag Mittal (3)
  1. Cognex Corp., Natick, MA, USA
  2. CS Dept., University of Maryland, College Park, Maryland, USA
  3. CSE Dept., IIT Madras, India
