Visual sensor networks for infomobility

Abstract

The wide availability of embedded sensor platforms and low-cost cameras, together with the developments in wireless communication, now makes it possible to conceive pervasive intelligent systems based on vision. Such systems may be understood as distributed and collaborative sensor networks, able to produce, aggregate, and process images in order to understand the observed scene and communicate the relevant information found about it. In this paper, we investigate the peculiarities of visual sensor networks with respect to standard vision systems, and we identify possible strategies for accomplishing image processing and analysis tasks over them. Although the rather strong constraints on the computational and transmission power of embedded platforms may prevent the use of state-of-the-art computer vision and pattern recognition methods, we argue that multi-node processing methods may be envisaged to decompose a complex task into a hierarchy of computationally simpler problems to be solved over the nodes of the network. These ideas are illustrated by describing an application of visual sensor networks to infomobility. In particular, we consider an experimental setting in which several views of a parking lot are acquired by the sensor nodes in the network. By integrating the various views, the network is capable of providing a description of the scene in terms of the available spaces in the parking lot.
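The parking-lot scenario described in the abstract can be sketched in a few lines: each sensor node runs a lightweight change-detection test against a reference background image, restricted to per-space regions of interest, and a sink node fuses the per-view reports by majority vote. This is only an illustrative sketch of such a multi-node decomposition, not the authors' implementation; the ROI layout, the mean-difference threshold, and the voting rule are all assumptions.

```python
import numpy as np

def node_occupancy(frame, background, rois, threshold=25.0):
    """On each sensor node: flag a parking space as occupied when the mean
    absolute difference between the current frame and a reference background
    exceeds a threshold inside the space's region of interest (ROI)."""
    report = {}
    for space_id, (r0, r1, c0, c1) in rois.items():
        diff = np.abs(frame[r0:r1, c0:c1].astype(float)
                      - background[r0:r1, c0:c1].astype(float))
        report[space_id] = bool(diff.mean() > threshold)
    return report

def aggregate(reports):
    """At the sink: majority vote over the nodes observing each space."""
    votes = {}
    for report in reports:
        for space_id, flag in report.items():
            votes.setdefault(space_id, []).append(flag)
    return {s: sum(v) > len(v) / 2 for s, v in votes.items()}

# Toy example: two 8x8 grayscale views of the same two spaces (hypothetical ROIs).
bg = np.zeros((8, 8))
rois = {"A1": (0, 4, 0, 4), "A2": (4, 8, 4, 8)}
view1 = bg.copy(); view1[0:4, 0:4] = 200.0   # a car in space A1, seen by node 1
view2 = bg.copy(); view2[0:4, 0:4] = 180.0   # the same car, seen by node 2
status = aggregate([node_occupancy(view1, bg, rois),
                    node_occupancy(view2, bg, rois)])
# status maps each space to its fused occupancy: A1 occupied, A2 free
```

Per-node thresholding keeps the image data on the node: only a few bits per space travel over the wireless link, which matches the bandwidth constraints discussed in the paper.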

Author information

Correspondence to M. Magrini.

Additional information

The article is published in the original.

Massimo Magrini received his Laurea degree in Computer Science from Pisa University in 1994. Specialized in musical/audio signal processing, he has collaborated with various research institutes, participating in several European projects as a system designer and developer. He has also worked for private companies as a consultant, planning and developing digital audio devices now very popular in the audio market. He is also active as an electronic music/new media artist, realizing works in all media, sold in thousands of copies worldwide, and presenting his live performances across Europe. He is now working as a technologist in the Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa, mainly developing innovative gestural interfaces for creative and rehabilitative purposes.

Paolo Pagano received his M.S. degree in Physics in 1999 from Trieste University (I). In 2003 he received his Ph.D. degree in High Energy Physics from Trieste University, having worked for the COMPASS collaboration at CERN (CH). In 2004 he was hired by HISKP at Bonn University (D). In 2006 he received a Master in Computer Science from Scuola Superiore Sant'Anna in Pisa (I). In the same year he joined the Real-Time Systems (RETIS) laboratory of the Scuola as an associate researcher to work in the domain of Wireless Sensor Networks. In 2009 he joined the CNIT (National Inter-University Consortium for Telecommunications) and was assigned to the Sant'Anna Research Unit as leader of the Real-Time Networks team at the RETIS lab.

Christian Nastasi was born in 1982. He received his B.D. in Computer and Telecommunication Engineering from the University of Messina in 2005 and his M.D. in Computer Engineering from the University of Pisa in 2008. Since November 2008 he has been a PhD student at the RetisLab, Scuola Superiore Sant'Anna, in Pisa, with a research project on Distributed Vision Systems for Wireless Sensor Networks. He has published 4 papers to date.

Ovidio Salvetti. Director of research at the Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa. Working in the field of theoretical and applied computer vision. His fields of research are image analysis and understanding, pictorial information systems, spatial modeling, and intelligent processes in computer vision. Coauthor of four books and monographs and more than 300 technical and scientific articles, with ten patents regarding systems and software tools for image processing. Has served as a scientific coordinator of several national and European research and industrial projects, in collaboration with Italian and foreign research groups, in the fields of computer vision and high-performance computing for diagnostic imaging. Member of the editorial boards of the international journals Pattern Recognition and Image Analysis and G. Ronchi Foundation Acts. Currently the CNR contact person in ERCIM (the European Research Consortium for Informatics and Mathematics) for the Working Group on Image and Video Understanding and a member of IEEE and of the steering committee of a number of EU projects. Head of the ISTI Signals and Images Laboratory.

Gabriele Pieri (Pescia, 1974) received his M.Sc. (2000) in Computer Science from the University of Pisa and joined the "Signals and Images" Laboratory at ISTI-CNR, Pisa, in 2001, working in the field of image analysis. His main interests include neural networks, machine learning, industrial diagnostics, and medical imaging. He is the author of more than twenty papers.

Claudio Salvadori was born in Arezzo, Italy, in 1978. He received the Laurea degree in Telecommunication Engineering in 2006 from the Università degli Studi di Siena, Italy. Since November 2009 he has been a PhD student in Wireless Sensor Networks at Scuola Superiore Sant'Anna, Pisa, Italy.

Matteo Petracca was born in San Giovanni Rotondo, Italy, in 1979. He received the Laurea degree in Telecommunication Engineering in 2003 and the Ph.D. degree in Information and System Engineering in 2007, both from the Politecnico di Torino, Turin, Italy. From January 2008 to November 2009 he was a post-doc researcher at the Politecnico di Torino working on multimedia processing and transmission over wired and wireless packet networks. He is currently a post-doc researcher in Real-Time Networks at Scuola Superiore Sant’Anna in Pisa.

About this article

Cite this article

Magrini, M., Moroni, D., Nastasi, C. et al. Visual sensor networks for infomobility. Pattern Recognit. Image Anal. 21, 20–29 (2011). https://doi.org/10.1134/S1054661811010093

Keywords

  • image mining
  • sensor networks
  • infomobility
  • object detection
  • change detection