Bio-Inspired Perception Sensor (BIPS) Concept Analysis for Embedded Applications

  • Louise Sarrabezolles
  • Antoine Manzanera
  • Nicolas Hueber
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11401)

Abstract

The Bio-inspired Perception Sensor (BIPS) is a small, low-power, bio-inspired on-chip device that has been used in several computer vision applications (traffic analysis, driving assistance, object tracking). It has attracted the attention of the embedded vision community, since its specifications could help overcome the time, size, weight, and energy bottlenecks that still limit the development of computer vision systems. For a long time, the lack of mathematical and algorithmic models of the component prevented it from spreading through the research community. The recent formalization of the basic functions and mechanisms of the BIPS, however, has made it possible to develop numerical models and simulators with which the advantages and limitations of the concept can be evaluated. In this paper, we experimentally address the generalization capability of the BIPS concept by evaluating it on the road lane detection task, illustrating how its parameters can be adapted to a specific vision application. This approach makes it possible to instantiate the main parameters automatically, which stabilizes the system output and improves its performance. The results obtained reach the level of the Caltech Lanes reference.

Keywords

Bio-inspired · Embedded vision · Road lane detection

Notes

Acknowledgement

We would like to thank P. Pirim for his help in the understanding of the BIPS component and its extended possibilities, and for sharing his valuable knowledge.

References

  1. Aly, M.: Real time detection of lane markers in urban streets. In: 2008 IEEE Intelligent Vehicles Symposium, pp. 7–12, June 2008
  2. Bar Hillel, A., Lerner, R., Levi, D., Raz, G.: Recent progress in road and lane detection: a survey. Mach. Vis. Appl. 25, 727–745 (2014)
  3. Borkar, A., Hayes, M., Smith, M.T.: Robust lane detection and tracking with RANSAC and Kalman filter. In: 2009 16th IEEE International Conference on Image Processing (ICIP), pp. 3261–3264, November 2009
  4. Ehsan, S., McDonald-Maier, K.D.: On-board vision processing for small UAVs: time to rethink strategy. In: NASA/ESA Conference on Adaptive Hardware and Systems, pp. 75–81, July 2009
  5. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361. IEEE Computer Society, Washington, DC, June 2012
  6. Hubel, D.H.: Eye, Brain, and Vision (Scientific American Library, No. 22), 2nd edn. W. H. Freeman, New York (1995)
  7. Jiongjiong, W., et al.: Relationship between ventral stream for object vision and dorsal stream for spatial vision: an fMRI+ERP study. Hum. Brain Mapp. 8(4), 170–181 (1999)
  8. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
  9. Nieto, M., Salgado, L., Jaureguizar, F., Arrospide, J.: Robust multiple lane road modeling based on perspective analysis. In: IEEE International Conference on Image Processing, pp. 2396–2399, October 2008
  10. Ota, K., Dao, M.S., Mezaris, V., de Natale, F.G.B.: Deep learning for mobile multimedia: a survey. ACM Trans. Multimed. Comput. Commun. Appl. 13, 34:1–34:22 (2017)
  11. Pirim, P.: Processeur de perception bio-inspiré : une approche neuromorphique. Techniques de l'ingénieur - Innovations en électronique et optoélectronique (May 2015)
  12. Pirim, P.: Perceptive invariance and associative memory between perception and semantic representation, USER: a Universal SEmantic Representation implemented in a System on Chip (SoC). In: Lepora, N.F., Mura, A., Mangan, M., Verschure, P.F.M.J., Desmulliez, M., Prescott, T.J. (eds.) Living Machines 2016. LNCS (LNAI), vol. 9793, pp. 275–287. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42417-0_25
  13. Bach-y-Rita, P., Tyler, M.E., Kaczmarek, K.A.: Seeing with the brain. Int. J. Hum. Comput. Interact. 15(2), 285–295 (2003)
  14. Sarrabezolles, L., Manzanera, A., Hueber, N., Perrot, M., Raymond, P.: Dual field combination for unmanned video surveillance. In: SPIE Defense and Commercial Sensing, Real-Time Image and Video Processing, vol. 10223. International Society for Optics and Photonics, Anaheim, May 2017

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. French-German Research Institute of Saint-Louis, Saint-Louis, France
  2. ENSTA ParisTech, Palaiseau, France