Bio-Inspired Perception Sensor (BIPS) Concept Analysis for Embedded Applications
The Bio-inspired Perception Sensor (BIPS) is a small, low-power, bio-inspired on-chip device that has been used in various computer vision applications (traffic analysis, driving assistance, object tracking). It has caught the attention of the embedded vision community, since its specifications could help overcome the time, size, weight, and energy bottlenecks that still limit the development of computer vision systems. For a long time, the lack of mathematical and algorithmic models of the component prevented it from spreading through the research community. However, the recent formalization of the BIPS basic functions and mechanisms has made it possible to develop numerical models and simulators, in order to better evaluate the advantages and limitations of the concept. In this paper, we experimentally address the generalization capability of the BIPS concept by evaluating it on the road lane detection application. This allows us to illustrate how its parameters can be adapted to a specific vision task. The approach automatically instantiates the main parameters, which stabilizes the system output and improves its performance. The obtained results reach the level of the Caltech Lanes reference.
Keywords: Bio-inspired · Embedded vision · Road lane detection
We would like to thank P. Pirim for his help in understanding the BIPS component and its extended possibilities, and for sharing his valuable knowledge.
- 1. Aly, M.: Real time detection of lane markers in urban streets. In: 2008 IEEE Intelligent Vehicles Symposium, pp. 7–12, June 2008
- 3. Borkar, A., Hayes, M., Smith, M.T.: Robust lane detection and tracking with RANSAC and Kalman filter. In: 2009 16th IEEE International Conference on Image Processing (ICIP), pp. 3261–3264, November 2009
- 4. Ehsan, S., McDonald-Maier, K.D.: On-board vision processing for small UAVs: time to rethink strategy. In: NASA/ESA Conference on Adaptive Hardware and Systems, pp. 75–81, July 2009
- 5. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361. IEEE Computer Society, Washington, DC, June 2012
- 6. Hubel, D.H.: Eye, Brain, and Vision (Scientific American Library, No. 22), 2nd edn. W. H. Freeman, New York (1995)
- 9. Nieto, M., Salgado, L., Jaureguizar, F., Arrospide, J.: Robust multiple lane road modeling based on perspective analysis. In: IEEE International Conference on Image Processing, pp. 2396–2399, October 2008
- 10. Ota, K., Dao, M.S., Mezaris, V., de Natale, F.G.B.: Deep learning for mobile multimedia: a survey. ACM Trans. Multimed. Comput. Commun. Appl. 13, 34:1–34:22 (2017)
- 11. Pirim, P.: Processeur de perception bio-inspiré : une approche neuromorphique. Techniques de l'ingénieur - Innovations en électronique et optoélectronique (May 2015)
- 12. Pirim, P.: Perceptive invariance and associative memory between perception and semantic representation USER a Universal SEmantic Representation implemented in a System on Chip (SoC). In: Lepora, N.F., Mura, A., Mangan, M., Verschure, P.F.M.J., Desmulliez, M., Prescott, T.J. (eds.) Living Machines 2016. LNCS (LNAI), vol. 9793, pp. 275–287. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42417-0_25
- 14. Sarrabezolles, L., Manzanera, A., Hueber, N., Perrot, M., Raymond, P.: Dual field combination for unmanned video surveillance. In: SPIE Defense and Commercial Sensing, Real-Time Image and Video Processing, vol. 10223. International Society for Optics and Photonics, Anaheim, May 2017