
Object Guided Beam Steering Algorithm for Optical Phased Array (OPA) LIDAR

  • Zhiqing Wang
  • Zhiyu Xiang
  • Eryun Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11935)

Abstract

As a fundamental sensor for autonomous driving, light detection and ranging (LIDAR) has gained increasing attention in recent years. Optical phased array (OPA) LIDAR, a solid-state solution with the advantages of durability and low cost, has been actively researched in both academia and industry. Beam steering is a critical problem in OPA LIDAR, where the beam direction can be controlled by software instantaneously. In this paper, we propose an object guided beam steering algorithm in which beams are allocated according to the objects detected in the current image frame. A series of rules assigns different weights to different regions of the scene. We evaluated the algorithm in a simulated environment, and the experimental results demonstrate the effectiveness of the proposed approach.
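The abstract describes the allocation rule only at a high level. As a purely illustrative sketch of weight-driven beam allocation, not the paper's actual rule set, the Python snippet below splits a fixed per-frame beam budget between detected objects and a coarse background sweep in proportion to per-class weights. The function name, the weight values, and the single-axis azimuth model are all assumptions introduced here for illustration.

```python
import numpy as np

# Hypothetical per-class weights; the paper's actual rules are not
# reproduced here -- these values are illustrative only.
CLASS_WEIGHTS = {"pedestrian": 3.0, "cyclist": 2.5, "car": 2.0, "background": 1.0}

def allocate_beams(detections, total_beams, fov=(-45.0, 45.0)):
    """Split a fixed beam budget across detected objects and background.

    detections: list of (class_name, az_min, az_max) boxes projected onto
    the OPA's azimuth axis, in degrees.
    Returns a sorted array of steering angles for the next frame.
    """
    # Weight each detected region by its class; the remaining weight goes
    # to a coarse uniform sweep of the whole field of view.
    regions = [(CLASS_WEIGHTS.get(c, 1.0), lo, hi) for c, lo, hi in detections]
    regions.append((CLASS_WEIGHTS["background"], fov[0], fov[1]))

    total_w = sum(w for w, _, _ in regions)
    angles = []
    for w, lo, hi in regions:
        # Each region receives beams in proportion to its weight,
        # with at least one beam so no region is skipped entirely.
        n = max(1, int(round(total_beams * w / total_w)))
        angles.extend(np.linspace(lo, hi, n))
    return np.sort(np.asarray(angles[:total_beams]))

# Example: concentrate beams on a pedestrian region while keeping
# sparse uniform coverage of the rest of the field of view.
beams = allocate_beams([("pedestrian", -5.0, 2.0), ("car", 20.0, 35.0)], 64)
print(beams)
```

Denser angular sampling over high-weight regions reflects the stated goal of spending more of the software-steerable beam budget on the objects detected in the current frame.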

Keywords

OPA LIDAR · Beam steering · Object detection · Point cloud segmentation


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
