
Visual Pose Estimation Based on the DenseNet Network

Conference paper

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 593)

Abstract

In this paper, an end-to-end neural network model based on DenseNet is designed to estimate the pose of a camera. The image frames captured by the camera, together with the corresponding camera position (3-dimensional space coordinates) and orientation (quaternion), are the inputs to the network model during training. The network learns the spatial structure information and the higher-level features in the images, so that it finally outputs a 7-dimensional vector representing the camera position (3-dimensional space coordinates) and orientation (quaternion). A pose-estimation constraint imposed on the network guarantees the training effect of the model and improves its pose-estimation ability. The trained model is validated on the StMarysChurch dataset. The experimental results show that the network model achieves good accuracy and a shorter training time.
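The description above maps naturally onto a small regression network: a DenseNet backbone whose classifier is replaced by a 7-dimensional head (3-D position plus a unit quaternion), trained with a position-plus-orientation loss in the style of PoseNet. The sketch below is an illustrative assumption in PyTorch, not the authors' implementation; the backbone choice (DenseNet-121), the quaternion ordering, and the weighting constant `beta` are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of a DenseNet-based
# camera-pose regression network with a PoseNet-style loss.
import torch
import torch.nn as nn
from torchvision import models


class DensePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # DenseNet-121 backbone; the final classifier is replaced by a
        # 7-dimensional regression head: [x, y, z, qw, qx, qy, qz].
        self.backbone = models.densenet121()
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, 7)

    def forward(self, images):
        out = self.backbone(images)
        position, quaternion = out[:, :3], out[:, 3:]
        # Normalize the quaternion so it represents a valid rotation.
        quaternion = quaternion / quaternion.norm(dim=1, keepdim=True)
        return position, quaternion


def pose_loss(pred_pos, pred_q, gt_pos, gt_q, beta=500.0):
    """Position error plus weighted orientation error.

    beta is a hypothetical weighting constant; the constraint used in the
    paper may be formulated differently.
    """
    pos_err = torch.norm(pred_pos - gt_pos, dim=1).mean()
    gt_q_unit = gt_q / gt_q.norm(dim=1, keepdim=True)
    ori_err = torch.norm(pred_q - gt_q_unit, dim=1).mean()
    return pos_err + beta * ori_err
```

In use, a batch of RGB frames of shape (N, 3, H, W) would be fed to `DensePoseNet`, and `pose_loss` would compare the predicted 7-dimensional poses against the ground-truth poses from a dataset such as StMarysChurch.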



Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant numbers 61520106010 and 61741302).

Author information

Corresponding author

Correspondence to Lixin Lu.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lu, L., Zhang, W. (2020). Visual Pose Estimation Based on the DenseNet Network. In: Jia, Y., Du, J., Zhang, W. (eds) Proceedings of 2019 Chinese Intelligent Systems Conference. CISC 2019. Lecture Notes in Electrical Engineering, vol 593. Springer, Singapore. https://doi.org/10.1007/978-981-32-9686-2_18

