
An educational Arduino robot for visual Deep Learning experiments

Regular Paper

Abstract

Deep Learning methods are gaining popularity in both academia and industry, yet affordable educational platforms on which students can run Deep Learning experiments remain scarce. In this paper, we present an Arduino-based mobile robot platform for educational experiments in visual Deep Learning. The robot builds on Arduino open-source hardware and supports several programming interfaces, including C/C++, Python, and Matlab. An attached Android mobile phone captures images and video streams, so visual Deep Learning models such as DNNs and CNNs can be examined and practiced directly on the robot.
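
To make the workflow concrete, the following minimal Python sketch illustrates the kind of experiment such a platform supports: it reads frames from the attached phone's camera stream, classifies each frame with a small pre-trained CNN (MobileNet via Keras), and sends a motor command to the Arduino over a serial link. This is a sketch under stated assumptions, not the platform's actual interface: the serial port, the phone's MJPEG stream URL, and the one-byte motor protocol are all hypothetical.

    # Minimal sketch (assumptions marked): classify phone-camera frames
    # with a CNN and steer the robot over a serial link to the Arduino.
    import cv2                    # pip install opencv-python
    import numpy as np
    import serial                 # pip install pyserial
    from tensorflow.keras.applications import MobileNet
    from tensorflow.keras.applications.mobilenet import (
        preprocess_input, decode_predictions)

    PORT = "/dev/ttyUSB0"                       # assumed Arduino serial port
    STREAM = "http://192.168.0.10:8080/video"   # assumed phone MJPEG stream URL

    arduino = serial.Serial(PORT, 9600, timeout=1)
    model = MobileNet(weights="imagenet")       # lightweight CNN for mobile use
    cap = cv2.VideoCapture(STREAM)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize to the network's 224x224 input, convert BGR -> RGB,
        # add a batch dimension, and run one forward pass.
        x = cv2.resize(frame, (224, 224))[:, :, ::-1].astype(np.float32)
        x = preprocess_input(x[np.newaxis])
        label = decode_predictions(model.predict(x), top=1)[0][0][1]
        # Hypothetical one-byte motor protocol: 'F' = forward, 'S' = stop.
        arduino.write(b"S" if label == "street_sign" else b"F")

A student could swap MobileNet for any other Keras model to compare recognition accuracy and latency on the same hardware, which is the kind of hands-on comparison an educational platform like this is meant to enable.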

Keywords

Arduino robot · Deep Learning · DNN · CNN

Notes

Acknowledgements

This research was partially supported by NSFC under contract numbers 61472428 and U1711261.

Compliance with ethical standards

Conflict of interest

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements) or non-financial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. School of Information and DEKE, MOE, Renmin University of China, Beijing, China
