Ball Localization for RoboCup Soccer Using Convolutional Neural Networks

  • Daniel Speck
  • Pablo Barros
  • Cornelius Weber
  • Stefan Wermter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9776)


In RoboCup soccer, ball localization is an important and challenging task, especially since a recent rule change that allows 50% of the ball's surface to be of any color or pattern while the rest must remain white. Multi-colored balls have color histograms and patterns that change depending on their current orientation and movement. This paper presents a neural approach that uses a convolutional neural network (CNN) to localize the ball in various scenes. CNNs have been used in many image recognition tasks, in particular because of their ability to learn invariances in images. In this work we use a CNN to locate the ball by training two output layers, representing the x- and y-coordinates, with normal distributions fitted around the ball. The network therefore not only locates the ball's position but also provides an estimate of the noise. The architecture processes the whole image at full size; no sliding-window approach is used.
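The training targets described above, per-axis normal distributions centered on the ball coordinate, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the image dimensions, and the width parameter `sigma` are assumptions chosen for the example.

```python
import numpy as np

def gaussian_target(size, center, sigma=5.0):
    """Build a 1D target vector of length `size` with a normal
    distribution fitted around `center` (the ball coordinate)."""
    xs = np.arange(size)
    t = np.exp(-0.5 * ((xs - center) / sigma) ** 2)
    return t / t.sum()  # normalize to a probability distribution

# Hypothetical example: a 200x150 image with the ball at (x=120, y=40).
# One target vector per output layer (x-axis and y-axis).
target_x = gaussian_target(200, 120)
target_y = gaussian_target(150, 40)

# At inference time, the predicted coordinate can be read off as the
# argmax of each output distribution, and the spread of the
# distribution serves as a noise estimate.
pred_x = int(np.argmax(target_x))
pred_y = int(np.argmax(target_y))
```

Encoding each coordinate as a distribution rather than a single scalar is what lets the network express uncertainty: a sharply peaked output indicates a confident localization, a flat one indicates noise.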


Keywords: RoboCup · Convolutional neural network · Deep learning · TensorFlow · Ball detection · Ball localization · Noise filtering



We would like to thank Stefan Heinrich for reviewing this paper, the Hamburg Bit-Bots (especially Nils Rokita and Fabian Fiedler) for assistance in working with the robots and giving feedback, and Nathan Lintz for constructive discussions about TensorFlow, which was used to build our architecture. This work was carried out in collaboration with the TRR 169 "Crossmodal Learning", funded by the DFG, and partially supported by CAPES, the Brazilian Federal Agency for the Support and Evaluation of Graduate Education (p.n. 5951135).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Daniel Speck¹ (email author)
  • Pablo Barros¹
  • Cornelius Weber¹
  • Stefan Wermter¹

  1. Department of Informatics, Knowledge Technology (WTM), Hamburg Bit-Bots, University of Hamburg, Hamburg, Germany
