
A Double Competitive Strategy-Based Learning Automata Algorithm

  • Chong Di
  • Mingda Guo
  • Jinchao Huang
  • Shenghong Li (corresponding author)
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 517)

Abstract

The learning automaton is among the most potent tools in reinforcement learning. The family of estimator algorithms was proposed to improve the convergence rate of learning automata and has achieved significant results. However, estimators perform poorly at estimating actions’ reward probabilities in the initial stage of the learning process, so many rewards are assigned to non-optimal actions, and numerous extra iterations are then required to compensate for these erroneous rewards. To further improve the convergence speed, we propose a new P-model absorbing learning automaton that uses a double competitive strategy to update the action probability vector. The proposed scheme overcomes the drawbacks of the existing updating strategy, and extensive experiments in benchmark environments demonstrate that it performs more effectively than the classic learning automaton \(SE_{RI}\) and the currently fastest learning automaton \(DGCPA^{*}\).
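The abstract does not spell out the double competitive update rule itself, but the estimator family it builds on can be illustrated. Below is a minimal, hypothetical Python sketch of a classic continuous pursuit (reward-inaction) estimator automaton in a stationary P-model environment; the function name and parameters (`lam`, `n_init`) are illustrative assumptions, not the paper's proposed scheme or \(DGCPA^{*}\).

```python
import random

def pursuit_estimator_la(reward_probs, lam=0.05, n_init=10, n_iters=5000, seed=0):
    """Sketch of a continuous pursuit (reward-inaction) estimator automaton.

    reward_probs: the true reward probability of each action, i.e. the
    stationary P-model environment (unknown to the learner in practice).
    Returns the index of the action the automaton settles on and the
    final action probability vector.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    pulls = [0] * r      # times each action has been chosen
    wins = [0] * r       # rewards received per action

    # Initialization: sample every action a few times so the maximum-
    # likelihood estimates d_i = wins[i] / pulls[i] are all defined.
    for a in range(r):
        for _ in range(n_init):
            pulls[a] += 1
            wins[a] += rng.random() < reward_probs[a]

    p = [1.0 / r] * r    # action probability vector, initially uniform
    for _ in range(n_iters):
        a = rng.choices(range(r), weights=p)[0]   # sample an action from p
        beta = rng.random() < reward_probs[a]     # environment feedback
        pulls[a] += 1
        wins[a] += beta
        if beta:
            # Reward-inaction: on reward, pursue the action with the
            # currently best reward-probability estimate.
            m = max(range(r), key=lambda i: wins[i] / pulls[i])
            p = [(1 - lam) * pi + (lam if i == m else 0.0)
                 for i, pi in enumerate(p)]
    return max(range(r), key=lambda i: p[i]), p
```

The sketch shows the weakness the abstract points to: early estimates `wins[i] / pulls[i]` rest on very few samples, so the pursuit step can push probability mass toward a non-optimal action until enough iterations correct the estimates.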

Keywords

Learning automata · Stationary environments · Estimator algorithms · Reinforcement learning


Acknowledgement

This research work is funded by the National Key Research and Development Project of China (2016YFB0801003).

References

  1. Narendra, K.S., Thathachar, M.A.L.: Learning Automata: An Introduction. Courier Corporation (2012)
  2. Wang, Y., et al.: Learning automata based cooperative student-team in tutorial-like system. In: International Conference on Intelligent Computing. Springer, Cham (2014)
  3. Zhao, Y., et al.: A cellular learning automata based algorithm for detecting community structure in complex networks. Neurocomputing 151, 1216–1226 (2015)
  4. Jiang, W.: A new class of optimal learning automata. In: International Conference on Intelligent Computing. Springer, Berlin (2011)
  5. Thathachar, M.A.L., Sastry, P.S.: A new approach to the design of reinforcement schemes for learning automata. IEEE Trans. Syst. Man Cybern. 1, 168–175 (1985)
  6. Thathachar, M.A.L., Sastry, P.S.: Estimator algorithms for learning automata (1986)
  7. Ge, H., et al.: A novel estimator based learning automata algorithm. Appl. Intell. 42(2), 262–275 (2015)
  8. Jiang, W., et al.: A new prospective for learning automata: a machine learning approach. Neurocomputing 188, 319–325 (2016)
  9. Sastry, P.S.: Systems of learning automata: estimator algorithms and applications. Ph.D. thesis, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India (1985)
  10. Thathachar, M.A.L.: Discretized reward-inaction learning automata. J. Cybern. Inf. Sci. 2, 24–29 (1979)
  11. Papadimitriou, G.I., Sklira, M., Pomportsis, A.S.: A new class of ε-optimal learning automata. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 34(1), 246–254 (2004)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Chong Di (1)
  • Mingda Guo (2)
  • Jinchao Huang (1)
  • Shenghong Li (1, corresponding author)
  1. School of Cyber Space Security, Shanghai Jiao Tong University, Shanghai, China
  2. School of Mechanical Design, Manufacture and Automation, Taiyuan University of Technology, Taiyuan, China
