A DR algorithm based on artificial potential field method
Incorporating a player entity's motion regularity into the DR (dead reckoning) algorithm can improve its prediction accuracy in MMOGs (massively multiplayer online games); this paper proposes a novel DR algorithm to that end. First, an artificial potential field model of the player entities is created; then the acceleration of each player entity is computed as a weighted combination of the acceleration produced by the potential field force and the acceleration reckoned by the traditional DR algorithm. The weight is calculated with a Q-learning algorithm. Experiments show that the method improves prediction accuracy and reduces network traffic.
Keywords: MMOG, Dead reckoning algorithm, Artificial potential field, Q-Learning
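As a rough illustration of the approach summarized in the abstract, the Python sketch below blends the potential-field acceleration with the traditional DR acceleration and selects the blending weight with a small tabular Q-learning agent. All names, the discrete weight set, the single-state formulation, and the reward definition are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def blended_acceleration(a_dr, a_apf, w):
    """Weighted combination of the DR-extrapolated acceleration (a_dr)
    and the acceleration implied by the artificial potential field force (a_apf)."""
    return w * a_apf + (1.0 - w) * a_dr

class WeightQLearner:
    """Minimal single-state tabular Q-learner whose actions are candidate
    blending weights (a simplifying assumption, not the paper's formulation)."""

    def __init__(self, weights=(0.0, 0.25, 0.5, 0.75, 1.0),
                 alpha=0.1, gamma=0.9, epsilon=0.1, seed=None):
        self.weights = weights
        self.q = np.zeros(len(weights))   # one Q-value per candidate weight
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def choose(self):
        """Epsilon-greedy selection of a weight index."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.weights)))
        return int(np.argmax(self.q))

    def update(self, action, reward):
        """Standard Q-learning update; the reward could be the negative
        prediction error, so more accurate extrapolation is reinforced."""
        target = reward + self.gamma * np.max(self.q)
        self.q[action] += self.alpha * (target - self.q[action])
```

A caller might pick `w = learner.weights[learner.choose()]` at each update interval, extrapolate with `blended_acceleration`, and feed the negative prediction error back through `learner.update`; how the paper defines the state space and reward is not reproduced here.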
This work was supported by a grant from the Natural Science Foundation of Liaoning Province of China (20052007) and the Foundation of Liaoning Educational Committee (2004D116).
Special thanks to the colleagues in our lab who helped us with the experimental design.