Adaptive Learning in Cognitive Radio

  • Living reference work entry
  • Handbook of Cognitive Radio

Abstract

Machine learning is a powerful tool that enables cognitive radio users to learn their sensing and transmission strategies from experience. This chapter provides a brief introduction to a variety of machine-learning techniques. The basic setup of machine learning, as well as the dichotomy among learning paradigms, is explained. Supervised, unsupervised, semi-supervised, and reinforcement learning techniques are then briefly discussed, and single-agent learning is extended to the multiagent case. Finally, these machine-learning techniques are applied to various problems in cognitive radio, such as channel selection and routing.
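
As a concrete illustration of how a cognitive radio user might learn a channel-selection strategy from experience, the short sketch below applies a single-state, epsilon-greedy Q-learning (multi-armed bandit) update: each channel is an arm whose idle probability is unknown, and the user gradually estimates the expected transmission success of each channel. This is a minimal hypothetical example, not code from the chapter; the idle probabilities, learning rate, and function names are assumptions chosen only for illustration.

import random

# Hypothetical idle probabilities of the primary users on each channel
# (unknown to the learner; used only to simulate the environment).
IDLE_PROB = [0.2, 0.8, 0.5, 0.6]
NUM_CHANNELS = len(IDLE_PROB)

ALPHA = 0.1      # learning rate for the value update
EPSILON = 0.1    # exploration probability
ROUNDS = 5000

def sense_and_transmit(channel):
    """Return reward 1.0 if the chosen channel happens to be idle, else 0.0."""
    return 1.0 if random.random() < IDLE_PROB[channel] else 0.0

def run():
    q = [0.0] * NUM_CHANNELS   # estimated value (expected reward) per channel
    for _ in range(ROUNDS):
        # Epsilon-greedy action selection: explore occasionally, otherwise exploit.
        if random.random() < EPSILON:
            channel = random.randrange(NUM_CHANNELS)
        else:
            channel = max(range(NUM_CHANNELS), key=lambda c: q[c])
        reward = sense_and_transmit(channel)
        # Stochastic-approximation update of the estimate toward the observed reward.
        q[channel] += ALPHA * (reward - q[channel])
    return q

if __name__ == "__main__":
    estimates = run()
    print("Estimated channel values:", [round(v, 2) for v in estimates])
    print("Preferred channel:", estimates.index(max(estimates)))

With enough rounds, the value estimates concentrate on the channel that is most often idle, which the learner then prefers to exploit while still exploring the other channels occasionally.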

Author information

Corresponding author

Correspondence to Husheng Li.

Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry

Li, H. (2017). Adaptive Learning in Cognitive Radio. In: Zhang, W. (eds) Handbook of Cognitive Radio. Springer, Singapore. https://doi.org/10.1007/978-981-10-1389-8_41-1

  • DOI: https://doi.org/10.1007/978-981-10-1389-8_41-1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-1389-8

  • Online ISBN: 978-981-10-1389-8

  • eBook Packages: Springer Reference Engineering, Reference Module Computer Science and Engineering
