Sensitivity Analysis in a Bayesian Network for Modeling an Agent

Yoko Ishino
Chapter
Part of the Agent-Based Social Systems book series (ABSS, volume 12)

Abstract

Agent-based social simulation (ABSS) has become a popular method for simulating and visualizing a phenomenon while making it possible to decipher the system's dynamics. When a large amount of data on agent behavior is available, such as responses to a questionnaire survey, a Bayesian network learned from those data is often the preferred way to model an agent in ABSS. However, learning an accurate Bayesian network structure from raw data is very difficult because the data contain many variables and the search space is vast. This study proposes a new method for obtaining an appropriate Bayesian network structure by applying sensitivity analysis in a stepwise fashion. The method finds a feature subset that explains the objective variables well without reducing accuracy. A simple Bayesian network structure that maintains accuracy while capturing an agent's behavior gives ABSS users an intuitive understanding of the agent's behavioral principles. To illustrate the effectiveness of the proposed method, data from a questionnaire survey about healthcare electronics were used.
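
To make the stepwise idea concrete, below is a minimal Python sketch of sensitivity-driven backward feature elimination of the kind the abstract describes. It is not the chapter's exact algorithm: the scoring model (scikit-learn's CategoricalNB, standing in for a Bayesian network classifier), the 5-fold cross-validation, the tolerance tol, and all names and data are illustrative assumptions.

# A minimal sketch of stepwise, sensitivity-driven feature selection.
# Illustrative only: the chapter applies sensitivity analysis to a
# Bayesian network; here CategoricalNB stands in as a cheap
# probabilistic scorer for categorical questionnaire data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB

def stepwise_sensitivity_selection(X, y, feature_names, tol=0.005):
    """Backward elimination: repeatedly drop the feature whose removal
    changes cross-validated accuracy the least, as long as accuracy
    stays within `tol` of the full-model baseline."""
    selected = list(range(X.shape[1]))
    baseline = cross_val_score(CategoricalNB(), X, y, cv=5).mean()
    while len(selected) > 1:
        # Measure the sensitivity of accuracy to removing each feature.
        trials = []
        for f in selected:
            remaining = [g for g in selected if g != f]
            acc = cross_val_score(CategoricalNB(), X[:, remaining], y, cv=5).mean()
            trials.append((acc, f))
        best_acc, least_needed = max(trials)   # removal that hurts least
        if baseline - best_acc > tol:          # every removal now costs accuracy
            break
        selected.remove(least_needed)
    return [feature_names[f] for f in selected]

# Hypothetical data: 200 respondents, 8 categorical survey items (0-3),
# and a binary objective variable driven by items q0 and q3.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 8))
y = (X[:, 0] + X[:, 3] > 3).astype(int)
print(stepwise_sensitivity_selection(X, y, [f"q{i}" for i in range(8)]))

In the chapter's setting, the retained feature subset would then define the variables over which the Bayesian network structure is learned, keeping the search space small and the resulting graph simple enough to read as a model of agent behavior.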

Acknowledgments

This work was supported by JSPS KAKENHI Grant Numbers JP26560121 and JP26282087.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

Graduate School of Innovation & Technology Management, Yamaguchi University, Tokiwadai, Ube, Japan
