There is a little book with the title “What do you believe, but cannot prove?” (Brockman 2006). In it, the editors compile the answers to that question given by 50 of the greatest thinkers alive. The editors did not solicit my answer, but if they had, it might have been “I believe but cannot prove that Artificial Intelligence (AI) and statistics are mostly the same; and when they are not, the differences are within the natural variations occurring within each field.” Fortunately, AI and statistics have already been compared and contrasted thoroughly. As a result, there is a large body of knowledge that can be employed to create a proof for my claim. However, I also believe that the accumulated body of knowledge is actually more interesting and fruitful than the original question itself, i.e., whether AI and statistics are the same. In fact, I would claim that it does not matter at all whether AI and statistics are, or are not, the same. One characterization that is probably reasonable is that the difference between AI techniques and traditional statistics is one of degree, not of kind. In other words, one may argue that most techniques belong to a continuum, with different techniques having different degrees of “AI-ness” or “statistics-ness.” By the way, in this chapter AI is synonymous with machine learning.

Certainly, however, any problem dealing with data is inherently statistical. As such, the tools and concepts traditionally developed in statistical circles should not be ignored. The concepts of a random variable and a probability distribution (or histogram), or the difference between a population and a sample, are all extremely important in the analysis and modeling of data. For instance, the results of a study that ignores the difference between a sample and a population are apt to be unreliable (or ungeneralizable), because they are based on a single sample taken from a larger population and therefore need not pertain to the latter. The question of whether one can generalize results from the sample to the population is again one that statistics is designed to address. Alas, some AI-related studies ignore such issues.
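
To make the sample-versus-population distinction concrete, the following minimal sketch (not from the original chapter; the particular population, sample size, and random seed are assumptions chosen purely for illustration) repeatedly draws small samples from a known population and records how much one sample statistic, the sample mean, varies from draw to draw:

```python
import numpy as np

# Hypothetical "population": 100,000 values from a skewed distribution.
# The distribution, its parameters, the sample size, and the seed are
# illustrative assumptions, not quantities taken from this chapter.
rng = np.random.default_rng(0)
population = rng.gamma(shape=2.0, scale=3.0, size=100_000)
true_mean = population.mean()          # the population quantity of interest

# Repeatedly draw small samples and record each sample mean.
sample_size = 30
n_repeats = 1_000
sample_means = np.array([
    rng.choice(population, size=sample_size, replace=False).mean()
    for _ in range(n_repeats)
])

# A histogram of sample_means approximates the sampling distribution of the
# mean; its spread shows why conclusions based on a single sample need not
# pertain to the population.
print(f"population mean              : {true_mean:.3f}")
print(f"average of the sample means  : {sample_means.mean():.3f}")
print(f"std. dev. of the sample means: {sample_means.std():.3f}")
```

The spread of these sample means is exactly what a study built on a single sample never observes, and it is the kind of variability that resampling tools such as the bootstrap (see the references below) are designed to estimate.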

References

  • Bishop, C. M. (1996). Neural networks for pattern recognition (482 pp.). Oxford: Clarendon Press.
  • Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123–140.
  • Brockman, J. (2006). What we believe but cannot prove: Today's leading thinkers on science in the age of certainty. New York: HarperCollins.
  • Cherkassky, V., & Mulier, F. (1998). Learning from data: Concepts, theory, and methods. New York: Wiley.
  • Cortes, C., & Vapnik, V. (1995). Support vector networks. Machine Learning, 20, 273–297.
  • Devore, J., & Farnum, N. (2005). Applied statistics for engineers and scientists. Belmont, CA: Thomson Learning.
  • Draper, N. R., & Smith, H. (1998). Applied regression analysis. New York: Wiley.
  • Efron, B. (1983). Estimating the error rate of a prediction rule: Improvement on cross-validation. Journal of the American Statistical Association, 78, 316–331.
  • Efron, B., & Tibshirani, R. J. (1997). Improvements on cross-validation: The .632+ bootstrap method. Journal of the American Statistical Association, 92, 548–560.
  • Efron, B., & Tibshirani, R. J. (1998). An introduction to the bootstrap. London: Chapman & Hall.
  • Frank, E., Hall, M., & Pfahringer, B. (2003). Locally weighted naive Bayes. Proceedings of the Conference on Uncertainty in Artificial Intelligence (pp. 249–256). Acapulco: Morgan Kaufmann.
  • Fu, W. J., Carroll, R. J., & Wang, S. (2005). Estimating misclassification error with small samples via bootstrap cross-validation. Bioinformatics, 21, 1979–1986.
  • Goutte, C. (1997). Note on free lunches and cross-validation. Neural Computation, 9, 1211–1215.
  • Hall, P., & Maiti, T. (2006). Nonparametric estimation of mean-squared prediction error in nested-error regression models. Annals of Statistics, 34, 1733–1750.
  • Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning. Springer Series in Statistics. New York: Springer.
  • Haupt, R., & Haupt, S. E. (1998). Practical genetic algorithms. New York: Wiley.
  • Hjorth, J. S. (1999). Computer intensive statistical methods: Validation, model selection, and bootstrap. Boca Raton, FL: CRC Press.
  • Jiang, L., Zhang, H., & Su, J. (2005). Learning k-nearest neighbor naive Bayes for ranking. Proceedings of the First International Conference on Advanced Data Mining and Applications (ADMA 2005). Springer.
  • Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 2(12), 1137–1143. San Mateo, CA: Morgan Kaufmann.
  • Kuk, A. (1989). Double bootstrap estimation of variance under systematic sampling with probability proportional to size. Journal of Statistical Computation and Simulation, 31, 73–82.
  • MacKay, D. J. C. (1996). Bayesian methods for back-propagation networks. In E. Domany, J. L. van Hemmen, & K. Schulten (Eds.), Models of neural networks III (p. 309). New York: Springer. Physics of Neural Networks series.
  • Marzban, C. (1997). Local minima and bootstrapping. Available at http://faculty.washington.edu/marzban/local.pdf
  • Marzban, C. (2000). A neural network for tornado diagnosis. Neural Computing and Applications, 9(2), 133–141.
  • Marzban, C. (2004). Neural network short course. Annual Meeting of the American Meteorological Society, Seattle, WA. Available at http://faculty.washington.edu/marzban/short_course.html
  • Marzban, C., & Haupt, S. E. (2005). On genetic algorithms and discrete performance measures. Fourth Conference on Artificial Intelligence Applications to Environmental Science. New York: American Meteorological Society.
  • Masters, T. (1993). Practical neural network recipes in C++ (493 pp.). San Diego, CA: Academic Press.
  • Masters, T. (1995). Advanced algorithms for neural networks: A C++ sourcebook (431 pp.). New York: Wiley.
  • Rao, J. S., & Tibshirani, R. (1997). The out-of-bootstrap method for model averaging and selection. Technical report, Statistics Department, Stanford University. Available at http://www-stat.stanford.edu/tibs/ftp/outofbootstrap.ps
  • Richard, M. D., & Lippmann, R. P. (1991). Neural network classifiers estimate Bayesian a posteriori probabilities. Neural Computation, 3, 461–483.
  • Shao, J. (1993). Linear model selection via cross-validation. Journal of the American Statistical Association, 88(422), 486–494.
  • Shao, J. (1996). Bootstrap model selection. Journal of the American Statistical Association, 91(434), 655–665.
  • Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36, 111–147.
  • Tian, L., Cai, T., Goetghebeur, E., & Wei, L. J. (2007). Model evaluation based on the sampling distribution of estimated absolute prediction error. Biometrika, 94(2), 297–311. doi:10.1093/biomet/asm036
  • Zhang, P. (1992). On the distributional properties of model selection criteria. Journal of the American Statistical Association, 87(419), 732–737.
  • Zucchini, W. (2000). An introduction to model selection. Journal of Mathematical Psychology, 44, 41–61.

Author information

Correspondence to Caren Marzban.

Copyright information

© 2009 Springer Science+Business Media B.V.

About this chapter

Cite this chapter

Marzban, C. (2009). Basic Statistics and Basic AI: Neural Networks. In: Haupt, S.E., Pasini, A., Marzban, C. (eds) Artificial Intelligence Methods in the Environmental Sciences. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-9119-3_2
