Statistical decision-making is widely used in the experimental earth sciences. It plays an even more important role in the environmental sciences because the system under observation varies with time and corrective action may become necessary. In a decision-making situation, a set of possible corrective actions, known as the set of decisions, is usually available, along with observations of a number of physical attributes (or variables). The corrective action selected should minimize the damage or cost, or maximize the benefit. Since a benefit is simply a negative cost, scientists and practitioners combine these into a single composite cost criterion to be minimized for a given decision-making problem. A best decision, one that minimizes this composite cost criterion, is called an optimal decision.
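To make this concrete, the sketch below selects an optimal decision as the one minimizing expected composite cost. It is a minimal illustration, not a method taken from this chapter; the cost matrix, the state probabilities, and all variable names are hypothetical.

```python
import numpy as np

# Hypothetical composite-cost matrix: cost[d, s] is the cost of taking
# decision d when the system is in state s. A benefit is entered as a
# negative cost, so a single criterion to minimize suffices.
cost = np.array([
    [0.0, 10.0, 50.0],   # decision 0: take no action
    [5.0,  2.0, 20.0],   # decision 1: mild corrective action
    [9.0,  8.0,  3.0],   # decision 2: strong corrective action
])

# Assumed probabilities of the states, e.g. inferred from the observed
# physical variables.
p_state = np.array([0.6, 0.3, 0.1])

# Expected composite cost of each decision, averaged over the states.
expected_cost = cost @ p_state

# The optimal decision minimizes the expected composite cost.
optimal = int(np.argmin(expected_cost))
print(expected_cost, "-> optimal decision:", optimal)
```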

Collecting the values that the physical variables take in an event is also known as extracting features or making measurements; the variables themselves are variously called features, feature variables, or measurements. Among the many physical variables that might influence the decision, some may be difficult to collect: there may be a cost, risk, or other penalty associated with obtaining them. In other cases, the delay in obtaining a measurement adds to the cost of decision-making, for example through losses incurred because a corrective action could not be implemented sooner. These acquisition costs should be included in the overall cost criterion, so the decision-making process may also involve deciding whether or not to collect particular measurements.
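Continuing the hypothetical sketch above, the worth of a measurement can be weighed against its acquisition cost: collect it only if the expected reduction in decision cost exceeds the total penalty of measuring. The likelihoods and the measurement cost c_meas below are assumptions for illustration only.

```python
import numpy as np

# Same hypothetical setup as the previous sketch.
cost = np.array([[0.0, 10.0, 50.0],
                 [5.0,  2.0, 20.0],
                 [9.0,  8.0,  3.0]])
p_state = np.array([0.6, 0.3, 0.1])

# A candidate measurement y with two outcomes; p_y_given_s[y, s] is an
# assumed likelihood, and c_meas bundles the penalty of collecting y
# (instrument cost, risk, time-delay losses, and so on).
p_y_given_s = np.array([[0.9, 0.4, 0.1],
                        [0.1, 0.6, 0.9]])
c_meas = 1.5

# Expected cost of deciding now, without the measurement.
cost_without = (cost @ p_state).min()

# Expected cost of first observing y and then acting optimally on the
# posterior, averaged over the outcome probabilities p(y).
p_y = p_y_given_s @ p_state
posterior = p_y_given_s * p_state / p_y[:, None]   # Bayes' rule, one row per outcome
cost_with = sum(p_y[y] * (cost @ posterior[y]).min() for y in range(len(p_y)))

# Collect the measurement only if it pays for itself.
print("measure" if cost_with + c_meas < cost_without else "decide now")
```

This comparison is the standard expected-value-of-information calculation: the measurement is collected only when the drop from cost_without to cost_with exceeds c_meas.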

Keywords

Decision tree, leaf node, class label, undirected graph, child node

Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

Department of Computer Science, University of Texas at Dallas, Richardson, Texas, USA
