The widely used Neyman-Pearson theory of hypothesis testing rests on the particularly dangerous assumption that one of the hypotheses tested is the true data-generating distribution. The theory therefore does not account for the effect of having to fit the hypotheses as models to data, and hence whatever is deduced from it must have a fundamental defect. The second fundamental flaw is that the theory offers no rational, quantified way to assess the confidence in the test result arrived at. The common test decides between the favored null hypothesis and either a single opposing hypothesis or, more generally, one of an uncountable number of them, in the case of the so-called composite hypothesis. The null hypothesis is abandoned only when the data fall in the so-called critical region, whose probability under the null hypothesis is less than a chosen level of the test, such as the commonly used value 0.05. While there is clearly considerable confidence in rejecting the null hypothesis when the data fall far in the tails, because such data are unlikely to be typical under the null hypothesis, we have little confidence in accepting it when the data fall outside the critical region. What, then, is the point of drawing a sharp boundary based only on such obvious and vague considerations? After all, an epsilon variation in the data can swing the decision one way or the other. This single unwarranted act has undoubtedly caused wrong real-life decisions with literally life-and-death implications.
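The sensitivity to an epsilon variation in the data can be made concrete with a minimal sketch. The example below is hypothetical and not from the text: a two-sided z-test with known variance, where the sample mean is placed just inside the acceptance boundary at level 0.05, so that a tiny perturbation of the data flips the decision from "accept" to "reject".

```python
import math

def z_test_pvalue(xbar, mu0, sigma, n):
    """Two-sided p-value for a z-test of H0: mean = mu0,
    assuming the standard deviation sigma is known (an
    illustrative simplification)."""
    z = (xbar - mu0) * math.sqrt(n) / sigma
    # P(|Z| > |z|) for standard normal Z, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05
n, mu0, sigma = 100, 0.0, 1.0

# Place the sample mean just inside the acceptance region:
# the critical |z| for a two-sided level-0.05 test is about 1.95996.
xbar = 1.9599 * sigma / math.sqrt(n)
eps = 0.0002  # an epsilon-sized perturbation of the sample mean

p_before = z_test_pvalue(xbar, mu0, sigma, n)        # > 0.05: accept H0
p_after = z_test_pvalue(xbar + eps, mu0, sigma, n)   # < 0.05: reject H0
print(p_before, p_after)
```

A perturbation of two parts in ten thousand of one standard deviation reverses the verdict, which is precisely the arbitrariness of the sharp boundary that the text objects to.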





Copyright information

© Springer Science+Business Media, LLC 2007
