Don’t Compare Averages
We point out that for two sets of measurements, the average of one set can be larger than the average of the other on one scale, yet smaller after a non-linear monotone transformation of the individual measurements. We show that the inclusion of error bars is no safeguard against this phenomenon. We give a theorem, however, that limits the amount of “reversal” that can occur; as a by-product we obtain two non-standard one-sided tail estimates for arbitrary random variables, which may be of independent interest. Our findings suggest that in the not infrequent situation where more than one cost measure makes sense, there is no alternative but to explicitly compare the averages for each of them, contrary to common practice.
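The reversal described above is easy to reproduce with concrete numbers. The following minimal Python sketch (the data are made up for illustration, not taken from the paper) shows two sets whose order of averages flips under the monotone transformation log:

```python
import math

# Two hypothetical sets of measurements (e.g., running times).
a = [1.0, 100.0]
b = [50.0, 50.0]

def mean(xs):
    return sum(xs) / len(xs)

# On the original scale, a has the larger average: 50.5 vs. 50.0.
print(mean(a), mean(b))

# After the (non-linear, monotone) transformation log, the order
# reverses: mean(log a) = ln(10) ~ 2.30 < ln(50) ~ 3.91 = mean(log b).
log_a = [math.log(x) for x in a]
log_b = [math.log(x) for x in b]
print(mean(log_a), mean(log_b))
```

Intuitively, the log compresses the single large value 100 in the first set far more than it compresses the balanced values in the second, which is why averaging and a non-linear transformation do not commute.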
Keywords: Cost Measure · Monotone Transformation · Hierarchical Dirichlet Process · Neural Information Processing Systems · Statistical Natural Language Processing