Abstract
This study considers objective Bayesian testing for linear combinations of the means of several normal populations. We propose solutions to this problem based on a Bayesian model selection procedure that requires no subjective input. We first construct suitable priors for testing linear combinations of means based on measuring the divergence between competing models (the so-called divergence-based, or DB, priors). Next, we derive the intrinsic priors, for which the Bayes factors and model selection probabilities are well defined. Finally, the behavior of the Bayes factors based on the DB priors, the intrinsic priors, and the classical test is compared in a simulation study and an example.
Appendices
Appendix 1: Proof of Theorem 1
Consider model \(M_1\),
and model \(M_2\)
Let \({\varvec{\theta }}=\theta _1\) and \({\varvec{\nu }}=(\theta _2,\ldots ,\theta _{k+1})\). Then, the Kullback–Leibler directed divergence \(KL[({\varvec{\theta }}_0,{\varvec{\nu }}):({\varvec{\theta }},{\varvec{\nu }})]\) is given by
where \(g(\gamma _i)={{\gamma _i/n_i} \over {\sum _{j=1}^k\gamma _j^2/n_j}}\) and \(n=n_1+\cdots +n_k\). Moreover, the Kullback–Leibler directed divergence \(KL[({\varvec{\theta }},{\varvec{\nu }}):({\varvec{\theta }}_0,{\varvec{\nu }})]\) is given by
Therefore, the sum divergence measure is
Further, since \(KL[({\varvec{\theta }}_0,{\varvec{\nu }}):({\varvec{\theta }},{\varvec{\nu }})]\) and \(KL[({\varvec{\theta }},{\varvec{\nu }}):({\varvec{\theta }}_0,{\varvec{\nu }})]\) are the same, the minimum divergence measure is the same as the sum divergence measure. We take the effective sample size \(n^*=n\). Then, the unitary symmetrized divergence is
Now,
if \(q> {1\over 2}\). Thus, the conditional sum-DB prior with \(q_*^S=1\) is given by
This proves Theorem 1.
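The weights \(g(\gamma _i)\) used above satisfy the identity \(\sum _{i=1}^k n_i\,g(\gamma _i)^2=\bigl (\sum _{j=1}^k \gamma _j^2/n_j\bigr )^{-1}\), which is what collapses the per-population quadratic terms in the divergence into a single quadratic in \(\theta _1\). As an illustrative sanity check (the sample sizes and coefficients below are made up, not from the paper), the identity can be verified numerically:

```python
import numpy as np

# Hypothetical group sizes and contrast coefficients (illustrative only).
n = np.array([5.0, 8.0, 12.0, 7.0])
gamma = np.array([1.0, -2.0, 0.5, 3.0])

s = np.sum(gamma**2 / n)      # sum_j gamma_j^2 / n_j
g = (gamma / n) / s           # weights g(gamma_i) as defined in the proof

# Identity: sum_i n_i * g(gamma_i)^2 equals 1 / s
lhs = np.sum(n * g**2)
rhs = 1.0 / s
assert np.isclose(lhs, rhs)
```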
Appendix 2: Proof of Theorem 2
The likelihood function under model \(M_1\) is
where \(Y_{i\cdot }=\sum _{j=1}^{n_i} Y_{ij} /n_i\), \(S_i^2=\sum _{j=1}^{n_i}(Y_{ij}-Y_{i\cdot })^2\), \(i=1,\ldots ,k\), \(T_{11}^2=n_1(Y_{1\cdot }- {{\gamma _1\theta _{10}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}} - \sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i)^2\), and \(T_{1i}^2=n_i(Y_{i\cdot }- {{\gamma _i\theta _{10}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\theta _i)^2\), \(i=2,\ldots ,k\). In addition, under model \(M_1\), the reference prior for \((\theta _2,\ldots ,\theta _{k+1})\) is
Then, from the likelihood (24) and reference prior (25), \(m_1(\mathbf {x})\) is given by
where \(n=n_1+\cdots +n_k\), \(c_1=n_1\), and \(c_i=[c_{i-1}{\gamma _i^2\over \gamma _1^2}+n_i]^{-1}c_{i-1}n_i\), \(i\ge 2\). For model \(M_2\), the reference prior for \((\theta _1,\ldots ,\theta _{k+1})\) is
The likelihood function under model \(M_2\) is
where \(T_{21}^2=n_1(Y_{1\cdot }-{{\gamma _1\theta _{1}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}} - \sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i)^2\) and \(T_{2i}^2=n_i(Y_{i\cdot }- {{\gamma _i\theta _{1}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\theta _i)^2, i=2,\ldots ,k\). Let
Then, from the likelihood (27) and reference prior (26), \(m_2(\mathbf {x})\) is given as follows:
Therefore, \(B_{21}^N\) is given by
where
and
Further, \(\pi ^N(\theta _1\vert \theta _2,\ldots ,\theta _{k+1})=1\). Hence, Theorem 2 is proven. \(\square \)
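Taking reciprocals in the recursion for \(c_i\) above gives \(1/c_i=1/c_{i-1}+\gamma _i^2/(\gamma _1^2 n_i)\), which telescopes to the closed form \(c_k=\gamma _1^2/\sum _{j=1}^k \gamma _j^2/n_j\). A quick numerical check of the telescoped form against the recursion, with illustrative values (not data from the paper):

```python
import numpy as np

# Illustrative sample sizes and coefficients (assumed, not from the paper).
n = [6.0, 9.0, 4.0, 11.0]
gamma = [2.0, -1.0, 3.0, 0.5]

# Recursion from the proof: c_1 = n_1,
# c_i = c_{i-1} n_i / (c_{i-1} gamma_i^2/gamma_1^2 + n_i)
c = n[0]
for i in range(1, len(n)):
    c = c * n[i] / (c * gamma[i]**2 / gamma[0]**2 + n[i])

# Closed form obtained by telescoping the reciprocal recursion
closed = gamma[0]**2 / sum(g**2 / m for g, m in zip(gamma, n))
assert np.isclose(c, closed)
```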
Appendix 3: Proof of Lemma 1
Consider the models \(M_1\),
and \(M_2\),
where \(c_1\) and \(c_2\) are arbitrary positive constants. For the minimal training sample \(z_i(l)\), we have
where \(g(\gamma _i)={{\gamma _i/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}}\) and \( m_2^N (z_i(l))={c_2 |\gamma _1|\over |x_{i1}-x_{i2}|}\). Let \(u=x_{i1}-x_{i2}\) and \(v=x_{i1}+x_{i2}\). Then, by direct integration on \(x_{l1}\) and \((v,u)\), we obtain
Therefore,
Finally, we can similarly derive the intrinsic prior for the other minimal training sample. This proves Lemma 1. \(\square \)
Appendix 4: Proof of Theorem 4
We compute the Bayes factor to compare model \(M_1\) and model \(M_2\) with the intrinsic prior \(\pi _{2i}^I(\theta _2)\). We easily obtain the marginal density \(m_1(\mathbf{x})\) from Theorem 2. Hence, we only compute the marginal density \(m_2(\mathbf{x})\). Now,
Let
By integrating with respect to \(\mu _i, i=1,\ldots ,k\), and \(\tau _i, i=2,\ldots ,k\) in (29), we get
where \(p_1={n_1\over 2n_1(1+{\tau _{k+1}\over \theta _{k+1}})+2}\), \(p_i=[p_{i-1}{\gamma _i^2\over \gamma _1^2}+d_i]^{-1}p_{i-1}d_i\), \(d_i={n_i\over n_i(1+{\tau _{k+1}\over \theta _{k+1}})+2}\), and \(d_{j\ne i}={n_j\over 2n_j(1+{\tau _{k+1}\over \theta _{k+1}})+2}\). Change variables from \((\theta _{k+1},\tau _{k+1})\) to \((\theta _{k+1},\omega )\), where \(\omega =\tau _{k+1}/\theta _{k+1}\). By integrating with respect to \(\theta _{k+1}\) in (30), we get
Finally, the marginal densities based on the other intrinsic priors can be computed in the same way. Hence, Theorem 4 is proven. \(\square \)
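In practice, marginal densities of this form are evaluated by reducing them to a one-dimensional integral over \(\omega \in (0,\infty )\) and integrating numerically. A minimal sketch of that final step, assuming SciPy's adaptive quadrature; the integrand below is a placeholder standing in for the actual one, which depends on the data and on the quantities \(p_i\), \(d_i\) defined in the proof:

```python
import numpy as np
from scipy.integrate import quad

# Placeholder integrand: any smooth, integrable function of omega on (0, inf)
# stands in for the data-dependent integrand arising after integrating out
# theta_{k+1}. Here we use 1/(1 + omega^2), whose integral on (0, inf) is pi/2.
def integrand(omega):
    return 1.0 / (1.0 + omega**2)

value, abs_err = quad(integrand, 0.0, np.inf)  # quad handles infinite limits
assert np.isclose(value, np.pi / 2)
```

The same pattern applies to each intrinsic prior: only the integrand changes, not the quadrature step.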
Lee, W.D., Kang, S.G. & Kim, Y. Objective Bayesian testing for the linear combinations of normal means. Stat Papers 60, 147–172 (2019). https://doi.org/10.1007/s00362-016-0831-2