
Objective Bayesian testing for the linear combinations of normal means


Abstract

This study considers objective Bayesian testing for linear combinations of the means of several normal populations. We propose solutions to this problem based on a Bayesian model selection procedure in which no subjective input is required. We first construct suitable priors for testing linear combinations of the means, based on measuring the divergence between the competing models (the so-called divergence-based, or DB, priors). Next, we derive the intrinsic priors, for which the Bayes factors and model selection probabilities are well defined. Finally, the behavior of the Bayes factors based on the DB priors and the intrinsic priors is compared with that of the classical test in a simulation study and an example.



Author information

Correspondence to Yongku Kim.

Appendices

Appendix 1: Proof of Theorem 1

Consider model \(M_1\),

$$\begin{aligned} M_1: f_1(\mathbf{x}\vert \theta _2,\ldots ,\theta _{k+1})= & {} N\left( \mathbf{x}_1\vert {{\gamma _1\theta _{10}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \nonumber \\&\qquad \times \prod _{i=2}^k N\left( \mathbf{x}_i \vert {{\gamma _i\theta _{10}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\theta _i,\theta _{k+1}\right) \end{aligned}$$
(22)

and model \(M_2\)

$$\begin{aligned} M_2: f_2(\mathbf{x}\vert \theta _{1},\ldots ,\theta _{k+1})= & {} N\left( \mathbf{x}_1\vert {{\gamma _1\theta _1/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \nonumber \\&\qquad \times \prod _{i=2}^k N\left( \mathbf{x}_i \vert {{\gamma _i\theta _1/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\theta _i,\theta _{k+1}\right) . \end{aligned}$$
(23)

Let \({\varvec{\theta }}=\theta _1\) and \({\varvec{\nu }}=(\theta _2,\ldots ,\theta _{k+1})\). Then, the Kullback–Leibler directed divergence \(KL[({\varvec{\theta }}_0,{\varvec{\nu }}):({\varvec{\theta }},{\varvec{\nu }})]\) is given by

$$\begin{aligned}&KL[({\varvec{\theta }}_0,{\varvec{\nu }}):({\varvec{\theta }},{\varvec{\nu }})] \\&= \int \log \left( {N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _1+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _1 -\theta _i,\theta _{k+1}\right) \over N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _{10}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _{10}-\theta _i,\theta _{k+1}\right) } \right) \\&\quad \quad \times N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _1+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _1 -\theta _i,\theta _{k+1}\right) d\mathbf{x}\\&={1\over \sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over 2\theta _{k+1}}, \end{aligned}$$

where \(g(\gamma _i)={{\gamma _i/n_i} \over {\sum _{j=1}^k\gamma _j^2/n_j}}\) and \(n=n_1+\cdots +n_k\). Moreover, the Kullback–Leibler directed divergence \(KL[({\varvec{\theta }},{\varvec{\nu }}):({\varvec{\theta }}_0,{\varvec{\nu }})]\) is given by

$$\begin{aligned}&KL[({\varvec{\theta }},{\varvec{\nu }}):({\varvec{\theta }}_0,{\varvec{\nu }})] \\&= \int \log \left( {N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _{10}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _{10} -\theta _i,\theta _{k+1}\right) \over N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _{1}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _1-\theta _i,\theta _{k+1}\right) } \right) \\&\quad \quad \times N\left( \mathbf{x}_1\vert g(\gamma _1)\theta _{10}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert g(\gamma _i)\theta _{10} -\theta _i,\theta _{k+1}\right) d\mathbf{x}\\&={1\over \sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over 2\theta _{k+1}}. \end{aligned}$$

Therefore, the sum divergence measure is

$$\begin{aligned} D^S[({\varvec{\theta }},{\varvec{\theta }}_0)\vert {\varvec{\nu }}] = {1\over \sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over \theta _{k+1}}. \end{aligned}$$

Further, since \(KL[({\varvec{\theta }}_0,{\varvec{\nu }}):({\varvec{\theta }},{\varvec{\nu }})]\) and \(KL[({\varvec{\theta }},{\varvec{\nu }}):({\varvec{\theta }}_0,{\varvec{\nu }})]\) are the same, the minimum divergence measure is the same as the sum divergence measure. We take the effective sample size \(n^*=n\). Then, the unitary symmetrized divergence is

$$\begin{aligned} \bar{D}^S[({\varvec{\theta }},{\varvec{\theta }}_0)\vert {\varvec{\nu }}]= {1\over n\sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over \theta _{k+1}}. \end{aligned}$$
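As a numerical sanity check on this closed form (our own sketch, not part of the proof), the directed Kullback–Leibler divergence can be estimated by Monte Carlo; the sample sizes, coefficients \(\gamma _i\), and parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative settings (not from the paper).
n = np.array([5, 7, 6])           # group sizes n_1,...,n_k
gam = np.array([1.0, -2.0, 0.5])  # coefficients gamma_1,...,gamma_k
k = len(n)
theta1, theta10 = 1.3, 0.0        # theta_1 and the tested value theta_10
nu = np.array([0.4, -0.7])        # nuisance parameters theta_2,...,theta_k
sig2 = 2.0                        # common variance theta_{k+1}

S = np.sum(gam**2 / n)            # sum_j gamma_j^2 / n_j
g = (gam / n) / S                 # g(gamma_i) = (gamma_i/n_i) / S

def means(th):
    """Per-observation means of the k samples for a given value of theta_1."""
    m = np.empty(k)
    m[0] = g[0] * th + np.sum(gam[1:] / gam[0] * nu)
    m[1:] = g[1:] * th - nu
    return m

m_alt, m_null = means(theta1), means(theta10)

# Monte Carlo estimate of E[log f(x|theta_1)/f(x|theta_10)] under theta_1.
B = 200_000
kl = 0.0
for i in range(k):
    x = rng.normal(m_alt[i], np.sqrt(sig2), size=(B, n[i]))
    log_ratio = (-(x - m_alt[i])**2 + (x - m_null[i])**2) / (2 * sig2)
    kl += log_ratio.sum(axis=1).mean()

closed_form = (theta1 - theta10)**2 / (2 * sig2 * S)
print(kl, closed_form)  # the two values agree up to Monte Carlo error
```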

Now,

$$\begin{aligned} c_S(q,{\varvec{\nu }})= & {} \int (1+\bar{D}^S[({\varvec{\theta }},{\varvec{\theta }}_0) \vert {\varvec{\nu }}])^{-q}\pi ^N({\varvec{\theta }}\vert {\varvec{\nu }})d{\varvec{\theta }}\\= & {} \int _{-\infty }^{\infty }\left[ 1+ {1\over n\sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over \theta _{k+1}} \right] ^{-q}d\theta _1\\= & {} {\Gamma (q-{1\over 2})\Gamma ({1\over 2})\over \Gamma (q)}\theta _{k+1}^{1\over 2} \left( n\sum _{j=1}^k\gamma _j^2/n_j\right) ^{1\over 2}<\infty , \end{aligned}$$

if \(q> {1\over 2}\). Thus, the conditional sum-DB prior with \(q_*^S=1\) is given by

$$\begin{aligned}&\pi ^S(\theta _1\vert \theta _2,\ldots ,\theta _{k+1}) = \pi ^{-1}\theta _{k+1}^{-{1\over 2}}\left( n\sum _{j=1}^k\gamma _j^2/n_j\right) ^{-{1\over 2}}\nonumber \\&\quad \left[ 1+ {1\over n\sum _{j=1}^k\gamma _j^2/n_j}{(\theta _1-\theta _{10})^2\over \theta _{k+1}} \right] ^{-1}. \end{aligned}$$

This proves Theorem 1.
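Note that the conditional sum-DB prior above is exactly a Cauchy density with location \(\theta _{10}\) and scale \(\left( n\sum _{j=1}^k\gamma _j^2/n_j\,\theta _{k+1}\right) ^{1/2}\), and is therefore proper. A short numerical check (our own sketch; all values are arbitrary illustrations):

```python
import numpy as np
from scipy import integrate, stats

# Arbitrary illustrative settings (not from the paper).
n = np.array([5, 7, 6])
gam = np.array([1.0, -2.0, 0.5])
theta10, theta_k1 = 0.0, 2.0             # theta_10 and theta_{k+1}
S = np.sum(gam**2 / n)
scale = np.sqrt(n.sum() * S * theta_k1)  # sqrt(n * sum(gamma_j^2/n_j) * theta_{k+1})

def pi_S(t1):
    """Conditional sum-DB prior of Theorem 1 (q_*^S = 1)."""
    return (1 / (np.pi * scale)) / (1 + (t1 - theta10)**2 / scale**2)

total, _ = integrate.quad(pi_S, -np.inf, np.inf)
print(total)  # ~1.0: the prior is proper
print(pi_S(1.7), stats.cauchy.pdf(1.7, loc=theta10, scale=scale))  # identical
```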

Appendix 2: Proof of Theorem 2

The likelihood function under model \(M_1\) is

$$\begin{aligned} L_1(\theta _2,\ldots ,\theta _{k+1}\vert \mathbf {x} ) = \left[ \prod _{i=1}^k \left( {1\over 2\pi \theta _{k+1}}\right) ^{n_i\over 2}\right] \exp \left\{ -\sum _{i=1}^k {1\over 2\theta _{k+1}} (S_i^2+T_{1i}^2) \right\} , \end{aligned}$$
(24)

where \(Y_{i\cdot }=\sum _{j=1}^{n_i} Y_{ij} /n_i\), \(S_i^2=\sum _{j=1}^{n_i}(Y_{ij}-Y_{i\cdot })^2, i=1,\ldots ,k\), \(T_{11}^2=n_1(Y_{1\cdot }- {{\gamma _1\theta _{10}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}} - \sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i)^2\), and \(T_{1i}^2=n_i(Y_{i\cdot }- {{\gamma _i\theta _{10}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\theta _i)^2, i=2,\ldots ,k\). In addition, under model \(M_1\), the reference prior for \((\theta _2,\ldots ,\theta _{k+1})\) is

$$\begin{aligned} \pi _1^N(\theta _2,\ldots ,\theta _{k+1}) \propto \theta _{k+1}^{-1}. \end{aligned}$$
(25)

Then, from the likelihood (24) and reference prior (25), \(m_1(\mathbf {x})\) is given by

$$\begin{aligned} m_1(\mathbf{x})= & {} (2\pi )^{-{n-k+1\over 2}}\left[ \prod _{i=2}^{k} \left( c_{i-1}{\gamma _i^2\over \gamma _1^2}+n_i\right) ^{-{1\over 2}}\right] \Gamma \left( {n-k+1\over 2}\right) \\&\quad \times \left[ {\sum _{i=1}^k S_i^2 +\gamma _1^{-2}c_k(\sum _{i=1}^k \gamma _i\bar{y}_i-\theta _{10})^2\over 2}\right] ^{-{n-k+1\over 2}}, \end{aligned}$$

where \(n=n_1+\cdots +n_k\), \(c_1=n_1\), and \(c_i=[c_{i-1}{\gamma _i^2\over \gamma _1^2}+n_i]^{-1}c_{i-1}n_i, i\ge 2\). For model \(M_2\), the reference prior for \((\theta _1,\ldots ,\theta _{k+1})\) is

$$\begin{aligned} \pi ^N (\theta _1,\ldots ,\theta _{k+1}) \propto \theta _{k+1}^{-1}. \end{aligned}$$
(26)

The likelihood function under model \(M_2\) is

$$\begin{aligned} L_2(\theta _1,\ldots ,\theta _{k+1}\vert \mathbf {x} ) = \left[ \prod _{i=1}^k \left( {1\over 2\pi \theta _{k+1}}\right) ^{n_i\over 2}\right] \exp \left\{ -\sum _{i=1}^k {1\over 2\theta _{k+1}} (S_i^2+T_{2i}^2) \right\} , \end{aligned}$$
(27)

where \(T_{21}^2=n_1(Y_{1\cdot }-{{\gamma _1\theta _{1}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}} - \sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i)^2\) and \(T_{2i}^2=n_i(Y_{i\cdot }- {{\gamma _i\theta _{1}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\theta _i)^2, i=2,\ldots ,k\). Let

$$\begin{aligned} \mu _1= {{(\gamma _1/n_1)}\theta _1 \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\sum _{i=2}^k {\gamma _i \over \gamma _1}\theta _i, ~\mu _i = {{(\gamma _i/n_i)}\theta _1 \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\theta _i, i=2,\ldots ,k, ~\sigma ^2=\theta _{k+1}. \end{aligned}$$

Then, from the likelihood (27) and reference prior (26), \(m_2(\mathbf {x})\) is given by

$$\begin{aligned} m_2(\mathbf{x})=(2\pi )^{-{n-k\over 2}}\left[ \prod _{i=1}^{k}n_i^{-{1\over 2}}\right] \Gamma \left( {n-k\over 2}\right) \left[ {\sum _{i=1}^k S_i^2 \over 2}\right] ^{-{n-k\over 2}}|\gamma _1|. \end{aligned}$$

Therefore, \(B_{21}^N\) is given by

$$\begin{aligned} B_{21} ^{N}={g_2(\mathbf{x})\over g_1(\mathbf{x})}, \end{aligned}$$
(28)

where

$$\begin{aligned} g_2(\mathbf{x})=(2\pi )^{-{n-k\over 2}}\left[ \prod _{i=1}^{k}n_i^{-{1\over 2}}\right] \Gamma \left( {n-k\over 2}\right) \left[ {\sum _{i=1}^k S_i^2 \over 2}\right] ^{-{n-k\over 2}}|\gamma _1| \end{aligned}$$

and

$$\begin{aligned} g_1(\mathbf{x})= & {} (2\pi )^{-{n-k+1\over 2}}\left[ \prod _{i=2}^{k} \left( c_{i-1}{\gamma _i^2\over \gamma _1^2}+n_i\right) ^{-{1\over 2}}\right] \Gamma \left( {n-k+1\over 2}\right) \\&\quad \times \left[ {\sum _{i=1}^k S_i^2 +\gamma _1^{-2}c_k(\sum _{i=1}^k \gamma _i\bar{y}_i-\theta _{10})^2\over 2}\right] ^{-{n-k+1\over 2}}. \end{aligned}$$

Further, the conditional reference prior is \(\pi ^N(\theta _1\vert \theta _2,\ldots ,\theta _{k+1})=1\). Hence, Theorem 2 is proven. \(\square \)
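The Bayes factor (28) is straightforward to evaluate. The following sketch (ours, not the authors'; the data and coefficients are arbitrary illustrations) computes \(B_{21}^N=g_2(\mathbf{x})/g_1(\mathbf{x})\) on the log scale for numerical stability:

```python
import numpy as np
from scipy.special import gammaln

def bayes_factor_21(y, gam, theta10):
    """B_21^N = g_2(x)/g_1(x) from (28).
    y: list of k 1-D samples; gam: coefficients gamma_1,...,gamma_k."""
    k = len(y)
    n = np.array([len(yi) for yi in y])
    N = n.sum()
    ybar = np.array([yi.mean() for yi in y])
    S2 = sum(((yi - yi.mean())**2).sum() for yi in y)  # sum_i S_i^2

    # Recursion c_1 = n_1, c_i = c_{i-1} n_i / (c_{i-1} gamma_i^2/gamma_1^2 + n_i).
    c = float(n[0])
    log_det = 0.0  # accumulates log prod_{i>=2} (c_{i-1} gamma_i^2/gamma_1^2 + n_i)
    for i in range(1, k):
        a = c * gam[i]**2 / gam[0]**2 + n[i]
        log_det += np.log(a)
        c = c * n[i] / a

    R = S2 + c / gam[0]**2 * (np.sum(gam * ybar) - theta10)**2

    log_g2 = (-(N - k)/2 * np.log(2*np.pi) - 0.5*np.sum(np.log(n))
              + gammaln((N - k)/2) - (N - k)/2 * np.log(S2/2) + np.log(abs(gam[0])))
    log_g1 = (-(N - k + 1)/2 * np.log(2*np.pi) - 0.5*log_det
              + gammaln((N - k + 1)/2) - (N - k + 1)/2 * np.log(R/2))
    return np.exp(log_g2 - log_g1)

# Illustrative use: test H_0: 2*mu_1 - mu_2 = 0 on simulated data.
rng = np.random.default_rng(1)
y = [rng.normal(1.0, 1.5, 8), rng.normal(2.0, 1.5, 10)]
print(bayes_factor_21(y, gam=np.array([2.0, -1.0]), theta10=0.0))
```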

Appendix 3: Proof of Lemma 1

Consider the models \(M_1\),

$$\begin{aligned} M_1: f_1(\mathbf{x}\vert {\varvec{\theta }}_1)= & {} N\left( \mathbf{x}_1\vert {{\gamma _1\theta _{10}/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i,\tau _{k+1}\right) \\&\quad \times \prod _{i=2}^k N\left( \mathbf{x}_i \vert {{\gamma _i\theta _{10}/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\tau _i,\tau _{k+1}\right) , \pi ^N(\tau _2,\ldots ,\tau _{k+1})=c_1\tau _{k+1}^{-1} \end{aligned}$$

and \(M_2\),

$$\begin{aligned} M_2: f_2(\mathbf{x}\vert {\varvec{\theta }}_2)= & {} N\left( \mathbf{x}_1\vert {{\gamma _1\theta _1/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \\&\quad \quad \times \prod _{i=2}^k N\left( \mathbf{x}_i \vert {{\gamma _i\theta _1/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} {-}\theta _i,\theta _{k{+}1}\right) , \pi ^N(\theta _{1},\ldots ,\theta _{k+1})=c_2\theta _{k+1}^{-1}, \end{aligned}$$

where \(c_1\) and \(c_2\) are arbitrary positive constants. For the minimal training sample \(z_i(l)\), we have

$$\begin{aligned} B_{12}^N (z_i(l))= & {} {1\over m_2^N (z_i(l))} \left[ N\left( x_{11}\vert g(\gamma _1)\theta _{10}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i,\tau _{k+1}\right) \right. \\&\quad \quad \times \left. \prod _{j=1}^{2} N\left( x_{ij}\vert g(\gamma _i)\theta _{10} -\tau _i,\tau _{k+1}\right) \times \prod _{l\ne i} N\left( x_{l1}\vert g(\gamma _l)\theta _{10} -\tau _l,\tau _{k+1}\right) \right] , \end{aligned}$$

where \(g(\gamma _i)={{\gamma _i/n_i} \over {\sum _{j=1}^k\gamma _j^2/n_j}}\) and \( m_2^N (z_i(l))={c_2 |\gamma _1|\over |x_{i1}-x_{i2}|}\). Let \(u=x_{i1}-x_{i2}\) and \(v=x_{i1}+x_{i2}\). Then, by direct integration over \(x_{11}\), \(x_{l1}\), and \((u,v)\), we obtain

$$\begin{aligned}&E_{z_i(l)\vert {\varvec{\theta }}_2}^{M_2} [ B_{12}^N (z_i(l))]\\&= \int {1\over 2\pi }\tau _{k+1}^{-{1\over 2}}\theta _{k+1}^{-{1\over 2}} \exp \left\{ -{(x_{11}-g(\gamma _1)\theta _{10}-\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i)^2\over 2\tau _{k+1}} -{(x_{11}-g(\gamma _1)\theta _{1} -\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i)^2\over 2\theta _{k+1}} \right\} dx_{11}\\&\quad \times \int \int {1\over c_2|\gamma _1|}{1\over (2\pi )^2 }\tau _{k+1}^{-1}\theta _{k+1}^{-1} |x_{i1}-x_{i2}| \exp \left\{ -{(x_{i1}-g(\gamma _i)\theta _{10} +\tau _i)^2+(x_{i2}-g(\gamma _i)\theta _{10} +\tau _i)^2\over 2\tau _{k+1}}\right. \\&\left. \quad -{(x_{i1}-g(\gamma _i)\theta _{1} +\theta _i)^2+(x_{i2}-g(\gamma _i)\theta _{1}+\theta _i)^2\over 2\theta _{k+1}}\right\} dx_{i1}dx_{i2}\\&\quad \times \prod _{l\ne i}\int {1\over 2\pi }\tau _{k+1}^{-{1\over 2}}\theta _{k+1}^{-{1\over 2}} \exp \left\{ -{(x_{l1}-g(\gamma _l)\theta _{10} +\tau _l)^2\over 2\tau _{k+1}} -{(x_{l1}-g(\gamma _l)\theta _{1} +\theta _l)^2\over 2\theta _{k+1}} \right\} dx_{l1}\\&={1\over c_2|\gamma _1|}{1\over \pi }{\tau _{k+1}^{1\over 2}\theta _{k+1}^{1\over 2}\over (\theta _{k+1}+\tau _{k+1})}\\&\quad \times {1\over \sqrt{2\pi }(\tau _{k+1}+\theta _{k+1})^{1\over 2}} \exp \left\{ -{(g(\gamma _1)\theta _{1}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i -g(\gamma _1)\theta _{10}-\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i)^2\over 2(\tau _{k+1}+\theta _{k+1})}\right\} \\&\quad \times {1\over \sqrt{\pi }(\tau _{k+1}+\theta _{k+1})^{1\over 2}} \exp \left\{ -{(\theta _i-g(\gamma _i)\theta _{1} -\tau _i+g(\gamma _i)\theta _{10})^2\over \tau _{k+1}+\theta _{k+1}}\right\} \\&\quad \times \prod _{l\ne i}{1\over \sqrt{2\pi }(\tau _{k+1}+\theta _{k+1})^{1\over 2}} \exp \left\{ -{(\theta _l-g(\gamma _l)\theta _{1} -\tau _l+g(\gamma _l)\theta _{10})^2\over 2(\tau _{k+1}+\theta _{k+1})}\right\} . \end{aligned}$$

Therefore, the intrinsic prior is

$$\begin{aligned} \pi _{2i}^I({\varvec{\theta }}_2\vert {\varvec{\theta }}_1)= & {} \pi ^N({\varvec{\theta }}_2)\, E_{z_i(l)\vert {\varvec{\theta }}_2}^{M_2} [ B_{12}^N (z_i(l))]\\= & {} {1\over \pi |\gamma _1|}{\tau _{k+1}^{1\over 2}\over \theta _{k+1}^{1\over 2}(\theta _{k+1}+\tau _{k+1})}\, N\left( g(\gamma _1)\theta _1+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i \,\vert \, g(\gamma _1)\theta _{10}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i,\tau _{k+1}+\theta _{k+1}\right) \\&\quad \times N\left( \theta _i\vert \tau _i-g(\gamma _i)\theta _{10}+g(\gamma _i)\theta _1,{\theta _{k+1}+\tau _{k+1}\over 2}\right) \\&\quad \times \prod _{l\ne i}N\left( \theta _l\vert \tau _l-g(\gamma _l)\theta _{10}+g(\gamma _l)\theta _1,\theta _{k+1}+\tau _{k+1}\right) , \end{aligned}$$

in which the arbitrary constant \(c_2\) cancels. Finally, we can similarly derive the intrinsic priors for the other minimal training samples. This proves Lemma 1. \(\square \)
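Each single-observation integral in the computation above is an instance of the Gaussian convolution identity \(\int N(x\vert a,\tau )\,N(x\vert b,\theta )\,dx=N(a\vert b,\tau +\theta )\), which is what produces the \(\tau _{k+1}+\theta _{k+1}\) variances in the result. A quick numerical confirmation (our own sketch; the values are arbitrary, and scipy parameterizes normals by standard deviation rather than variance):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

a, b, tau, theta = 0.7, -1.2, 2.0, 0.5  # arbitrary illustrative values

# Left side: integrate the product of the two normal densities over x.
lhs, _ = integrate.quad(
    lambda x: norm.pdf(x, a, np.sqrt(tau)) * norm.pdf(x, b, np.sqrt(theta)),
    -np.inf, np.inf)
# Right side: a single normal density with the variances added.
rhs = norm.pdf(a, b, np.sqrt(tau + theta))
print(lhs, rhs)  # the two values agree
```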

Appendix 4: Proof of Theorem 4

We compute the Bayes factor to compare model \(M_1\) and model \(M_2\) with the intrinsic prior \(\pi _{2i}^I(\theta _2)\). We easily obtain the marginal density \(m_1(\mathbf{x})\) from Theorem 2. Hence, we only compute the marginal density \(m_2(\mathbf{x})\). Now,

$$\begin{aligned}&f_2(\mathbf {x} \vert {\varvec{\theta }}_2) \pi _{2i}^I({\varvec{\theta }}_2\vert {\varvec{\theta }}_1)\pi _1^N({\varvec{\theta }}_1)= {c_1\over \pi |\gamma _1|}{\tau _{k+1}^{-{1\over 2}}\over \theta _{k+1}^{1\over 2}(\theta _{k+1}+\tau _{k+1})}\nonumber \\&\quad \times \, N\left( \mathbf{x}_1\vert {{\gamma _1\theta _1/n_1} \over {\sum _{j=1}^k \gamma _j^2/n_j}}+\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i,\theta _{k+1}\right) \prod _{i=2}^k N\left( \mathbf{x}_i \vert {{\gamma _i\theta _1/n_i} \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\theta _i,\theta _{k+1}\right) \nonumber \\&\quad \times \, N\left( {{\gamma _1\theta _1/n_1} \over {\sum _{j=1}^k\gamma _j^2/n_j}}\vert {{\gamma _1\theta _{10}/n_1}\over {\sum _{j=1}^k\gamma _j^2/n_j}} +\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\tau _i-\sum _{i=2}^{k}{\gamma _i\over \gamma _1}\theta _i, \theta _{k+1}+\tau _{k+1}\right) \nonumber \\&\quad \times \, N\left( \theta _{i}\vert \tau _i-{{\gamma _i\theta _{10}/n_i}\over {\sum _{j=1}^k\gamma _j^2/n_j}} +{{\gamma _i\theta _1/n_i}\over {\sum _{j=1}^k\gamma _j^2/n_j}}, {\theta _{k+1}+\tau _{k+1}\over 2}\right) \nonumber \\&\quad \times \, \prod _{l\ne i\ge 2}N\left( \theta _{l}\vert \tau _l-{{\gamma _l\theta _{10}/n_l}\over {\sum _{j=1}^k\gamma _j^2/n_j}} +{{\gamma _l\theta _1/n_l}\over {\sum _{j=1}^k\gamma _j^2/n_j}}, {\theta _{k+1}+\tau _{k+1}}\right) . \end{aligned}$$
(29)

Let

$$\begin{aligned} \mu _1= {{(\gamma _1/n_1)}\theta _1 \over {\sum _{j=1}^k \gamma _j^2/n_j}} +\sum _{i=2}^k {\gamma _i \over \gamma _1}\theta _i, ~\mu _i = {{(\gamma _i/n_i)}\theta _1 \over {\sum _{j=1}^k \gamma _j^2/n_j}} -\theta _i, i=2,\ldots ,k. \end{aligned}$$

By integrating with respect to \(\mu _i, i=1,\ldots ,k\), and \(\tau _i, i=2,\ldots ,k\) in (29), we get

$$\begin{aligned}&\pi ^{-1}(2\pi )^{-{n-k+1\over 2}}2^{-{k-2\over 2}} \left[ \prod _{i=2}^{k}\left( p_{i-1}{\gamma _i^2\over \gamma _1^2}+d_i\right) ^{-{1\over 2}}\right] \theta _{k+1}^{-{n-k+1\over 2}}{\tau _{k+1}^{-{1\over 2}}\over \theta _{k+1}^{1\over 2}(\theta _{k+1}+\tau _{k+1})}\nonumber \\&\quad \times \left[ n_i\left( 1+{\tau _{k+1}\over \theta _{k+1}}\right) +2\right] ^{-{1\over 2}} \prod _{j\ne i}\left[ n_j\left( 1+{\tau _{k+1}\over \theta _{k+1}}\right) +1\right] ^{-{1\over 2}}\nonumber \\&\quad \times \exp \left\{ -{1\over \theta _{k+1}}\left[ {\sum _{i=1}^k s_i^2\over 2} +\gamma _1^{-2}p_k\left( \sum _{i=1}^k \gamma _i\bar{y}_i-\theta _{10}\right) ^2\right] \right\} , \end{aligned}$$
(30)

where \(p_1={n_1\over 2n_1(1+{\tau _{k+1}\over \theta _{k+1}})+2}\), \(p_i=[p_{i-1}{\gamma _i^2\over \gamma _1^2}+d_i]^{-1}p_{i-1}d_i\), \(d_i={n_i\over n_i(1+{\tau _{k+1}\over \theta _{k+1}})+2}\), and \(d_j={n_j\over 2n_j(1+{\tau _{k+1}\over \theta _{k+1}})+2}\) for \(j\ne i\). Let \(\omega =\tau _{k+1}/\theta _{k+1}\). Then, by integrating with respect to \(\theta _{k+1}\) in (30), we get

$$\begin{aligned} m_2(\mathbf x)= & {} \int _{0}^{\infty } \pi ^{-1}(2\pi )^{-{n-k+1\over 2}}2^{-{k-2\over 2}}\Gamma \left( {n-k+1\over 2}\right) \left[ \prod _{i=2}^{k}\left( p_{i-1}{\gamma _i^2\over \gamma _1^2}+d_i\right) ^{-{1\over 2}}\right] \nonumber \\&\quad \times \left[ {\sum _{i=1}^k S_i^2\over 2} +\gamma _1^{-2}p_k(\sum _{i=1}^k \gamma _i\bar{y}_i-\theta _{10})^2\right] ^{-{n-k+1\over 2}}\nonumber \\&\quad \times (1+\omega )^{-1}\omega ^{-{1\over 2}}[n_i(1+\omega )+2]^{-{1\over 2}} \prod _{j\ne i}[n_j(1+\omega )+1]^{-{1\over 2}}d\omega . \end{aligned}$$
(31)

Finally, we can similarly compute the marginal densities by using the other intrinsic priors. Hence, Theorem 4 is proven. \(\square \)
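Since (31) is a one-dimensional integral in \(\omega \), it can be evaluated by ordinary quadrature. The sketch below (ours, not the authors'; it follows the printed \(p_i\), \(d_i\) recursion with the duplicated training-sample group indexed by \(i\ge 2\), and the data are arbitrary illustrations) computes \(m_2(\mathbf{x})\); the intrinsic Bayes factor is then \(B_{21}^I=m_2(\mathbf{x})/m_1(\mathbf{x})\) with \(m_1(\mathbf{x})\) from Appendix 2.

```python
import numpy as np
from scipy import integrate
from scipy.special import gammaln

def m2_intrinsic(y, gam, theta10, i):
    """Marginal density m_2(x) from (31); i is the 0-based index of the group
    whose training sample is duplicated (take i >= 1 to match the printed p_1)."""
    k = len(y)
    n = np.array([len(yi) for yi in y])
    N = n.sum()
    ybar = np.array([yi.mean() for yi in y])
    S2 = sum(((yi - yi.mean())**2).sum() for yi in y)

    def integrand(w):
        # d_j: group i gets n_i(1+w)+2 in the denominator, the others 2n_j(1+w)+2.
        d = np.where(np.arange(k) == i,
                     n / (n*(1.0 + w) + 2.0),
                     n / (2.0*n*(1.0 + w) + 2.0))
        p = d[0]        # p_1 = n_1 / (2 n_1 (1+w) + 2) when i >= 1
        log_det = 0.0   # accumulates log prod (p_{j-1} gamma_j^2/gamma_1^2 + d_j)
        for j in range(1, k):
            a = p * gam[j]**2 / gam[0]**2 + d[j]
            log_det += np.log(a)
            p = p * d[j] / a
        R = S2/2 + p / gam[0]**2 * (np.sum(gam*ybar) - theta10)**2
        log_f = (-np.log(np.pi) - (N - k + 1)/2*np.log(2*np.pi)
                 - (k - 2)/2*np.log(2.0) + gammaln((N - k + 1)/2)
                 - 0.5*log_det - (N - k + 1)/2*np.log(R)
                 - np.log(1.0 + w) - 0.5*np.log(w)
                 - 0.5*np.log(n[i]*(1.0 + w) + 2.0)
                 - 0.5*np.sum(np.log(np.delete(n, i)*(1.0 + w) + 1.0)))
        return np.exp(log_f)

    val, _ = integrate.quad(integrand, 0.0, np.inf, limit=200)
    return val

# Illustrative use on simulated data, duplicating the second group (i = 1).
rng = np.random.default_rng(2)
y = [rng.normal(1.0, 1.5, 8), rng.normal(2.0, 1.5, 10), rng.normal(0.5, 1.5, 9)]
print(m2_intrinsic(y, gam=np.array([1.0, 1.0, -2.0]), theta10=0.0, i=1))
```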

About this article

Cite this article

Lee, W.D., Kang, S.G. & Kim, Y. Objective Bayesian testing for the linear combinations of normal means. Stat Papers 60, 147–172 (2019). https://doi.org/10.1007/s00362-016-0831-2

