
A Dual Consistent Finite Difference Method with Narrow Stencil Second Derivative Operators

Published in: Journal of Scientific Computing

Abstract

We study the numerical solutions of time-dependent systems of partial differential equations, focusing on the implementation of boundary conditions. The numerical method considered is a finite difference scheme constructed by high order summation by parts operators, combined with a boundary procedure using penalties (SBP–SAT). Recently it was shown that SBP–SAT finite difference methods can yield superconvergent functional output if the boundary conditions are imposed such that the discretization is dual consistent. We generalize these results so that they include a broader range of boundary conditions and penalty parameters. The results are also generalized to hold for narrow-stencil second derivative operators. The derivations are supported by numerical experiments.


References

  1. Berg, J., Nordström, J.: Superconvergent functional output for time-dependent problems using finite differences on summation-by-parts form. J. Comput. Phys. 231(20), 6846–6860 (2012)

  2. Berg, J., Nordström, J.: On the impact of boundary conditions on dual consistent finite difference discretizations. J. Comput. Phys. 236, 41–55 (2013)

  3. Berg, J., Nordström, J.: Duality based boundary conditions and dual consistent finite difference discretizations of the Navier–Stokes and Euler equations. J. Comput. Phys. 259, 135–153 (2014)

  4. Carpenter, M.H., Nordström, J., Gottlieb, D.: A stable and conservative interface treatment of arbitrary spatial accuracy. J. Comput. Phys. 148(2), 341–365 (1999)

  5. Eriksson, S., Nordström, J.: Analysis of the order of accuracy for node-centered finite volume schemes. Appl. Numer. Math. 59(10), 2659–2676 (2009)

  6. Fernández, D.C.D.R., Hicken, J.E., Zingg, D.W.: Review of summation-by-parts operators with simultaneous approximation terms for the numerical solution of partial differential equations. Comput. Fluids 95, 171–196 (2014)

  7. Gustafsson, B., Kreiss, H.O., Oliger, J.: Time-Dependent Problems and Difference Methods. Wiley, New York (2013)

  8. Hicken, J.E.: Output error estimation for summation-by-parts finite-difference schemes. J. Comput. Phys. 231(9), 3828–3848 (2012)

  9. Hicken, J.E., Zingg, D.W.: Superconvergent functional estimates from summation-by-parts finite-difference discretizations. SIAM J. Sci. Comput. 33(2), 893–922 (2011)

  10. Hicken, J.E., Zingg, D.W.: Summation-by-parts operators and high-order quadrature. J. Comput. Appl. Math. 237(1), 111–125 (2013)

  11. Kreiss, H.O., Lorenz, J.: Initial-Boundary Value Problems and the Navier–Stokes Equations. Academic Press, New York (1989)

  12. Mattsson, K.: Summation by parts operators for finite difference approximations of second-derivatives with variable coefficients. J. Sci. Comput. 51(3), 650–682 (2012)

  13. Mattsson, K., Nordström, J.: Summation by parts operators for finite difference approximations of second derivatives. J. Comput. Phys. 199(2), 503–540 (2004)

  14. Nordström, J., Eriksson, S., Eliasson, P.: Weak and strong wall boundary procedures and convergence to steady-state of the Navier–Stokes equations. J. Comput. Phys. 231(14), 4867–4884 (2012)

  15. Nordström, J., Svärd, M.: Well-posed boundary conditions for the Navier–Stokes equations. SIAM J. Numer. Anal. 43(3), 1231–1255 (2005)

  16. Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics. Springer, Berlin (2000)

  17. Strand, B.: Summation by parts for finite difference approximation for d/dx. J. Comput. Phys. 110(1), 47–67 (1994)

  18. Svärd, M., Nordström, J.: On the order of accuracy for difference approximations of initial-boundary value problems. J. Comput. Phys. 218(1), 333–352 (2006)

  19. Svärd, M., Nordström, J.: Review of summation-by-parts schemes for initial-boundary-value problems. J. Comput. Phys. 268, 17–38 (2014)


Acknowledgements

The author would like to sincerely thank the anonymous referees for their valuable comments and suggestions.

Author information

Correspondence to Sofia Eriksson.

Appendices

Appendix A: Reformulation of the First Order Form Discretization

We derive the scheme (33) with penalty parameters (35), using the hyperbolic results.

Step 1: Consider the problem (31), which is a first order system. We represent the solution \(\bar{\mathcal {U}}\) by a discrete solution vector \(\bar{U}=[\bar{U}_0^T,\bar{U}_1^T,\ldots ,\bar{U}_N^T]^T\), where \(\bar{U}_i(t)\approx \bar{\mathcal {U}}(x_i,t)\) and discretize (31) exactly as was done in (11) for the hyperbolic case, that is as

$$\begin{aligned} \begin{aligned} (I_N\otimes \bar{\mathcal {I}})\bar{U}_t+(I_N\otimes \bar{\mathcal {R}})\bar{U}+(D_1\otimes \bar{\mathcal {A}})\bar{U}=&\,\bar{F}+(H^{-1}e_{0}\otimes \bar{{\varSigma }}_{0}) (\bar{\mathcal {B}}_L{\bar{U}}_{0}-g_{{L}})\\&+(H^{-1}e_N \otimes \bar{{\varSigma }}_N)(\bar{\mathcal {B}}_R{\bar{U}}_N -g_{{R}}). \end{aligned} \end{aligned}$$
(66)

As proposed in Theorem 1, we let \(\bar{{\varSigma }}_{0}=-\bar{Z}_+\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1}\) and \(\bar{{\varSigma }}_N=\bar{Z}_-\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1}\).

Step 2: We discretize (30) directly by approximating \(\mathcal {U}\) by \(U\) and \(\mathcal {U}_x\) by \(\widehat{W}\). We obtain

$$\begin{aligned} \begin{aligned} U_t +(D_1\otimes \mathcal {A})U-(D_1\otimes \mathcal {E})\widehat{W}=&\,F+\left( H^{-1}e_{0}\otimes \sigma _0\right) \left( \mathcal {H}_L{U}_{0}+\mathcal {G}_L\widehat{W}_{0}-g_{{L}}\right) \\&+\left( H^{-1}e_N \otimes \sigma _N\right) \left( \mathcal {H}_R{U}_N+\mathcal {G}_R\widehat{W}_N -g_{{R}}\right) , \end{aligned} \end{aligned}$$
(67a)
$$\begin{aligned} \begin{aligned} (I_N\otimes \mathcal {E})\widehat{W} -(D_1\otimes \mathcal {E})U=&\,\left( H^{-1}e_{0}\otimes \tau _0\right) \left( \mathcal {H}_L{U}_{0}+\mathcal {G}_L\widehat{W}_{0}-g_{{L}}\right) \\&+\left( H^{-1}e_N \otimes \tau _N\right) \left( \mathcal {H}_R{U}_N+\mathcal {G}_R\widehat{W}_N -g_{{R}}\right) . \end{aligned} \end{aligned}$$
(67b)

If \( \bar{{\varSigma }}_{0}=[\sigma _0^T,\tau _0^T]^T\) and \(\bar{{\varSigma }}_N=[\sigma _N^T,\tau _N^T]^T\), then (67) is a permutation of (66).

Step 3: The scheme in (67) is a system of differential algebraic equations, so we would like to eliminate the variable \(\widehat{W}\) and obtain a system of ordinary differential equations instead. Multiplying (67b) by \(\bar{D}=(D_1\otimes I_n)\) and adding the result to (67a) yields

$$\begin{aligned} \begin{aligned} U_t +(D_1\otimes \mathcal {A})U-(D_1^2\otimes \mathcal {E})U=&\,F+\left( H^{-1}e_{0}\otimes \sigma _0+D_1H^{-1}e_{0}\otimes \tau _0\right) \widehat{\chi }_{0}\\&+\left( H^{-1}e_N \otimes \sigma _N+D_1H^{-1}e_N \otimes \tau _N\right) \widehat{\chi }_N, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \widehat{\chi }_{0}=\mathcal {H}_L{U}_{0}+\mathcal {G}_L\widehat{W}_{0}-g_{{L}},\qquad \quad \widehat{\chi }_N=\mathcal {H}_R{U}_N+\mathcal {G}_R\widehat{W}_N -g_{{R}}. \end{aligned}$$
(68)

Next, using the properties in (12), together with the fact that \(H\) is diagonal, we compute

$$\begin{aligned} D_1H^{-1}e_0 =H^{-1}\left( -\widehat{q}I_N-D_1^T\right) e_0,&D_1H^{-1}e_N=H^{-1}\left( \widehat{q}I_N-D_1^T\right) e_N, \end{aligned}$$

where \(\widehat{q}\) is the scalar \(\widehat{q}=e_0^TH^{-1}e_0=e_N^TH^{-1}e_N\) given in (37). This yields

$$\begin{aligned} \begin{aligned} U_t +(D_1\otimes \mathcal {A})U-\left( D_1 ^2\otimes \mathcal {E}\right) U=&\,F+\bar{H}^{-1}\left( e_{0}\otimes (\sigma _0-\widehat{q}\tau _0)-D_1^Te_{0}\otimes \tau _0\right) \widehat{\chi }_{0}\\&+\bar{H}^{-1}\left( e_N \otimes (\sigma _N+\widehat{q}\tau _N) -D_1^Te_N \otimes \tau _N\right) \widehat{\chi }_N, \end{aligned} \end{aligned}$$
(69)

where \(\bar{H}=(H\otimes I_n)\). However, the boundary condition deviations \(\widehat{\chi }_0\) and \(\widehat{\chi }_N\) still contain \(\widehat{W}\), so we multiply (67b) by \((e_0^T\otimes I_n)\) and \((e_N^T\otimes I_n)\), respectively, to get

$$\begin{aligned} \mathcal {E}\widehat{W}_0 - \mathcal {E}(\bar{D}U)_0&=\widehat{q}\tau _0\widehat{\chi }_0,&\mathcal {E}\widehat{W}_N- \mathcal {E}(\bar{D}U)_N&=\widehat{q}\tau _N\widehat{\chi }_N. \end{aligned}$$
(70)

Next, we need boundary condition deviations without \(\widehat{W}\), and define

$$\begin{aligned} \widehat{\xi }_{0}&=\mathcal {H}_L{U}_0 +\mathcal {G}_L(\bar{D}U)_0 -g_{{L}},&\widehat{\xi }_N&= \mathcal {H}_R{U}_N+\mathcal {G}_R(\bar{D}U)_N -g_{{R}}. \end{aligned}$$

Recall that \(\mathcal {G}_{L,R}=\mathcal {K}_{L,R}\mathcal {E}\). Using (70), we can now relate \(\widehat{\xi }_{0,N}\) above to \(\widehat{\chi }_{0,N}\) in (68) as

$$\begin{aligned} \widehat{\xi }_{0}&=(I_{m_+}-\widehat{q}\mathcal {K}_L\tau _0)\widehat{\chi }_0 ,&\widehat{\xi }_N&= (I_{m_-}-\widehat{q}\mathcal {K}_R\tau _N)\widehat{\chi }_N, \end{aligned}$$
(71)

where \(I_{m_+}\) and \(I_{m_-}\) are identity matrices of sizes corresponding to the number of positive (\({m_+}\)) and negative (\(m_-\)) eigenvalues of \(\bar{\mathcal {A}}\), respectively. Inserting \(\widehat{\chi }_{0,N}\) from (71) into (69) allows us to finally write the scheme without any \(\widehat{W}\) terms and we obtain (33), with

$$\begin{aligned} \begin{aligned} \widehat{\mu }_{0}&=(\sigma _0-\widehat{q}\tau _0) (I_{m_+}-\widehat{q}\mathcal {K}_L\tau _0)^{-1},\qquad \widehat{\nu }_{0}=-\tau _0(I_{m_+}-\widehat{q}\mathcal {K}_L\tau _0)^{-1}, \\ \widehat{\mu }_N&=(\sigma _N+\widehat{q}\tau _N) (I_{m_-}-\widehat{q}\mathcal {K}_R\tau _N)^{-1},\qquad \widehat{\nu }_N=-\tau _N(I_{m_-}-\widehat{q}\mathcal {K}_R\tau _N)^{-1}. \end{aligned} \end{aligned}$$
(72)

From Steps 1 and 2 we know that

$$\begin{aligned} \left[ \begin{array}{c}\sigma _0\\ \tau _0\end{array}\right] =-\left[ \begin{array}{c}\bar{Z}_1\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1}\\ \bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1}\end{array}\right] ,&\left[ \begin{array}{c}\sigma _N\\ \tau _N\end{array}\right] =\left[ \begin{array}{c}\bar{Z}_3\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1}\\ \bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1}\end{array}\right] , \end{aligned}$$

where \(\bar{Z}_{1,2,3,4}\) are given in (36). Inserting the above relation into (72), we obtain the penalty parameters presented in (35).
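The key SBP relations used in Step 3, \(D_1H^{-1}e_0=H^{-1}(-\widehat{q}I_N-D_1^T)e_0\) and its right-boundary counterpart, are easy to verify numerically. Below is a minimal Python/NumPy sketch; the concrete second-order diagonal-norm SBP operator is an assumption made for the sketch, since the paper's properties (12) only require \(H\) diagonal and positive and \(Q+Q^T=E_N-E_0\) with \(Q=HD_1\).

```python
import numpy as np

# Standard second-order diagonal-norm SBP first-derivative operator on
# N + 1 points (an assumed concrete instance; the paper only requires
# H diagonal and positive, Q = H D1, and Q + Q^T = E_N - E_0).
N = 20
h = 1.0 / N
H = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5                 # gives Q + Q^T = E_N - E_0
D1 = np.linalg.solve(H, Q)
Hinv = np.linalg.inv(H)

e0 = np.zeros(N + 1); e0[0] = 1.0
eN = np.zeros(N + 1); eN[N] = 1.0
q_hat = e0 @ Hinv @ e0                       # q_hat = 1/H_00 = 1/H_NN

# The two identities used to move D1 past H^{-1}:
lhs0 = D1 @ (Hinv @ e0)
rhs0 = Hinv @ (-q_hat * e0 - D1.T @ e0)
lhsN = D1 @ (Hinv @ eN)
rhsN = Hinv @ (q_hat * eN - D1.T @ eN)
print(np.allclose(lhs0, rhs0), np.allclose(lhsN, rhsN))  # True True
```

The check goes through because \(Q=(E_N-E_0)-Q^T\), so \(QH^{-1}e_0=-\widehat{q}e_0-D_1^Te_0\), exactly as in the derivation above.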

Appendix B: Validity of the Derivations and Penalty Parameters in Appendix A

The following lemma will prove useful:

Lemma 1

(Determinant theorem) For any matrices A and B of size \(m \times n\) and \(n \times m\), respectively, \(\det (I_{ m }+AB)=\det (I_{ n }+BA)\) holds. This lemma is a generalization of the “matrix determinant lemma” and sometimes referred to as “Sylvester’s determinant theorem”.

Proof

Consider the product of block matrices below:

$$\begin{aligned} {\begin{pmatrix} I_m&{}0\\ B&{}I_n \end{pmatrix}} {\begin{pmatrix} I_m+AB&{}A\\ 0&{}I_n \end{pmatrix}} {\begin{pmatrix} I_m&{}0\\ -B&{}I_n \end{pmatrix}}= {\begin{pmatrix} I_m&{}A\\ 0&{}I_n+BA \end{pmatrix}}. \end{aligned}$$

Using the multiplicativity of determinants, the determinant rule for block triangular matrices and the fact that \(\det (I_m)=\det (I_n)=1\), we see that the determinant of the left hand side is \( \det (I_m+AB)\) and that the determinant of the right hand side is \( \det (I_n + BA)\). \(\square \)
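The identity in Lemma 1 can be sanity-checked numerically for rectangular matrices; a small NumPy sketch:

```python
import numpy as np

# det(I_m + AB) = det(I_n + BA) for A of size m x n and B of size n x m
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

d_small = np.linalg.det(np.eye(m) + A @ B)   # an m x m determinant
d_large = np.linalg.det(np.eye(n) + B @ A)   # an n x n determinant
print(np.isclose(d_small, d_large))          # True
```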

When (67) in “Appendix A” is rewritten such that all dependence on \(\widehat{W}\) is removed, we rely on the assumption that we can extract \((I_N\otimes \mathcal {E})\widehat{W}\) from (67b) and insert it into (67a). Intuitively we expect this to be possible, since (67) is in fact (although indirectly) a consistent approximation of (30). To investigate this more carefully, we multiply (67b) by \((H\otimes I_n)\) and move all \(\widehat{W}\)-dependent parts to the left hand side (recall that \(\mathcal {G}_{L,R}=\mathcal {K}_{L,R}\mathcal {E}\)). This yields

$$\begin{aligned} \left( (H\otimes I_n)-(E_{0}\otimes \tau _0\mathcal {K}_L)-(E_N \otimes \tau _N\mathcal {K}_R) \right) (I_N\otimes \mathcal {E})\widehat{W}=&\,(Q\otimes \mathcal {E})U\\&+(e_{0}\otimes \tau _0)(\mathcal {H}_L{U}_{0}-g_{{L}})\\&+(e_N \otimes \tau _N)(\mathcal {H}_R{U}_N -g_{{R}}). \end{aligned}$$

We see that we can solve for \((I_N\otimes \mathcal {E})\widehat{W}\) if the matrices \(H_{0,0}I_n-\tau _0\mathcal {K}_L\) and \(H_{N,N}I_n-\tau _N\mathcal {K}_R\) are non-singular. From “Appendix A” we know that

$$\begin{aligned} \tau _0=-\bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1},&\tau _N=\bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1},\end{aligned}$$

that is, we need \(I_n+\widehat{q}\bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1} \mathcal {K}_L\) and \(I_n-\widehat{q}\bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1} \mathcal {K}_R\) to be non-singular (note that \(\widehat{q}=1/H_{0,0}=1/H_{N,N}\) for diagonal matrices \(H\)). According to Lemma 1 above, we have

$$\begin{aligned} \det \left( I_n+\widehat{q}\bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{P}_{L}^{-1} \mathcal {K}_L\right)&=\det \left( I_{m_+}+\widehat{q}\bar{P}_{L}^{-1} \mathcal {K}_L\bar{Z}_2\bar{{\varDelta }}^{}_+\right) =\det \left( \bar{P}_{L}^{-1}\right) \det \left( \widehat{{\varXi }}_{L}\right) \\ \det \left( I_n-\widehat{q}\bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{P}_{R}^{-1} \mathcal {K}_R\right)&=\det \left( I_{m_-}-\widehat{q}\bar{P}_{R}^{-1} \mathcal {K}_R\bar{Z}_4\bar{{\varDelta }}^{ }_-\right) =\det \left( \bar{P}_{R}^{-1} \right) \det \left( \widehat{{\varXi }}_{R}\right) . \end{aligned}$$

That is, we can solve for \((I_N\otimes \mathcal {E})\widehat{W}\) in (67b) if the matrices \(\widehat{{\varXi }}_{L,R}\) are non-singular, where \(\widehat{{\varXi }}_{L,R}\) are precisely the matrices that appear in the penalty parameters in (35). We are thus interested in the regularity of the matrices

$$\begin{aligned} \widehat{{\varXi }}_L=\bar{P}_{L}+\widehat{q}\mathcal {K}_L\bar{Z}_2\bar{{\varDelta }}^{ }_+ ,&\widehat{{\varXi }}_R=\bar{P}_{R}-\widehat{q}\mathcal {K}_R\bar{Z}_4\bar{{\varDelta }}^{ }_-, \end{aligned}$$

which are inverted in (35); if \(\widehat{q}\) is replaced by \(q\), we instead consider \({\varXi }_{L,R}\) from (43). Below we show that \(\widehat{{\varXi }}_L\) is non-singular for well-posed problems. First, using (6), (32) and (36) we obtain

$$\begin{aligned} \left[ \begin{array}{cc}-\mathcal {E}&0_{n,n} \end{array}\right] \bar{Z}^{-T}=\left[ \begin{array}{ccc}\bar{Z}_2\bar{{\varDelta }}_+&0_{n,m_0}&\bar{Z}_4\bar{{\varDelta }}_-\end{array}\right] \end{aligned}$$

and realize that the matrices \(\bar{Z}_2\) and \(\bar{Z}_4\) scale with \(\mathcal {E}\) as \(\bar{Z}_2=\mathcal {E}\widetilde{Z_2}\) and \(\bar{Z}_4=\mathcal {E}\widetilde{Z_4}\), where

$$\begin{aligned} \widetilde{Z_2}=\left[ \begin{array}{cc}I_{n}&0_{n,n} \end{array}\right] \bar{Z}^{-T}\left[ \begin{array}{c}-I_{m_+}\\ 0_{(m-m_+),m_+} \end{array}\right] \bar{{\varDelta }}_+^{-1},&\widetilde{Z_4}=\left[ \begin{array}{cc}I_n&0_{n,n} \end{array}\right] \bar{Z}^{-T}\left[ \begin{array}{c}0_{(m-m_-),m_-}\\ I_{m_-} \end{array}\right] \bar{{\varDelta }}_-^{-1}. \end{aligned}$$

Secondly, using (10), (32) and (36) leads to \(\mathcal {G}_L=\bar{P}_{L}(\bar{Z}_2^T+\bar{R}_L\bar{Z}_4^T)\) and we can rewrite \(\widehat{{\varXi }}_L\) as

$$\begin{aligned} \widehat{{\varXi }}_L&=\bar{P}_{L}\left( I_{m_+}+\widehat{q}\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \mathcal {E}\widetilde{Z_2}\bar{{\varDelta }}^{ }_+\right) , \end{aligned}$$

where we have used that \(\mathcal {G}_L=\mathcal {K}_L\mathcal {E}\). Now Lemma 1 yields \(\det (\widehat{{\varXi }}_L) =\det (\bar{P}_{L}) \det \left( {\varUpsilon }\right) \), where \({\varUpsilon }\equiv I_{n}+\widehat{q}\mathcal {E}^{1/2}\widetilde{Z_2}\bar{{\varDelta }}^{}_+\big (\widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\big )\mathcal {E}^{1/2}\) and where we by \(\mathcal {E}^{1/2}\) refer to the principal square root of \(\mathcal {E}\). The permutation matrix \(\bar{P}_{L}\) is invertible but \({\varUpsilon }\) must be checked. Thus we compute

$$\begin{aligned} {\varUpsilon }{\varUpsilon }^T =&\, I_{n}+\widehat{q}^2 \mathcal {E}^{1/2}\widetilde{Z_2}\bar{{\varDelta }}^{}_+\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \mathcal {E}\left( \widetilde{Z_2}+\widetilde{Z_4}\bar{R}_L^T\right) \bar{{\varDelta }}^{}_+ \widetilde{Z_2}^T \mathcal {E}^{1/2}\\&+\widehat{q}\mathcal {E}^{1/2}\left( \left( \widetilde{Z_2}+\widetilde{Z_4}\bar{R}_L^T\right) \bar{{\varDelta }}^{ }_+ \widetilde{Z_2}^T +\widetilde{Z_2}\bar{{\varDelta }}^{ }_+\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \right) \mathcal {E}^{1/2}. \end{aligned}$$

Next, thanks to the condition for well-posedness, \(\bar{\mathcal {C}}_L=\bar{{\varDelta }}^{ }_-+\bar{R}_L^T\bar{{\varDelta }}^{ }_+\bar{R}_L\le 0\), we obtain

$$\begin{aligned} 0&\le \left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) ^T\bar{{\varDelta }}_+^{}\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \\&=\widetilde{Z_2}\bar{{\varDelta }}_+^{}\widetilde{Z_2}^T+\widetilde{Z_4}\bar{R}_L^T\bar{{\varDelta }}_+^{}\widetilde{Z_2}^T+\widetilde{Z_2}\bar{{\varDelta }}_+^{}\bar{R}_L\widetilde{Z_4}^T+\widetilde{Z_4}\bar{R}_L^T\bar{{\varDelta }}_+^{}\bar{R}_L\widetilde{Z_4}^T\\&\le \widetilde{Z_2}\bar{{\varDelta }}_+^{}\widetilde{Z_2}^T+\widetilde{Z_4}\bar{R}_L^T\bar{{\varDelta }}_+^{}\widetilde{Z_2}^T+\widetilde{Z_2}\bar{{\varDelta }}_+^{}\bar{R}_L\widetilde{Z_4}^T-\widetilde{Z_4}\bar{{\varDelta }}_-^{}\widetilde{Z_4}^T. \end{aligned}$$

Inserting this into \({\varUpsilon }{\varUpsilon }^T\) above, gives (recall that \(\widehat{q}\) is positive)

$$\begin{aligned} {\varUpsilon }{\varUpsilon }^T \ge&\, I_{n}+\widehat{q}^2 \mathcal {E}^{1/2}\widetilde{Z_2}\bar{{\varDelta }}^{}_+\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \mathcal {E}\left( \widetilde{Z_2}+\widetilde{Z_4}\bar{R}_L^T\right) \bar{{\varDelta }}^{}_+ \widetilde{Z_2}^T \mathcal {E}^{1/2}\\&+\widehat{q}\mathcal {E}^{1/2}\left( \widetilde{Z_2}\bar{{\varDelta }}^{ }_+\widetilde{Z_2}^T+\widetilde{Z_4}\bar{{\varDelta }}_-^{}\widetilde{Z_4}^T\right) \mathcal {E}^{1/2}. \end{aligned}$$

We now let \(\mathcal {E}=X{\varLambda }X^T\) be the eigendecomposition of \(\mathcal {E}\), with—for simplicity—the eigenvalues sorted as \({\varLambda }=\text {diag}({\varLambda }_+, {\varLambda }_0)\), where \({\varLambda }_+>0\) and \({\varLambda }_0=0\). Furthermore, we denote

$$\begin{aligned} X^T\left( \widetilde{Z_2}\bar{{\varDelta }}^{ }_+\widetilde{Z_2}^T+\widetilde{Z_4}\bar{{\varDelta }}_-^{}\widetilde{Z_4}^T\right) X=\left[ \begin{array}{cc}{\varTheta }_{1}&{}{\varTheta }_{3}\\ {\varTheta }_{2}&{}{\varTheta }_{4}\end{array}\right] , \end{aligned}$$

where \({\varTheta }_{1}\) and \({\varTheta }_{4}\) have the same sizes as \({\varLambda }_+\) and \({\varLambda }_0\), respectively. Using (32), (36) and (7) leads to \(\bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{Z}_2^T+\bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{Z}_4^T =0\), that is

$$\begin{aligned} 0&=\bar{Z}_2\bar{{\varDelta }}^{ }_+\bar{Z}_2^T+\bar{Z}_4\bar{{\varDelta }}^{ }_-\bar{Z}_4^T =\mathcal {E}\left( \widetilde{Z_2}\bar{{\varDelta }}^{ }_+\widetilde{Z_2}^T+\widetilde{Z_4}\bar{{\varDelta }}_-^{}\widetilde{Z_4}^T\right) \mathcal {E}=X\left[ \begin{array}{cc}{\varLambda }_+{\varTheta }_{1}{\varLambda }_+&{}0\\ 0&{}0\end{array}\right] X^T, \end{aligned}$$

which means that \({\varTheta }_{1}=0\) must hold. This in turn leads to

$$\begin{aligned} \mathcal {E}^{1/2}\left( \widetilde{Z_2}\bar{{\varDelta }}^{ }_+\widetilde{Z_2}^T+\widetilde{Z_4}\bar{{\varDelta }}_-^{}\widetilde{Z_4}^T\right) \mathcal {E}^{1/2}&=X\left[ \begin{array}{cc}{\varLambda }_+^{1/2}{\varTheta }_{1}{\varLambda }_+^{1/2}&{}0\\ 0&{}0\end{array}\right] X^T=0, \end{aligned}$$

where we have used that \(\mathcal {E}^{1/2}=X{\varLambda }^{1/2}X^T\). Thus \({\varUpsilon }{\varUpsilon }^T\) is non-singular, since

$$\begin{aligned} {\varUpsilon }{\varUpsilon }^T&\ge I_{n}+\widehat{q}^2 \mathcal {E}^{1/2}\widetilde{Z_2}\bar{{\varDelta }}^{}_+\left( \widetilde{Z_2}^T+\bar{R}_L\widetilde{Z_4}^T\right) \mathcal {E}\left( \widetilde{Z_2}+\widetilde{Z_4}\bar{R}_L^T\right) \bar{{\varDelta }}^{}_+ \widetilde{Z_2}^T \mathcal {E}^{1/2}\ge I_{n}>0. \end{aligned}$$

It follows that \({\varUpsilon }\) is non-singular, since \({\text {rank}}({\varUpsilon }{\varUpsilon }^T) = {\text {rank}}({\varUpsilon }) \), and consequently so is \(\widehat{{\varXi }}_L\), since \(\det (\widehat{{\varXi }}_L) =\det (\bar{P}_{L}) \det \left( {\varUpsilon }\right) \). The same derivations can be repeated for the right boundary.
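The final step rests on two linear-algebra facts: \({\text {rank}}(MM^T)={\text {rank}}(M)\) for any real matrix \(M\), and \(UU^T\ge I\) forces the smallest singular value of \(U\) to be at least one, so \(U\) is invertible. A quick numerical illustration with hypothetical matrices (not the paper's \({\varUpsilon }\)):

```python
import numpy as np

rng = np.random.default_rng(1)

# rank(M M^T) = rank(M) for any real matrix M; here rank 3 by construction
M = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(M @ M.T) == 3

# Build U with U U^T = S >= I; then every singular value of U is >= 1,
# so U is invertible.
A = rng.standard_normal((4, 2))
Spd = np.eye(4) + A @ A.T                    # S >= I by construction
w, V = np.linalg.eigh(Spd)
U = V @ np.diag(np.sqrt(w))                  # U U^T = V diag(w) V^T = S
assert np.min(np.linalg.svd(U, compute_uv=False)) >= 1.0 - 1e-12
```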

Appendix C: Proof of Proposition 1 and Examples of \(q\)

In Proposition 1 we claim that the inverse of \(\widetilde{A}_{{S}}= A_{{S}}+\delta E_0\) has the structure \(\widetilde{A}_{{S}}^{-1}=J/\delta +K_0\) and that the corners of \(\widetilde{M}^{-1}=S\widetilde{A}_{{S}}^{-1}S^T\) are independent of \(\delta \). We prove this below.

Proof

First we make sure that \(\widetilde{A}_{{S}}\) is non-singular. By numerically investigating the eigenvalue of \(\widetilde{A}_{{S}}\) that is closest to zero, we see that (for all operators in this paper) it scales almost as \(c(\delta )/N\), where \(c(\delta )\) is nearly independent of the operator and \(c(\delta )=0\) only if \(\delta =0\). We now show that the inverse of \(\widetilde{A}_{{S}}\) is \(J/\delta +K_0\). We denote the parts of \( A_{{S}}\), \(\widetilde{A}_{{S}}\), \(K_0\) and J as

$$\begin{aligned} A_{{S}}=\left[ \begin{array}{cc}a&{}\mathbf {a}^T\\ \mathbf {a}&{}\bar{A}\end{array}\right] ,&\widetilde{A}_{{S}}=\left[ \begin{array}{cc}a+\delta &{}\mathbf {a}^T\\ \mathbf {a}&{}\bar{A}\end{array}\right] ,&K_0=\left[ \begin{array}{cc}{0}&{}\mathbf {0}^T\\ \mathbf {0}&{}\bar{A}^{-1}\end{array}\right] ,&J=\left[ \begin{array}{cc}1 &{}\mathbf {1}^T\\ \mathbf {1}&{}\bar{J}\end{array}\right] , \end{aligned}$$

where \(\mathbf {a}\) and \(\mathbf {1}=[1, 1, \ldots , 1]^T\) are vectors and \(\bar{A}\) and \(\bar{J}\) are matrices of size \(N\times N\). Since \(A_{{S}}\) consists of consistent difference operators, it yields zero when operating on constants. Therefore, \(A_{{S}}J=0\) (because J is an all-ones matrix) and \(\mathbf {a}+\bar{A}\mathbf {1}=\mathbf {0}\). Note that the relation \(\mathbf {a}+\bar{A}\mathbf {1}=\mathbf {0}\) leads to \(\left[ \begin{array}{cc} \mathbf {a}&\bar{A}\end{array}\right] =\bar{A}B\), where \(B=\left[ \begin{array}{cc} -\mathbf {1}&\bar{I}\end{array}\right] \) is an \(N\times (N+1)\) matrix of rank N, in which case it holds that \( {\text {rank}}(\bar{A}B) = {\text {rank}}(\bar{A})\). Moreover, since \(\widetilde{A}_{{S}}\) is non-singular, \(\left[ \begin{array}{cc} \mathbf {a}&\bar{A}\end{array}\right] \) must have full rank N. Hence \( {\text {rank}}(\bar{A})= {\text {rank}}(\bar{A}B) =N\), i.e., \(\bar{A}\) has full rank N and is invertible. Next, due to the structure of \(K_0\), we know that \(E_0K_0=0\). Thus we have

$$\begin{aligned} (A_{{S}}+\delta E_0)(J/\delta +K_0) =A_{{S}}K_0+ E_0J =\left[ \begin{array}{cc}1 &{}\mathbf {a}^T\bar{A}^{-1}+\mathbf {1}^T\\ \mathbf {0}&{}\bar{I}\end{array}\right] =I. \end{aligned}$$

In the last step we have used that \(\mathbf {a}^T+\mathbf {1}^T\bar{A}^T=\mathbf {0}^T\) and that \(\bar{A}\) is symmetric.

The first and the last row of the matrix S are consistent difference stencils. We can thus write \(S=[\mathbf {s}_0, \times , \mathbf {s}_N]^T\), where the vectors \(\mathbf {s}_{0, N}\) have the property \(\mathbf {s}_{0, N}^TJ=0\). The interior rows of S are marked by a \(\times \) because they are not uniquely defined. We compute

$$\begin{aligned} \widetilde{M}^{-1}&=S\widetilde{A}_{{S}}^{-1}S^T =S\left( J/\delta +K_0\right) S^T =\left[ \begin{array}{ccc}\mathbf {s}_0^TK_0\mathbf {s}_0 &{}\times &{} \mathbf {s}_0^TK_0\mathbf {s}_N\\ \times &{} \times &{} \times \\ \mathbf {s}_N^TK_0\mathbf {s}_0 &{}\times &{}\mathbf {s}_N^T K_0\mathbf {s}_N \end{array}\right] . \end{aligned}$$

We see that the corner elements of \(\widetilde{M}^{-1}\) are independent of \(\delta \). We conclude that if S is defined such that it is non-singular, the constants in (42) can be computed using (60). \(\square \)

As an example, consider the narrow (2, 0) order operator in Table 1, specified by \(D_2\) below and associated with the following matrices \(H\) and S

$$\begin{aligned} \begin{aligned} D_2&=\frac{1}{h^2}\left[ \begin{array}{ccccc} 0&{}0\\ 1&{}-2&{}1\\ &{}\ddots &{}\ddots &{}\ddots \\ &{}&{}1&{}-2&{}1\\ &{}&{}&{}0&{}0 \end{array}\right] ,&H=h\left[ \begin{array}{ccccc} 1/2\\ &{}1\\ &{}&{}\ddots \\ &{}&{}&{}1\\ &{}&{}&{}&{}1/2\end{array}\right] ,\\ S&=\frac{1}{h}\left[ \begin{array}{ccccc} -1&{}1\\ \times &{}\times &{}\times &{}\times &{}\times \\ \vdots &{}\vdots &{}\vdots &{}\vdots &{}\vdots \\ \times &{}\times &{}\times &{}\times &{}\times \\ &{}&{}&{}-1&{}1\end{array}\right] . \end{aligned} \end{aligned}$$
(73)
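The operators in (73) are straightforward to assemble; a sketch (the interior rows of S, marked \(\times \) in (73), are not uniquely defined, so filling them with central differences below is an arbitrary choice made for the sketch):

```python
import numpy as np

def narrow_20_operators(N, h):
    """Assemble D2, H and one admissible S from (73) on N + 1 grid points.
    The interior rows of S are not uniquely defined; central differences
    are used here as an arbitrary choice."""
    D2 = np.zeros((N + 1, N + 1))
    for i in range(1, N):
        D2[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
    D2 /= h * h                               # first and last rows stay zero
    H = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
    S = np.zeros((N + 1, N + 1))
    S[0, :2] = [-1.0, 1.0]                    # one-sided boundary stencils
    S[N, N - 1:] = [-1.0, 1.0]
    for i in range(1, N):
        S[i, i - 1], S[i, i + 1] = -0.5, 0.5
    return D2, H, S / h

# Consistency check: D2 is exact on quadratics away from the boundary,
# and the boundary rows of S differentiate linear functions exactly.
D2, H, S = narrow_20_operators(10, 0.1)
x = np.linspace(0.0, 1.0, 11)
print(np.allclose((D2 @ x**2)[1:-1], 2.0),
      np.isclose((S @ x)[0], 1.0), np.isclose((S @ x)[-1], 1.0))  # True True True
```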

Using (73) and (39) we obtain \(A_{{S}}\) such that we can compute \(K_0\) and \(\widetilde{M}^{-1}\) as

$$\begin{aligned} K_0=h\left[ \begin{array}{ccccc}0 &{}0 &{}\ldots &{}0&{}0 \\ 0 &{}1 &{}\ldots &{}1&{}1\\ \vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ 0 &{}1 &{}\ldots &{}N-1&{}N-1 \\ 0 &{}1 &{}\ldots &{}N-1&{}N\end{array}\right] ,&\widetilde{M}^{-1} =\frac{1}{h}\left[ \begin{array}{ccccc}1 &{}\times &{}\ldots &{}\times &{}0 \\ \times &{}\times &{}\ldots &{}\times &{}\times \\ \vdots &{}\vdots &{} &{}\vdots &{}\vdots \\ \times &{}\times &{}\ldots &{}\times &{}\times \\ 0 &{}\times &{}\ldots &{}\times &{}1\end{array}\right] \end{aligned}$$

In this case we get \(q_0=q_N=1/h\) and \(q_c=0\), such that \(q=1/h\).
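As a cross-check of the displayed \(K_0\): its lower-right \(N\times N\) block is \(\bar{A}^{-1}\), with entries \(h\min (i,j)\), which matches \(\bar{A}\) being the tridiagonal matrix \(h^{-1}\,{\text {tridiag}}(-1,2,-1)\) with last diagonal entry \(1/h\). The form of \(A_{{S}}\) in (39) is not reproduced here, so the sketch below only verifies this inverse formula:

```python
import numpy as np

N, h = 8, 0.1
# Tridiagonal matrix whose inverse has entries h * min(i, j), i, j = 1..N,
# matching the lower-right block of the K0 displayed above.
Abar = (2.0 * np.eye(N) - np.diag(np.ones(N - 1), 1)
        - np.diag(np.ones(N - 1), -1)) / h
Abar[-1, -1] = 1.0 / h

idx = np.arange(1, N + 1)
K_block = h * np.minimum.outer(idx, idx)
print(np.allclose(np.linalg.inv(Abar), K_block))  # True
```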

In addition to the operator discussed above, we use the diagonal-norm operators in [13]. For the higher order accurate operators found in [13], \(q\) varies with N. For example, for the narrow (4, 2) order accurate operator, we have

$$\begin{aligned} \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} N&{}q_0h&{}q_ch&{}qh\\ \hline 8&{} 3.986350339808304 &{} 0.000041141179445 &{} 3.986391480987749\\ 9&{} 3.986350339313381 &{} 0.000002953803786 &{} 3.986353293117168\\ 10&{} 3.986350339310830 &{} 0.000000212073570 &{} 3.986350551384400\\ 11&{} 3.986350339310817 &{} 0.000000015226197 &{} 3.986350354537014\\ 12&{} 3.986350339310817 &{}0.000000001093192 &{} 3.986350340404008\\ \end{array} \end{aligned}$$

Since the values differ only marginally, it is practical to use the largest value (the one for \(N=8\)) regardless of the number of grid points.

Appendix D: Time-Dependent Numerical Examples

For simplicity we mainly consider stationary numerical examples. Below we give a couple of examples confirming superconvergence for time-dependent problems as well.

D.1 The Heat Equation with Dirichlet Boundary Conditions

We consider the heat equation. We solve \(\mathcal {U}_t=\varepsilon \mathcal {U}_{xx}+\mathcal {F}(x,t)\) with \(\varepsilon =0.01\) and the exact solution \(\mathcal {U}(x,t)=\cos (30x)+\sin (20x)\cos (10t)+\sin (35t)\). For the time propagation the classical 4th order accurate Runge–Kutta scheme is used, with sufficiently small time steps, \({\varDelta }t=10^{-4}\), such that the spatial errors dominate. In Fig. 13 the errors obtained using the narrow (6, 3) order scheme are shown as a function of time.
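The manufactured-solution experiment can be reproduced in simplified form. The sketch below is an illustration only, not the paper's narrow (6, 3) SBP–SAT scheme: it uses standard second-order central differences and evolves the Dirichlet boundary values from the exact data; these discretization choices are assumptions made for the sketch. The forcing \(\mathcal {F}\) is computed so that the stated \(\mathcal {U}(x,t)\) is the exact solution, and classical RK4 with \({\varDelta }t=10^{-4}\) is used in time.

```python
import numpy as np

eps = 0.01
U   = lambda x, t: np.cos(30*x) + np.sin(20*x)*np.cos(10*t) + np.sin(35*t)
Ut  = lambda x, t: -10*np.sin(20*x)*np.sin(10*t) + 35*np.cos(35*t)
Uxx = lambda x, t: -900*np.cos(30*x) - 400*np.sin(20*x)*np.cos(10*t)
F   = lambda x, t: Ut(x, t) - eps*Uxx(x, t)     # manufactured forcing

N, T, dt = 200, 0.1, 1e-4
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

def rhs(u, t):
    du = np.empty_like(u)
    du[1:-1] = eps*(u[2:] - 2*u[1:-1] + u[:-2])/h**2 + F(x[1:-1], t)
    du[0], du[-1] = Ut(0.0, t), Ut(1.0, t)      # evolve Dirichlet data exactly
    return du

u = U(x, 0.0)
for n in range(round(T/dt)):                    # classical 4th-order Runge-Kutta
    t = n*dt
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5*dt*k1, t + 0.5*dt)
    k3 = rhs(u + 0.5*dt*k2, t + 0.5*dt)
    k4 = rhs(u + dt*k3, t + dt)
    u = u + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

err = np.max(np.abs(u - U(x, T)))               # dominated by the spatial error
```

With these settings the time step is well inside the explicit stability limit, so the remaining error is spatial, as in the paper's experiments.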

Fig. 13 Errors when solving the heat equation with Dirichlet boundary conditions, using the narrow (6, 3) order scheme: a solution error \(\Vert \texttt {e}\Vert _{{}_H}\); b functional error \(|\texttt {E}|\) with \(\mathcal {G}(x)=1\)

The corresponding spatial order of convergence (at time \(t=1\)) is shown in Table 2. The simulations confirm the steady-state results, namely that both \(\omega =2\varepsilon \) and \(\omega =q\varepsilon \) give superconvergent functionals, but that choosing the factorization parameter as \(\omega \sim \varepsilon /h\) improves the solution significantly compared to using the eigendecomposition.

D.2 The Heat Equation with Neumann Boundary Conditions

We solve \(\mathcal {U}_t=\varepsilon \mathcal {U}_{xx}+\mathcal {F}(x,t)\) again, but this time with Neumann boundary conditions; the penalty parameters are now given by (65) with \(a=0\), \(\varepsilon =0.01\), \(\alpha _{{}_{L,R}}=0\) and \(\beta _{{}_{L,R}}=1\). In contrast to the Dirichlet case, the spectral radius \(\rho \) depends only weakly on \(\omega \), and we can therefore let \(\omega \rightarrow \infty \) (we can use \(\omega =q\varepsilon \) here too; it gives the same convergence rates as \(\omega =\infty \)). In Table 3 we show the errors and convergence orders (at time \(t=1\)) for the same setup as in the previous section, that is, solving with the 4th order Runge–Kutta scheme with \({\varDelta }t=10^{-4}\), the exact solution \(\mathcal {U}(x,t)=\cos (30x)+\sin (20x)\cos (10t)+\sin (35t)\) and the weight function \(\mathcal {G}=1\). We note that the convergence rates behave similarly to the Dirichlet case.

Table 2 The errors and convergence rates at \(t=1\) for the narrow (6,3) order scheme, when using Dirichlet boundary conditions
Table 3 The errors and convergence rates at \(t=1\) for the narrow (6,3) order scheme, when using Neumann boundary conditions


Cite this article

Eriksson, S. A Dual Consistent Finite Difference Method with Narrow Stencil Second Derivative Operators. J Sci Comput 75, 906–940 (2018). https://doi.org/10.1007/s10915-017-0569-6
