
Spectral Asymptotics for More General Operators in One Dimension


Part of the book series: Pseudo-Differential Operators (PDO, volume 14)

Abstract

In this chapter, we generalize the results of Chap. 3. The results and the main ideas are close, but not identical, to the ones of Hager (Ann Henri Poincaré 7(6):1035–1064, 2006). We will use some h-pseudodifferential machinery, see for instance Dimassi and Sjöstrand (Spectral Asymptotics in the Semi-classical Limit, London Mathematical Society Lecture Note Series, vol 268. Cambridge University Press, Cambridge, 1999).


References

  1. M. Dimassi, J. Sjöstrand, Spectral Asymptotics in the Semi-Classical Limit. London Mathematical Society Lecture Note Series, vol. 268 (Cambridge University Press, Cambridge, 1999)


  2. V.I. Girko, Theory of Random Determinants. Mathematics and Its Applications (Kluwer Academic Publishers, Dordrecht, 1990)


  3. M. Hager, Instabilité spectrale semiclassique d’opérateurs non-autoadjoints. II. Ann. Henri Poincaré 7(6), 1035–1064 (2006)


  4. M. Hager, J. Sjöstrand, Eigenvalue asymptotics for randomly perturbed non-selfadjoint operators. Math. Ann. 342(1), 177–243 (2008). http://arxiv.org/abs/math/0601381


  5. B. Helffer, J. Sjöstrand, Multiple wells in the semiclassical limit. I. Commun. Partial Differ. Equ. 9(4), 337–408 (1984)


  6. F. Hérau, J. Sjöstrand, C. Stolk, Semiclassical analysis for the Kramers-Fokker-Planck equation. Commun. Partial Differ. Equ. 30(5–6), 689–760 (2005)


  7. M. Hitrik, Boundary spectral behavior for semiclassical operators in dimension one. Int. Math. Res. Not. 2004(64), 3417–3438 (2004)


  8. A. Melin, J. Sjöstrand, Fourier Integral Operators with Complex-Valued Phase Functions. Fourier Integral Operators and Partial Differential Equations (Colloq. Internat., Univ. Nice, Nice, 1974), pp. 120–223. Lecture Notes in Mathematics, vol. 459 (Springer, Berlin, 1975)


  9. B. Simon, Semiclassical analysis of low lying eigenvalues. I. Nondegenerate minima: asymptotic expansions. Ann. Inst. H. Poincaré Sect. A (N.S.) 38(3), 295–308 (1983); erratum in ibid. 40(2), 224 (1984)


  10. M. Vogel, The precise shape of the eigenvalue intensity for a class of non-selfadjoint operators under random perturbations. Ann. Henri Poincaré 18(2), 435–517 (2017). http://arxiv.org/abs/1401.8134



5.A Appendix: Estimates on Determinants of Gaussian Random Matrices

In this appendix, we follow Section 7 in [56]. Consider first a random vector

$$\displaystyle \begin{aligned} u(\omega )^{\mathrm{t}}=(\alpha _1(\omega ),\ldots,\alpha _N(\omega ))\in{\mathbf{C}}^N, \end{aligned} $$
(5.A.1)

where \(\alpha _1,\ldots ,\alpha _N\) are independent complex Gaussian random variables with a \({\mathcal {N}}_{\mathbf {C}}(0,1)\) law:

$$\displaystyle \begin{aligned} (\alpha _j)_*(P)={1\over \pi }e^{-\vert z\vert ^2}L(dz)=:f(z)L(dz). \end{aligned} $$
(5.A.2)

The distribution of u is

$$\displaystyle \begin{aligned} u_*(P)={1\over \pi ^N}e^{-\vert u\vert ^2}L_{{\mathbf{C}}^N}(du).\end{aligned} $$
(5.A.3)

If \(U:{\mathbf{C}}^N\to {\mathbf{C}}^N\) is unitary, then \(Uu\) has the same distribution as \(u\).

We next compute the distribution of \(\vert u(\omega )\vert ^2\). The distribution of \(\vert \alpha _j(\omega )\vert ^2\) is \(\mu (r)\,dr\), where

$$\displaystyle \begin{aligned}\mu (r)=-H(r){d\over dr}e^{-r}=e^{-r}H(r),\end{aligned} $$

and \(H(r)=1_{[0,\infty [}(r)\) is the Heaviside function. The Fourier transform of \(\mu \) is given by

$$\displaystyle \begin{aligned}\widehat{\mu }(\rho )=\int_0^\infty e^{-r}e^{-ir\rho }dr={1\over 1+i\rho }.\end{aligned} $$

We have \(\vert u(\omega )\vert ^2=\sum _1^N \vert \alpha _j(\omega )\vert ^2\), and since the \(\vert \alpha _j(\omega )\vert ^2\) are independent and identically distributed, the distribution of \(\vert u(\omega )\vert ^2\) is \(\mu *\cdots *\mu \,dr=\mu ^{*N}\,dr\), where \(*\) denotes convolution. For r > 0, we get

$$\displaystyle \begin{aligned}\begin{aligned} \mu ^{*N}(r)&={1\over 2\pi }\int e^{ir\rho }{1\over (1+i\rho )^N}d\rho \\ &= {1\over (N-1)!2\pi }\int_\gamma e^{ir\rho }\left(-{1\over i}{d\over d\rho }\right)^{N-1}\left({1\over 1+i\rho }\right)d\rho\\ &= {1\over (N-1)!2\pi }\int_\gamma \left({1\over i}{d\over d\rho }\right)^{N-1}(e^{ir\rho })\left({1\over 1+i\rho}\right)d\rho\\ &= {r^{N-1}\over (N-1)!2\pi }\int_\gamma e^{ir\rho }{1\over 1+i\rho }d\rho \\ &={r^{N-1}e^{-r}\over (N-1)!},\vspace{-2pt} \end{aligned} \end{aligned}$$

where \(\gamma \) is a small, simple, positively oriented loop around the pole \(\rho =i\). Hence

$$\displaystyle \begin{aligned} \mu ^{*N}dr={r^{N-1}e^{-r}\over (N-1)!}H(r) dr.\end{aligned} $$
(5.A.4)

Recall here that

$$\displaystyle \begin{aligned}\int_0^\infty r^{N-1}e^{-r}dr=\Gamma (N)=(N-1)!, \end{aligned}$$

so \(\mu ^{*N}\) is indeed normalized.
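The identification of \(\mu ^{*N}\) with the \(\Gamma (N)\) density is easy to test numerically. The following is a minimal sketch (not part of the original text), assuming Python with numpy and scipy; the convention \({\mathcal {N}}_{\mathbf {C}}(0,1)\) is implemented by taking real and imaginary parts of variance \(1/2\), so that \({\mathbf{E}}\vert \alpha _j\vert ^2=1\).

```python
# Sketch: sample u in C^N with i.i.d. N_C(0,1) entries and compare the
# empirical density of |u|^2 with r^{N-1} e^{-r} / (N-1)!  -- see (5.A.4).
import numpy as np
from scipy.stats import gamma

N, M = 5, 200_000
rng = np.random.default_rng(0)
# alpha_j ~ N_C(0,1): real and imaginary parts are N(0, 1/2).
u = (rng.normal(0.0, np.sqrt(0.5), (M, N))
     + 1j * rng.normal(0.0, np.sqrt(0.5), (M, N)))
r = np.sum(np.abs(u) ** 2, axis=1)               # samples of |u(omega)|^2

hist, edges = np.histogram(r, bins=60, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print("max density error:", np.max(np.abs(hist - gamma.pdf(mid, a=N))))
print("mean of |u|^2:", r.mean(), "(close to N =", N, ")")
```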

The expectation value \({\mathbf{E}}(\vert \alpha _j\vert ^2)=\langle \vert \alpha _j\vert ^2\rangle \) of each \(\vert \alpha _j(\omega )\vert ^2\) is equal to 1, so

$$\displaystyle \begin{aligned} \langle \vert u(\omega )\vert ^2\rangle =N. \end{aligned} $$
(5.A.5)

We next estimate the probability that \(\vert u(\omega )\vert ^2\) is very small. It will be convenient to pass to the variable \(\ln (\vert u(\omega )\vert ^2)\), which has the distribution obtained from (5.A.4) by replacing \(r\) by \(t=\ln r\), so that \(r=e^t\), \(dr/r=dt\). Thus \(\ln (\vert u(\omega )\vert ^2) \) has the distribution

$$\displaystyle \begin{aligned} {r^Ne^{-r}\over (N-1)!}H(r){dr\over r}={e^{Nt-e^t}\over (N-1)!}dt=:\nu _N(t)dt.\end{aligned} $$
(5.A.6)

Now consider a random matrix

$$\displaystyle \begin{aligned} \begin{pmatrix}u_1&u_2&\ldots&u_N \end{pmatrix}, \end{aligned} $$
(5.A.7)

where the \(u_k(\omega )\) are random vectors in \({\mathbf{C}}^N\) (here viewed as column vectors) of the form

$$\displaystyle \begin{aligned}u_k(\omega )^{\mathrm{t}}=(\alpha _{1,k}(\omega ),\ldots,\alpha _{N,k}(\omega )), \end{aligned}$$

and all the \(\alpha _{j,k}\) are independent with the same law (5.A.2).

Then

$$\displaystyle \begin{aligned} \det (u_1\,u_2\ldots u_N)=\det (u_1\,\widetilde{u}_2\ldots\widetilde{u}_N), \end{aligned} $$
(5.A.8)

where the \(\widetilde {u}_j\) are obtained in the following way (assuming the \(u_j\) to be linearly independent, as they almost surely are): \(\widetilde {u}_2\) is the orthogonal projection of \(u_2\) onto the orthogonal complement \((u_1)^\perp \), \(\widetilde {u}_3\) is the orthogonal projection of \(u_3\) onto \((u_1,u_2)^\perp =(u_1,\widetilde {u}_2)^\perp \), and so on.

If \(u_1\) is fixed, then \(\widetilde {u}_2\) can be viewed as a random vector in \({\mathbf{C}}^{N-1}\) of the type (5.A.1), (5.A.2); with \(u_1\), \(u_2\) fixed, we can view \(\widetilde {u}_3\) as a random vector of the same type in \({\mathbf{C}}^{N-2}\), and so on. On the other hand,

$$\displaystyle \begin{aligned} \vert \det (u_1\, u_2\ldots u_N)\vert ^2=\vert u_1\vert ^2\vert \widetilde{u}_2\vert ^2\cdots\vert \widetilde{u}_N\vert ^2. \end{aligned} $$
(5.A.9)

The squared lengths \(\vert u_1\vert ^2 , \vert \widetilde {u}_2\vert ^2 ,\ldots ,\vert \widetilde {u}_N\vert ^2 \) are independent random variables with distributions \(\mu ^{*N}\,dr\), \(\mu ^{*(N-1)}\,dr\), …, \(\mu \,dr\). This reduction plays an important role in [47].
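A short numerical check of this reduction (a sketch under the same sampling convention as above, not from the text) compares the law of \(\ln \vert \det (u_1\ldots u_N)\vert ^2\) with that of a sum of logarithms of independent \(\Gamma (N),\ldots ,\Gamma (1)\) variables.

```python
# Sketch: |det|^2 should have the law of a product of independent
# Gamma(N), Gamma(N-1), ..., Gamma(1) variables, as in (5.A.9).
import numpy as np
from scipy.stats import ks_2samp

N, M = 4, 100_000
rng = np.random.default_rng(1)

A = (rng.normal(0.0, np.sqrt(0.5), (M, N, N))
     + 1j * rng.normal(0.0, np.sqrt(0.5), (M, N, N)))
logdet2 = 2.0 * np.linalg.slogdet(A)[1]          # ln|det|^2, sample by sample

# Sum of logs of independent Gamma(k) variables, k = N, ..., 1.
g = np.sum(np.log(rng.gamma(np.arange(N, 0, -1), size=(M, N))), axis=1)

print(ks_2samp(logdet2, g))                      # KS statistic small: same law
```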

Taking the logarithm of (5.A.9), we get in the right-hand side a sum of independent random variables with distributions \(\nu _N\,dt,\ldots ,\nu _1\,dt\), so the distribution of the random variable \(\ln \vert \det (u_1\,u_2\ldots u_N)\vert ^2\) is equal to

$$\displaystyle \begin{aligned} (\nu _1*\nu _2*\cdots*\nu _N)dt, \end{aligned} $$
(5.A.10)

with ν j defined in (5.A.6).

We have

$$\displaystyle \begin{aligned}\nu _N(t)\le \widetilde{\nu }_N(t):={1\over (N-1)!}e^{Nt}. \end{aligned}$$

Choose \(x(N)\in {\mathbf{R}}\) such that

$$\displaystyle \begin{aligned} \int_{-\infty }^{x(N)}\widetilde{\nu }_N(t)dt=1. \end{aligned} $$
(5.A.11)

More explicitly, we have x(N) ≥ 0 and

$$\displaystyle \begin{aligned} {1\over N!}e^{Nx(N)}=1,\quad x(N)={1\over N}\ln (N!)={1\over N}\ln \Gamma (N+1). \end{aligned} $$
(5.A.12)

In [56] we used Stirling’s formula to get

$$\displaystyle \begin{aligned} x(N)= \ln N+{1\over 2N}\ln N-1+{C_0\over N}+{\mathcal{O}}({1\over N^2}), \end{aligned} $$
(5.A.13)

where \(C_0=(\ln 2\pi )/2>0\). Here we shall not need the large-N limit.
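Although the large-\(N\) expansion is not needed here, (5.A.12) and (5.A.13) are easy to compare numerically; a small sketch (not from the text, using scipy's log-Gamma function):

```python
# Sketch: x(N) = (1/N) ln Gamma(N+1) from (5.A.12) versus the
# Stirling expansion (5.A.13); the difference should be O(1/N^2).
import numpy as np
from scipy.special import gammaln

C0 = 0.5 * np.log(2.0 * np.pi)
for N in (5, 10, 100, 1000):
    exact = gammaln(N + 1) / N
    stirling = np.log(N) + np.log(N) / (2 * N) - 1 + C0 / N
    print(N, exact, exact - stirling)
```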

With this choice of x(N), we put

$$\displaystyle \begin{aligned}\rho _N(t)=1_{]-\infty ,x(N)]}(t)\widetilde{\nu }_N(t), \end{aligned}$$

so that \(\rho _N(t)\,dt\) is a probability measure “obtained from \(\nu _N(t)\,dt\) by transferring mass to the left”, in the sense that

$$\displaystyle \begin{aligned} \int f\nu _N dt\le \int f\rho _N dt, \end{aligned} $$
(5.A.14)

whenever f is a bounded decreasing function. Equivalently,

$$\displaystyle \begin{aligned}g*\nu _N\le g*\rho _N, \end{aligned}$$

when \(g\) is a bounded increasing function. Now, for such a \(g\), both \(g*\nu _N\) and \(g*\rho _N\) are bounded increasing functions, so by iteration,

$$\displaystyle \begin{aligned}g*\nu _1*\cdots*\nu _N\le g*\rho _1*\cdots*\rho _N. \end{aligned}$$

In particular, taking g = H we get

$$\displaystyle \begin{aligned} \int_{-\infty }^t \nu _1*\cdots*\nu _N(s)ds\le \int_{-\infty }^t \rho _1*\cdots *\rho _N(s)ds, \ t\in {\mathbf{R}}. \end{aligned} $$
(5.A.15)

We have by (5.A.12)

$$\displaystyle \begin{aligned} \begin{aligned} \widehat{\rho }_N(\tau )&=\int_{-\infty }^{x(N)}{1\over (N-1)!}e^{t(N-i\tau )}dt={1\over (N-1)!(N-i\tau )}e^{Nx(N)-ix(N)\tau} \\ &={e^{-ix(N)\tau }\over 1-i{\tau \over N}}. \end{aligned} \end{aligned} $$
(5.A.16)

This function has a pole at τ = −iN.
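The closed form (5.A.16) can be checked by direct quadrature; a sketch (not from the text):

```python
# Sketch: compare rho_N-hat computed by numerical integration over
# ]-infty, x(N)] with the closed form e^{-i x(N) tau} / (1 - i tau / N).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

N = 3
xN = gammaln(N + 1) / N                          # x(N), see (5.A.12)

def rho_hat(tau):
    # integrand e^{Nt}/(N-1)! = exp(N t - gammaln(N)); split into re/im
    re = quad(lambda t: np.exp(N * t - gammaln(N)) * np.cos(t * tau), -50.0, xN)[0]
    im = quad(lambda t: -np.exp(N * t - gammaln(N)) * np.sin(t * tau), -50.0, xN)[0]
    return re + 1j * im

for tau in (0.0, 1.0, 5.0):
    closed = np.exp(-1j * xN * tau) / (1.0 - 1j * tau / N)
    print(tau, abs(rho_hat(tau) - closed))       # ~ 0 up to quadrature error
```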

Similarly,

$$\displaystyle \begin{aligned} \widehat{1_{]-\infty ,a]}}(\tau )={i\over \tau +i0}e^{-ia\tau }. \end{aligned} $$
(5.A.17)

By Parseval’s formula, we get

$$\displaystyle \begin{aligned}\displaystyle \int_{-\infty }^a \rho _1*\cdots*\rho _Ndt={1\over 2\pi }\int_{-\infty }^\infty {\mathcal{F}}(\rho _1*\cdots*\rho _N)(\tau )\overline{\mathcal{F}1_{]-\infty ,a]}}(\tau )d\tau \\\displaystyle ={1\over 2\pi }\int_{-\infty }^{+\infty }e^{-i\tau (\sum_1^Nx(j)-a)}{-i\over \tau -i0}\prod_1^N{1\over 1-{i\tau \over j}}d\tau . \end{aligned} $$

We deform the contour to \(\Im \tau =-1/2\) (half-way between \({\mathbf{R}}\) and the first pole in the lower half-plane). It follows that for \(a\le \sum _1^N x(j):\)

$$\displaystyle \begin{aligned} \int_{-\infty }^a \rho _1*\cdots*\rho _Ndt\le C(N)e^{a/2}. \end{aligned} $$
(5.A.18)

In view of (5.A.15), the right-hand side is an upper bound for the probability that \(\ln \vert \det (u_1\cdots u_N)\vert ^2\le a\). Hence, for a ≤ 0,

$$\displaystyle \begin{aligned}{\mathbf{P}}(\ln \vert \det (u_1\ldots u_N)\vert ^2\le a)\le C(N)e^{a/2}. \end{aligned} $$
(5.A.19)
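Empirically, the tail bound (5.A.19) is easy to observe; the sketch below (not from the text) estimates the left tail of \(\ln \vert \det \vert ^2\) by Monte Carlo. The constant \(C(N)\) is not computed; we only check that \({\mathbf{P}}\cdot e^{-a/2}\) stays bounded as \(a\) decreases (in fact it decreases, since the exponent \(a/2\) is not sharp).

```python
# Sketch: Monte Carlo check that P(ln|det Q|^2 <= a) <= C(N) e^{a/2}.
import numpy as np

N, M = 4, 200_000
rng = np.random.default_rng(2)
Q = (rng.normal(0.0, np.sqrt(0.5), (M, N, N))
     + 1j * rng.normal(0.0, np.sqrt(0.5), (M, N, N)))
L = 2.0 * np.linalg.slogdet(Q)[1]                # ln|det Q|^2

for a in (-2.0, -4.0, -6.0):
    p = np.mean(L <= a)
    print(f"a={a:5.1f}  P={p:.2e}  P*exp(-a/2)={p * np.exp(-a / 2):.4f}")
```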

We shall next extend our probabilistic bounds to determinants of the form

$$\displaystyle \begin{aligned}\det (D+Q) \end{aligned}$$

where \(Q=(u_1\ldots u_N)\) is as before and \(D=(d_1\ldots d_N)\) is a fixed complex \(N\times N\) matrix. As before, we can write

$$\displaystyle \begin{aligned}\vert \det ((d_1+u_1)\ldots(d_N+u_N))\vert ^2=\vert d_1+u_1\vert ^2\vert \widetilde{d}_2+\widetilde{u}_2\vert ^2\cdots \vert \widetilde{d}_N+\widetilde{u}_N\vert ^2, \end{aligned}$$

where \(\widetilde {d}_2=\widetilde {d}_2(u_1)\), \(\widetilde {u}_2=\widetilde {u}_2(u_1,u_2)\) are the orthogonal projections of \(d_2\), \(u_2\) onto \((d_1+u_1)^\perp \); \(\widetilde {d}_3=\widetilde {d}_3(u_1,u_2)\), \(\widetilde {u}_3=\widetilde {u}_3(u_1,u_2,u_3)\) are the orthogonal projections of \(d_3\), \(u_3\) onto \((d_1+u_1,d_2+u_2)^\perp \), and so on.

Let \(\nu _d^{(N)}(t)\,dt\) be the probability distribution of \(\ln \vert d+u\vert ^2 \), when \(d\in {\mathbf{C}}^N\) is fixed and \(u\in {\mathbf{C}}^N\) is random as in (5.A.1), (5.A.2). Notice that \(\nu _0^{(N)}(t)=\nu ^{(N)}(t)=\nu _N(t)\) is the density (5.A.6) that we have already studied.

Lemma 5.A.1

For every \(a\in {\mathbf{R}}\), we have

$$\displaystyle \begin{aligned}\int_{-\infty }^a \nu _d^{(N)}(t)dt\le \int_{-\infty }^a \nu ^{(N)}(t)dt.\end{aligned}$$

Proof

Equivalently, we have to show that \({\mathbf {P}}(\vert d+u\vert ^2\le \widetilde {a})\le {\mathbf {P}}(\vert u\vert ^2\le \widetilde {a})\) for every \(\widetilde {a}>0\). For this, we may assume that \(d=(c,0,\ldots ,0)\), \(c>0\), by the unitary invariance of the distribution of \(u\). Conditioning on \(\Im u_1\) and on \(u_2,\ldots ,u_N\), we then only have to prove that

$$\displaystyle \begin{aligned}{\mathbf{P}}(\vert c+\Re u_1\vert ^2\le b^2)\le {\mathbf{P}}(\vert \Re u_1\vert ^2\le b^2),\ b>0, \end{aligned}$$

and here we may replace P by the corresponding probability density

$$\displaystyle \begin{aligned}\mu (t)dt={1\over {\sqrt{\pi }}}e^{-t^2}dt \end{aligned}$$

for \(\Re u_1\). Thus, we have to show that

$$\displaystyle \begin{aligned} {1\over \sqrt{\pi }}\int_{\vert c+t\vert \le b}e^{-t^2}dt\le {1\over \sqrt{\pi }}\int_{\vert t\vert \le b}e^{-t^2}dt . \end{aligned} $$
(5.A.20)

Fix b and rewrite the left-hand side as

$$\displaystyle \begin{aligned}I(c)={1\over \sqrt{\pi }}\int_{-b-c}^{b-c}e^{-t^2}dt. \end{aligned}$$

The derivative satisfies

$$\displaystyle \begin{aligned}I'(c)={1\over {\sqrt{\pi }}}(e^{-(b+c)^2}-e^{-(b-c)^2})\le 0. \end{aligned}$$

Hence \(c\mapsto I(c)\) is decreasing for \(c\ge 0\), and (5.A.20) follows, since it holds with equality when \(c=0\). □
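The monotonicity used in the proof can be confirmed directly, since \(I(c)\) has a closed form in terms of the error function; a sketch (not from the text):

```python
# Sketch: I(c) = (1/sqrt(pi)) int_{-b-c}^{b-c} e^{-t^2} dt
#              = (erf(b - c) - erf(-b - c)) / 2, decreasing in c >= 0.
import numpy as np
from scipy.special import erf

def I(c, b):
    return 0.5 * (erf(b - c) - erf(-b - c))

for b in (0.5, 1.0, 2.0):
    vals = [I(c, b) for c in (0.0, 0.5, 1.0, 3.0)]
    assert all(x >= y for x, y in zip(vals, vals[1:]))   # (5.A.20)
    print(b, np.round(vals, 4))
```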

Now consider the probability that \(\ln \vert \det (D+Q)\vert ^2\le a\). If \(\chi _a(t)=H(a-t)\), this probability becomes

$$\displaystyle \begin{aligned}\begin{aligned} & \int \cdots\int {\mathbf{P}}(du_1)\cdots {\mathbf{P}}(du_N)\chi _a\big(\ln \vert d_1+u_1\vert ^2+\ln \vert \widetilde{d}_2(u_1)+\widetilde{u}_2(u_1,u_2)\vert ^2+\cdots\\ &+\ln \vert \widetilde{d}_N(u_1,\cdots ,u_{N-1})+\widetilde{u}_N(u_1,\cdots,u_N)\vert ^2\big). \end{aligned}\end{aligned}$$

Here we first carry out the integration with respect to \(u_N\), noticing that with the other \(u_1,\ldots ,u_{N-1}\) fixed, we may consider \(\widetilde {d}_N(u_1,\ldots ,u_{N-1})\) as a fixed vector in \({\mathbf{C}}\simeq (d_1+u_1,\ldots ,d_{N-1}+u_{N-1})^\perp \) and \(\widetilde {u}_N\) as a random vector in \({\mathbf{C}}\). Using also the lemma (the function \(t\mapsto \chi _a(s+t)\) is bounded and decreasing), we get the upper bound

$$\displaystyle \begin{aligned}\begin{aligned} &\int \cdots \int {\mathbf{P}}(du_1)\cdots {\mathbf{P}}(du_{N-1})\int \nu _1(t)\, \chi _a\big(\ln \vert d_1+u_1\vert ^2+\cdots \\ &+\ln \vert \widetilde{d}_{N-1}+\widetilde{u}_{N-1}\vert ^2+t\big)\,dt. \end{aligned}\end{aligned}$$

We next estimate the \(u_{N-1}\)-integral in the same way, and so on. Eventually, we get:

Proposition 5.A.2

Under the assumptions above, for every \(a\in {\mathbf{R}}\),

$$\displaystyle \begin{aligned} {\mathbf{P}}\big(\ln \vert \det (D+Q)\vert ^2\le a\big)\le \int_{-\infty }^a \nu _1*\cdots *\nu _N(t)\,dt. \end{aligned}$$

In particular, the estimate (5.A.19) extends to random perturbations of constant matrices:

$$\displaystyle \begin{aligned} {\mathbf{P}}\big(\ln \vert \det (D+Q)\vert ^2\le a\big)\le C(N)e^{a/2},\quad a\le 0. \end{aligned} $$
(5.A.21)
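In other words, adding a fixed matrix \(D\) can only make a very small determinant less likely than in the case \(D=0\). A quick Monte Carlo illustration (a sketch, not from the text; the matrix \(D\) below is an arbitrary choice for illustration):

```python
# Sketch: P(ln|det(D+Q)|^2 <= a) <= P(ln|det Q|^2 <= a), up to MC error.
import numpy as np

N, M = 4, 200_000
rng = np.random.default_rng(3)
D = np.diag(np.linspace(0.0, 2.0, N)).astype(complex)   # arbitrary fixed D

Q = (rng.normal(0.0, np.sqrt(0.5), (M, N, N))
     + 1j * rng.normal(0.0, np.sqrt(0.5), (M, N, N)))
L0 = 2.0 * np.linalg.slogdet(Q)[1]                      # D = 0
L1 = 2.0 * np.linalg.slogdet(D + Q)[1]                  # fixed D

for a in (-2.0, -4.0, -6.0):
    print(a, np.mean(L1 <= a), "<=", np.mean(L0 <= a))
```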

Notes

In this chapter we have generalized the results of Chap. 3 and tried to explain the role of the points \(\rho \) in the cotangent space where \(i^{-1}\{ p, \overline {p} \}(\rho )\) is positive or negative. In higher dimensions this simple reduction via a Grushin problem seems to break down. The results are close to the ones in [55], but we have not included the case of multiplicative random perturbations, since we will deal with that case in later chapters, directly in higher dimensions. It was convenient to use some elements from later works, such as singular values. In particular, in Sect. 5.A, we have followed [56].


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this chapter

Sjöstrand, J. (2019). Spectral Asymptotics for More General Operators in One Dimension. In: Non-Self-Adjoint Differential Operators, Spectral Asymptotics and Random Perturbations. Pseudo-Differential Operators, vol 14. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-10819-9_5
