1 Introduction and main results

This paper is concerned with a boundary value problem associated with the conformal deformation of metrics. Let \(\mathbb{B}^{n}\) be the unit ball of \(\mathbb{R}^{n}\), \(n\geq3\), equipped with its Euclidean metric \(g_{0}\). Its boundary \(\mathbb{S}^{n-1}\) is endowed with the standard metric, still denoted by \(g_{0}\). Given a function \(H: \mathbb {S}^{n-1}\longrightarrow\mathbb{R}\), we study the problem of finding a conformal metric \(g= u^{\frac{4}{n-2}}g_{0}\) whose scalar curvature vanishes in \(\mathbb{B}^{n}\) and whose mean curvature on \(\mathbb{S}^{n-1}\) equals H. More precisely, we investigate the existence of solutions of the following nonlinear PDE with the Sobolev trace critical exponent:

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} \Delta u = 0 & \hbox{in } \mathbb{B}^{n},\\ \frac{\partial u}{\partial\nu} + \frac{n-2}{2}u = \frac {n-2}{2}H u^{\frac{n}{n-2}} & \hbox{on } \mathbb{S}^{n-1}, \end{array}\displaystyle \right . $$
(1.1)

where ν is the outward unit normal vector on \(\mathbb{S}^{n-1}\) with respect to the metric \(g_{0}\).

Equation (1.1) has a variational structure: there is a correspondence between the solutions of (1.1) and the positive critical points of the Euler-Lagrange functional J associated with problem (1.1), defined in Section 2 of this paper. Due to the presence of the critical exponent in the boundary condition of (1.1), the functional J fails to satisfy the Palais-Smale condition. From the variational viewpoint, this means that loss of compactness and blow-up phenomena occur. This fact follows from the noncompactness of the trace Sobolev embedding \(H^{1}(\mathbb {B}^{n})\hookrightarrow L^{\frac{2(n-1)}{n-2}}(S^{n-1})\).

Besides the obvious necessary condition that H must be positive somewhere, there is a Kazdan-Warner-type obstruction to solving the problem; see [1]. Many works were devoted to understanding under which conditions on H equation (1.1) is solvable. See [2] and [3] for \(n=3\), [4] and [5] for \(n=4\), and [6–12] for higher-dimensional cases. For related problems, we refer to [6, 13–22].

Abdelhedi and Chtioui [23] gave an existence result for problem (1.1) in dimension \(n\geq4\) through an Euler-Hopf criterion reminiscent of the one given by Li [20] for the prescribed scalar curvature problem on \(S^{n}\), \(n\geq3\). Their main assumption is the so-called β-flatness condition. Namely, let \(H: \mathbb {S}^{n-1}\rightarrow\mathbb{R}\) be a \(C^{1}\) positive function. We say that H satisfies the β-flatness condition \((f)_{\beta}\) if, for each critical point y of H, there exists a real number \(\beta= \beta(y)\) such that, in some geodesic normal coordinates centered at y, we have

$$ H(x) = H(y) + \sum_{i=1}^{n-1}b_{i} \bigl\vert (x-y)_{i} \bigr\vert ^{\beta}+ R(x-y), $$
(1.2)

where \(b_{i}=b_{i}(y)\in\mathbb{R}^{*}\), \(\sum_{i=1}^{n-1}b_{i}\neq0\), and \(\sum_{s=0}^{[\beta]}\vert \nabla^{s}R(z)\vert \vert z\vert ^{-\beta+s}=o(1)\) as z goes to zero. Here \(\nabla^{s}\) denotes all possible derivatives of order s, and \([\beta]\) is the integer part of β.
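As a quick numerical sanity check (not part of the paper's argument), one can verify that the model function \(H(x)=H(y)+\sum_{i}b_{i}\vert x_{i}\vert ^{\beta}\) from (1.2) with \(R\equiv0\) has gradient components \(\beta b_{k}\operatorname{sign}(x_{k})\vert x_{k}\vert ^{\beta-1}\), the expansion used later in (3.8). The values of \(b_{i}\) and β below are purely illustrative.

```python
import numpy as np

# Model beta-flat function near y = 0 with remainder R = 0; b and beta illustrative.
b = np.array([1.0, -2.0, 0.5])          # b_i in R^*, with sum(b) != 0
beta = 3.5                               # flatness order

def H(x):
    return 1.0 + np.sum(b * np.abs(x) ** beta)

def grad_H_exact(x):
    # Expected gradient: beta * b_k * sign(x_k) * |x_k|^(beta-1)
    return beta * b * np.sign(x) * np.abs(x) ** (beta - 1)

x = np.array([0.3, -0.2, 0.1])
h = 1e-6
grad_fd = np.array([(H(x + h * e) - H(x - h * e)) / (2 * h) for e in np.eye(3)])
assert np.allclose(grad_fd, grad_H_exact(x), atol=1e-5)
```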

Set

$$\mathcal{K}= \bigl\{ y\in S^{n-1}, \nabla H(y)=0 \bigr\} $$

and, for any \(y\in\mathcal{K}\), denote

$$\widetilde{i}(y)= \sharp \bigl\{ b_{k}(y), 1\leq k \leq n-1, \mbox{ s.t. } b_{k}(y)< 0 \bigr\} . $$

Then, (1.1) has a solution, provided that

$$n-2< \beta< n-1 \quad \mbox{and} \quad \sum_{y\in\mathcal {K}^{+}}(-1)^{\widetilde{i}(y)} \neq(-1)^{n-1}, $$

where \(\mathcal{K}^{+}= \{y\in\mathcal{K}, \sum_{k=1}^{n-1} b_{k}(y)<0 \}\); see [23].
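For illustration, the index count in this criterion can be evaluated mechanically: given the vectors \(b(y)\) at the critical points, one selects the points of \(\mathcal{K}^{+}\) and sums \((-1)^{\widetilde{i}(y)}\). The critical-point data below are hypothetical, chosen only to show the computation for \(n=4\).

```python
# Hypothetical critical-point data on S^{n-1} for n = 4: each entry is the
# vector (b_1, ..., b_{n-1}) at one critical point y of H.
n = 4
crit_pts = {
    "y1": [-1.0, -2.0, -3.0],   # sum < 0 -> in K^+, i~(y) = 3
    "y2": [1.0, -2.0, -3.0],    # sum < 0 -> in K^+, i~(y) = 2
    "y3": [4.0, 1.0, 2.0],      # sum > 0 -> not in K^+
}

total = sum(
    (-1) ** sum(1 for bk in b if bk < 0)      # (-1)^{i~(y)}
    for b in crit_pts.values()
    if sum(b) < 0                              # keep only y in K^+
)
# The existence criterion asks that this total differ from (-1)^{n-1}.
assert total == 0
assert total != (-1) ** (n - 1)
```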

This result was extended in [24] for \(n-2\leq\beta< n-1\), in [25] for \(1<\beta\leq n-2\), and in [10] for \(1<\beta \leq n-1\) with an additional assumption that H is close to 1.

Aiming to include a larger class of functions H in the existence results for (1.1), we continue in this paper our study of problem (1.1) under the \((f)_{\beta}\)-condition. We are interested here in the case \(\beta>n-1\). We extend the computations of [23] and [10] to any order \(\beta> n-1\). As an application, we describe the lack of compactness of the problem and provide existence results for some ranges of β. More precisely, we prove the following theorems.

Theorem 1.1

Let \(H: S^{n-1}\rightarrow\mathbb{R}\), \(n\geq3\), be a \(C^{1}\)-positive function satisfying \((f)_{\beta}\)-condition. There exists a positive constant η such that if

$$n-2< \beta< (n-1)+ \eta \quad \textit{and}\quad \sum_{y\in\mathcal {K}^{+}}(-1)^{\widetilde{i}(y)} \neq(-1)^{n-1}, $$

then (1.1) has a solution. Moreover, for generic H, we have

$$\mathcal{N}\geq \biggl\vert (-1)^{n-1}- \sum _{y\in\mathcal {K}^{+}}(-1)^{\widetilde{i}(y)} \biggr\vert . $$

Here \(\mathcal{N}\) denotes the number of solutions of (1.1).

Theorem 1.2

Let \(H: S^{n-1}\rightarrow\mathbb{R}\), \(n\geq3\), be a \(C^{1}\)-function satisfying \((f)_{\beta}\)-condition and close to 1. There exists a positive constant η such that if

$$1< \beta< (n-1)+ \eta \quad \textit{and} \quad \sum_{y\in\mathcal {K}^{+}}(-1)^{\widetilde{i}(y)} \neq(-1)^{n-1}, $$

then (1.1) has a solution. Moreover, if we assume that \(\beta> \frac{n-2}{2}\), then for generic H, we get

$$\mathcal{N}\geq \biggl\vert (-1)^{n-1}- \sum _{y\in\mathcal {K}^{+}}(-1)^{\widetilde{i}(y)} \biggr\vert . $$

Our method to prove Theorems 1.1 and 1.2 is based on techniques related to the theory of critical points at infinity of Bahri [26]. In Section 2, we state some preliminaries that prepare the ground for applying the approach of Bahri. In Section 3, we perform an expansion at infinity of the gradient vector field of J, extending the expansions performed in [23] and [10] to any order \(\beta>n-1\). In Section 4, we describe the concentration phenomenon of the problem and characterize the critical points at infinity associated with (1.1). Lastly, in Section 5, we provide the proofs of Theorems 1.1 and 1.2.

2 Preliminary tools

The Euler-Lagrange functional associated with (1.1) is

$$J(u)= \biggl( \int_{S^{n-1}}H u^{\frac{2(n-1)}{n-2}} \,d \sigma_{g_{0}} \biggr)^{\frac{2-n}{n-1}}, $$

defined on Σ, the unit sphere of \(H^{1}(\mathbb{B}^{n})\) equipped with the norm

$$\Vert u\Vert ^{2}= \int_{\mathbb{B}^{n}}\vert \nabla u\vert ^{2} \,dv_{g_{0}} + \frac {n-2}{2} \int_{S^{n-1}} u^{2} \,d\sigma_{g_{0}}. $$

Problem (1.1) is equivalent to finding critical points of J subject to the constraint \(u\in\Sigma^{+}= \{u\in\Sigma, u\geq 0\}\). The functional J does not satisfy the Palais-Smale condition on \(\Sigma^{+}\). The next proposition characterizes the sequences failing the Palais-Smale condition. By a stereographic projection through an appropriate point of \(S^{n-1}\), we can reduce the problem to \(\mathbb {R}^{n}_{+} =\{x=(x', x_{n})\in\mathbb{R}^{n}, x_{n}> 0\}\). Accordingly, we will identify the function H with its composition with the stereographic projection π, and we will also identify a point \(x \in\mathbb{B}^{n}\) with its image under π. See [4], p.1316, for the expansion of π. For \(a\in\partial\mathbb{R}^{n}_{+}\) and \(\lambda>0\), let

$$\widetilde{\delta}_{(a, \lambda)}(x)= c_{0}\frac{\lambda^{\frac {n-2}{2}}}{ ((1+\lambda x_{n})^{2} + \lambda^{2}\vert x'-a\vert ^{2} )^{\frac{n-2}{2}}}, $$

where \(x\in\mathbb{R}^{n}_{+}\), and \(c_{0}\) is chosen such that \(\widetilde {\delta}_{(a, \lambda)}\) satisfies

$$ \left \{ \textstyle\begin{array}{l@{\quad}l} \Delta u = 0 \quad \mbox{and}\quad u>0 & \mbox{on } \mathbb{R}^{n}_{+},\\ -\frac{\partial u}{\partial x_{n}} = u^{\frac{n}{n-2}} & \hbox{on } \partial\mathbb{R}^{n}_{+}. \end{array}\displaystyle \right . $$
(2.1)
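The normalization of \(c_{0}\) can be checked symbolically. The sketch below (for \(n=4\) and center \(a=0\)) assumes the standard value \(c_{0}=(n-2)^{\frac{n-2}{2}}\), which the text only specifies implicitly through (2.1); it verifies that \(\widetilde{\delta}_{(0,\lambda)}\) is harmonic in \(\mathbb{R}^{n}_{+}\) and satisfies the boundary condition.

```python
import sympy as sp

n = 4
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
lam = sp.symbols('lam', positive=True)

c0 = (n - 2) ** sp.Rational(n - 2, 2)          # assumed standard normalization
D = (1 + lam * x4) ** 2 + lam ** 2 * (x1 ** 2 + x2 ** 2 + x3 ** 2)
u = c0 * lam ** sp.Rational(n - 2, 2) / D ** sp.Rational(n - 2, 2)

# u is harmonic in R^n_+ ...
laplacian = sum(sp.diff(u, v, 2) for v in (x1, x2, x3, x4))
assert sp.simplify(laplacian) == 0

# ... and satisfies -du/dx_n = u^{n/(n-2)} on {x_n = 0}.
boundary = (-sp.diff(u, x4) - u ** sp.Rational(n, n - 2)).subs(x4, 0)
assert sp.simplify(boundary) == 0
```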

Let \({\delta}_{(a, \lambda)} \) be the pull-back of \(\widetilde {\delta}_{(a, \lambda)}\) by the stereographic projection. For \(\varepsilon>0\) and \(p \in\mathbb{N}^{*}\), let us define

$$V(p,\varepsilon)= \textstyle\begin{cases}u \in\Sigma / \exists\ a_{1},\ldots,a_{p}\in S^{n-1}, \exists \lambda_{1},\ldots, \lambda_{p}>\varepsilon^{-1},\\ \exists \alpha_{1},\ldots, \alpha_{p}>0 \mbox{ satisfying }\Vert u- \sum_{i=1}^{p} \alpha_{i} \delta_{ (a_{i}, \lambda_{i})}\Vert < \varepsilon, \\ \vert \frac{\alpha_{i}^{\frac{2}{n-2}} H(a_{i})}{\alpha_{j}^{\frac {2}{n-2}} H(a_{j})}-1\vert < \varepsilon \ \forall i, j \mbox{ and } \varepsilon_{ij}< \varepsilon\ \forall i\neq j, \end{cases} $$

where \(\varepsilon_{ij}= [ \frac{ \lambda_{i}}{ \lambda _{j}}+ \frac{ \lambda_{j}}{ \lambda _{i}}+ \lambda_{i} \lambda_{j} \vert a_{i}- a_{j}\vert ^{2} ]^{\frac{2-n}{2}}\). If w is a solution of (1.1), then we also define \(V(p, \varepsilon, w)\) as

$$V(p, \varepsilon, w)= \bigl\{ u\in\Sigma, \mbox{ s.t. } \exists\alpha _{0}>0 \mbox{ satisfying } u-\alpha_{0}w \in V(p, \varepsilon) \mbox{ and } \bigl\vert \alpha_{0}J(u)^{\frac{n-1}{2}}-1 \bigr\vert < \varepsilon \bigr\} . $$
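The interaction parameter \(\varepsilon_{ij}\) can be computed directly from its definition. The helper below (names illustrative, not from the paper) checks two basic properties used implicitly throughout: symmetry in \((i,j)\) and decay as the concentration rates \(\lambda\) grow, for distinct concentration points.

```python
import numpy as np

def eps_ij(a_i, a_j, lam_i, lam_j, n):
    """Interaction between the bubbles delta_(a_i,lam_i) and delta_(a_j,lam_j)."""
    q = lam_i / lam_j + lam_j / lam_i + lam_i * lam_j * np.sum((a_i - a_j) ** 2)
    return q ** (-(n - 2) / 2.0)

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
n = 4

# eps_ij is symmetric in (i, j) ...
assert np.isclose(eps_ij(a1, a2, 10.0, 20.0, n), eps_ij(a2, a1, 20.0, 10.0, n))
# ... and tends to zero as the concentration rates grow, for distinct points:
assert eps_ij(a1, a2, 1e3, 1e3, n) < eps_ij(a1, a2, 10.0, 10.0, n) < 1.0
```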

Proposition 2.1

[14, 27]

Let \((u_{k})\) be a sequence in \(\Sigma^{+}\) such that \(J(u_{k})\) is bounded and \(\partial J(u_{k})\) goes to zero. Then there exist an integer \(p \in\mathbb{N}^{*}\), a sequence \((\varepsilon_{k}) >0\) such that \(\varepsilon_{k}\) tends to zero, and an extracted subsequence of \((u_{k})\), again denoted \((u_{k})\), such that \(u_{k} \in V(p,\varepsilon_{k}, w)\) for all \(k\in\mathbb{N}\).

Here w is either a solution of (1.1) or zero, with the convention \(V(p, \varepsilon, 0)= V(p, \varepsilon)\).

For \(u\in V(p, \varepsilon, w)\), we can find an optimal representation. Namely, we have the following:

Proposition 2.2

[14, 26]

For any \(p \in\mathbb{N}^{*}\), there is \(\varepsilon_{p}>0\) such that if \(\varepsilon\leq\varepsilon_{p}\) and \(u\in V(p,\varepsilon,w)\), then the minimization problem

$$\mathop{\min_{\alpha_{i}>0,\lambda_{i}>0,a_{i}\in S^{n-1}}}_{h\in T_{w} (W_{u}(w) )} \Biggl\Vert u -\sum_{i=1}^{p} \alpha_{i} \delta_{(a_{i},\lambda_{i})}-\alpha_{0}(w+h) \Biggr\Vert , $$

has a unique solution \((\alpha,\lambda,a,h)\), up to a permutation.

In particular, we can write u as follows:

$$u=\sum_{i=1}^{p}\alpha_{i} \delta_{(a_{i}, \lambda_{i})}+\alpha_{0} (w+h)+v, $$

where v belongs to \(H^{1}(\mathbb {B}^{n})\cap T_{w}(W_{s}(w))\) and satisfies \((V_{0})\). Here \(T_{w}(W_{u}(w))\) and \(T_{w}(W_{s}(w))\) are the tangent spaces at w of the unstable and stable manifolds of w for a decreasing pseudo-gradient of J (see [14] for the definitions), and \((V_{0})\) is the following condition:

$$(V_{0}): \textstyle\begin{cases} \langle v ,\psi \rangle=0 &\mbox{for } \psi\in\{ \delta_{i},\frac{ \partial \delta_{i}}{ \partial\lambda_{i}}, \frac{ \partial\delta_{i}}{\partial a_{i}},i=1,\ldots,p\},\\ \langle v,w \rangle=0,\\ \langle v,h \rangle=0 & \mbox{for all } h\in T_{w}W_{u}(w), \end{cases} $$

where \(\delta_{i} =\delta_{(a_{i},\lambda_{i})}\), and \(\langle\cdot,\cdot\rangle\) denotes the scalar product defined on \(H^{1}(\mathbb {B}^{n}) \) by

$$\langle u, v\rangle= \int_{\mathbb{B}^{n}}\nabla u\nabla v \,dv_{g_{0}} + \frac{n-2}{2} \int_{\mathbb{S}^{n-1}}uv \,d\sigma_{g_{0}}. $$

Notice that Proposition 2.2 also holds if we take \(w=0\); in that case, \(h=0\), and u is in \(V(p,\varepsilon)\).

We also have the following Morse lemma, which gets rid of the v-contribution entirely and shows that it can be neglected in the study of the concentration phenomenon.

Proposition 2.3

[14]

There is a \(\mathcal{C}^{1}\)-map that to each \((\alpha_{i}, a_{i}, \lambda_{i}, h)\) such that \(\sum_{i=1}^{p} \alpha_{i} \delta_{(a_{i},\lambda_{i})}+\alpha_{0}(w+h)\) belongs to \(V(p, \varepsilon, w)\) associates \(\overline{v}=\overline{v}(\alpha, a, \lambda, h)\), which is unique and satisfies

$$J \Biggl(\sum_{i=1}^{p} \alpha_{i}\delta_{(a_{i}, \lambda_{i})}+\alpha_{0}(w+h)+\overline {v} \Biggr)=\min_{v \in(V_{0})} \Biggl\{ J \Biggl(\sum _{i=1}^{p} \alpha_{i} \delta_{(a_{i}, \lambda_{i})}+\alpha_{0}(w+h)+v \Biggr) \Biggr\} . $$

Moreover, there exists a change of variables \(v-\overline{v}\rightarrow V\) such that

$$J \Biggl(\sum_{i=1}^{p} \alpha_{i}\delta_{(a_{i}, \lambda_{i})}+\alpha_{0}(w+h)+v \Biggr)=J \Biggl(\sum_{i=1}^{p} \alpha_{i} \delta_{(a_{i}, \lambda_{i})}+\alpha_{0}(w+h)+\overline {v} \Biggr)+\Vert V \Vert ^{2}. $$

At the end of this section, we give the definition of a critical point at infinity.

Definition 2.4

[26]

A critical point at infinity of J on \(\Sigma^{+}\) is a limit of a flow line \(u(s)\) of the equation

$$\textstyle\begin{cases} \frac{\partial u}{\partial s}=-\partial J(u(s)),\\ u(0)=u_{0}, \end{cases} $$

such that \(u(s)\) remains in \(V(p,\varepsilon(s),w)\) for \(s\geq s_{0}\). Here w is either zero or a solution of (1.1), and \(\varepsilon(s)\) is a positive function tending to zero as \(s\rightarrow+\infty\). Using Proposition 2.2, we can write \(u(s)\) as

$$u(s)=\sum_{i=1}^{p}\alpha_{i}(s) \delta_{( a_{i}(s),\lambda_{i}(s))}+\alpha_{0}(s) \bigl(w+h(s) \bigr)+v(s). $$

Setting \(\widetilde{\alpha}_{i}:=\lim_{s \longrightarrow +\infty} \alpha_{i}(s)\) and \(\widetilde{y}_{i}:=\lim_{s \longrightarrow+ \infty} a_{i}(s)\), we denote by

$$\sum_{i=1}^{p}\widetilde{\alpha}_{i}\delta_{( \widetilde{y}_{i},\infty)}+\widetilde{\alpha}_{0}w \quad \mbox{or} \quad (\widetilde{y}_{1},\ldots,\widetilde{y}_{p},w)_{\infty} $$

such a critical point at infinity. If \(w\neq0\), then it is said to be of w-type.

3 Asymptotic expansions

In this section, we expand the gradient of J near infinity under the assumption that H satisfies the \((f)_{\beta}\)-condition. We provide precise estimates of this expansion for any flatness order \(\beta>n-1\) and improve the previous estimates given in [23] and [10] for \(\beta\leq n-1\). These estimates will be useful to describe the lack of compactness of the problem and thus to characterize the critical points at infinity of J. In what follows, we write \(\delta_{i}\) instead of \(\delta_{(a_{i},\lambda_{i})}\).

Proposition 3.1

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). For any i, \(1\leq i \leq p\), such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(\beta(y_{\ell_{i}})>n-1\), we have the following two expansions:

$$\begin{aligned} \mathrm{(i)}\quad \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} + O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}- y_{\ell _{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}} \Biggr) \\ &{}+ O \biggl( \frac{\log\lambda_{i}}{\lambda_{i}^{n-1}} \biggr) + o \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.1)

Moreover, if \(\lambda_{i}^{n-1} \vert a_{i}-y_{\ell_{i}}\vert ^{\beta}< \delta \), where δ is a fixed very small positive constant, then

$$\begin{aligned} \mathrm{(ii)}\quad \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} \\ &{}+ c_{1}\alpha_{i}^{\frac{n}{n-2}} J(u)^{\frac{2n-3}{n-2}} \biggl[\frac{(\sum_{k=1}^{n-1}b_{k})}{(\beta-(n-1))\lambda_{i}^{n-1}} + O \biggl( \frac{1}{\lambda_{i}^{n-1}} \biggr) \biggr]. \end{aligned}$$
(3.2)

Here \(c_{1}\) and \(c_{2}\) are two positive constants.

Proof

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). Using (3.3), (3.4), and (3.5) of [10], we have

$$\begin{aligned} \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& 2J(u) \biggl[-c_{2} \sum_{j \neq i} \alpha_{j} \lambda_{i}\frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} - \alpha_{i}^{\frac{n}{n-2}} J(u)^{\frac{n-1}{n-2}} \int _{S^{n-1}} H(x) \delta_{i}^{\frac{n}{n-2}} \lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \,d\sigma_{g_{0}} \biggr] \\ &{}+ o \biggl( \sum_{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.3)

It remains to expand

$$I= \int_{S^{n-1}} H(x) \delta_{i}^{\frac{n}{n-2}} \lambda_{i} \frac {\partial \delta_{i}}{\partial\lambda_{i}} = \int_{S^{n-1}} \bigl(H(x)-H(a_{i}) \bigr) \delta_{i}^{\frac{n}{n-2}}\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} $$

since \(\int_{S^{n-1}} \delta_{i}^{\frac{n}{n-2}}\lambda_{i} \frac {\partial \delta_{i}}{\partial\lambda_{i}} =0\). Let \(\mu>0\) be such that \(B(a_{i}, \mu)\subset B(y_{\ell_{i}}, \rho)\). Then

$$I = \int_{B(a_{i}, \mu)} \bigl(H(x)-H(a_{i}) \bigr) \delta_{i}^{\frac {n}{n-2}}\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} + O \biggl(\frac{1}{\lambda _{i}^{n-1}} \biggr). $$

Expanding H around \(a_{i}\), we get

$$ H(x)-H(a_{i})= \sum_{j=1}^{n-1} \frac{D^{j}H(a_{i})(x-a_{i})^{j}}{j!} + O \bigl(\vert x-a_{i}\vert ^{\min\{\beta, n\}} \bigr). $$
(3.4)

We then have

$$H(x)-H(a_{i})= \sum_{j=1}^{n-2} \frac{D^{j}H(a_{i})(x-a_{i})^{j}}{j!} + O \bigl(\vert x-a_{i}\vert ^{n-1} \bigr). $$

Observe that, after stereographic projection,

$$\delta_{i}^{\frac{n}{n-2}}\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} = \frac{n-2}{2} c_{0}^{\frac {2(n-1)}{n-2}} \lambda_{i}^{n-1} \frac{1-\lambda _{i}^{2}\vert x-a_{i}\vert ^{2}}{(1+\lambda_{i}^{2}\vert x-a_{i}\vert ^{2})^{n}}. $$

The change of variables \(z=\lambda_{i}(x-a_{i})\) yields

$$\begin{aligned} I =& \frac{n-2}{2} c_{0}^{\frac{2(n-1)}{n-2}} \sum _{j=1}^{n-2} \int _{B(0, \lambda_{i} \mu)}\frac{D^{j} H(a_{i}) (z)^{j}}{j!\lambda_{i}^{j}} \frac {1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz \\ &{}+ O \biggl( \int_{B(0, \lambda_{i} \mu)}\frac{\vert z\vert ^{n-1}}{\lambda _{i}^{n-1}} \frac{\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) + O \biggl(\frac {1}{\lambda_{i}^{n-1}} \biggr). \end{aligned}$$

Observe that

$$\int_{B(0, \lambda_{i} \mu)} DH(a_{i}) (z) \frac{1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz=0, $$

and, under \((f)_{\beta}\)-condition, for any \(j=1, 2, \ldots, n-1\),

$$ \bigl\vert D^{j} H(a_{i}) \bigr\vert = O \bigl(\vert a_{i}- y_{\ell_{i}}\vert ^{\beta-j} \bigr). $$
(3.5)

Therefore,

$$I= O \Biggl( \sum_{j=2}^{n-2} \frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta-j} }{\lambda_{i}^{j}} \Biggr) + O \biggl(\frac{\log\lambda_{i}}{\lambda _{i}^{n-1}} \biggr) + O \biggl( \frac{1}{\lambda_{i}^{n-1}} \biggr). $$

Hence, claim (i) of Proposition 3.1 follows. To prove (ii), we expand the integral I as follows:

$$\begin{aligned} I =& \int_{S^{n-1}} \bigl(H(x)-H(y_{\ell_{i}}) \bigr) \delta_{i}^{\frac {n}{n-2}}\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \\ =& \int_{B(a_{i}, \mu)} \bigl(H(x)-H(y_{\ell_{i}}) \bigr) \delta_{i}^{\frac {n}{n-2}}\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} + O \biggl(\sup_{S^{n-1}} \bigl\vert H(x)-H(y_{\ell_{i}}) \bigr\vert \int_{c_{B(a_{i}, \mu)}} \biggl\vert \delta_{i}^{\frac {n}{n-2}} \lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\vert \biggr). \end{aligned}$$

Observe that

$$\begin{aligned} O \biggl(\sup_{S^{n-1}} \bigl\vert H(x)-H(y_{\ell_{i}}) \bigr\vert \int_{c_{B(a_{i}, \mu )}} \biggl\vert \delta_{i}^{\frac{n}{n-2}} \lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\vert \biggr) = &O \biggl( \frac{\sup_{S^{n-1}}\vert H(x)-H(y_{\ell_{i}})\vert }{\lambda_{i}^{n-1}} \biggr) \\ =& o \biggl(\frac{1}{\lambda_{i}^{n-1}} \biggr) \end{aligned}$$
(3.6)

as H is close to a constant. Moreover, by \((f)_{\beta}\)-condition we get

$$\begin{aligned} I =& \frac{n-2}{2} c_{0}^{\frac{2(n-1)}{n-2}} \int_{B(a_{i}, \mu)} \sum_{k=1}^{n-1} b_{k} \bigl\vert (x-y_{\ell_{i}})_{k} \bigr\vert ^{\beta}\frac{1-\lambda _{i}^{2}\vert x-a_{i}\vert ^{2}}{(1+\lambda_{i}^{2}\vert x-a_{i}\vert ^{2})^{n}} \lambda_{i}^{n-1} \,dx \\ &{}+ o \biggl( \int_{B(a_{i}, \mu)} \vert x-y_{\ell_{i}}\vert ^{\beta}\frac{1-\lambda _{i}^{2}\vert x-a_{i}\vert ^{2}}{(1+\lambda_{i}^{2}\vert x-a_{i}\vert ^{2})^{n}} \lambda_{i}^{n-1} \,dx \biggr)+ o \biggl( \frac{1}{\lambda_{i}^{n-1}} \biggr). \end{aligned}$$

After the change of variables \(z= \lambda_{i}(x-a_{i})\),

$$\begin{aligned} I =& \frac{n-2}{2} c_{0}^{\frac{2(n-1)}{n-2}} \frac{1}{\lambda_{i}^{\beta}} \sum_{k=1}^{n-1} \int_{B(0, \lambda_{i} \mu)} b_{k} \bigl\vert z_{k}+ \lambda _{i}(a_{i}-y_{\ell_{i}})_{k} \bigr\vert ^{\beta}\frac{1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz \\ &{} + o \biggl(\frac{1}{\lambda_{i}^{\beta}} \int_{B(0, \lambda_{i} \mu)} \vert z\vert ^{\beta}\frac{\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) + o \biggl(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\int_{\mathbb{R}^{n}} \frac {\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) + o \biggl( \frac{1}{\lambda _{i}^{n-1}} \biggr) \\ =&\frac{n-2}{2} c_{0}^{\frac{2(n-1)}{n-2}} \frac{1}{\lambda_{i}^{\beta}} \sum_{k=1}^{n-1} b_{k} \int_{B(0, \lambda_{i} \mu)} \vert z_{k}\vert ^{\beta}\frac {1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz \\ &{}+ O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert }{\lambda _{i}^{\beta-1}} \int_{B(0, \lambda_{i} \mu)} \vert z\vert ^{\beta-1} \frac {\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) + O \biggl(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\int_{\mathbb{R}^{n}} \frac {\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) \\ &{} + o \biggl( \frac{1}{\lambda_{i}^{\beta}} \int_{B(0, \lambda_{i} \mu)} \vert z\vert ^{\beta}\frac {\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz \biggr) + o \biggl(\frac{1}{\lambda _{i}^{n-1}} \biggr). \end{aligned}$$

An elementary computation shows that

$$\int_{B(0, \lambda_{i} \mu)} \vert z_{k}\vert ^{\beta}\frac{1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz = - \frac{c}{(\beta-(n-1))\lambda_{i}^{n-1-\beta}} +O(1), $$

where c is a positive constant (since \(\beta>n-1\)) independent of k, and thus

$$\int_{B(0, \lambda_{i} \mu)} \vert z\vert ^{\beta-1} \frac {\vert 1-\vert z\vert ^{2}\vert }{(1+\vert z\vert ^{2})^{n}} \,dz = \left \{ \textstyle\begin{array}{l@{\quad}l} O (\frac{1}{\lambda_{i}^{n-\beta}} ) + O(1)& \hbox{if } \beta\neq n,\\ O (\log\lambda_{i} ) & \hbox{if } \beta= n. \end{array}\displaystyle \right . $$

Hence,

$$\begin{aligned} I =&- c_{1} \frac{\sum_{k=1}^{n-1} b_{k}}{(\beta-(n-1))\lambda_{i}^{n-1}} + O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert }{\lambda_{i}^{n-1}} + \frac {\vert a_{i}-y_{\ell_{i}}\vert }{\lambda_{i}^{\beta-1}} \biggr) \\ &{} + O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert \log\lambda_{i}}{\lambda_{i}^{\beta -1}}, \mbox{if } \beta=n \biggr) + O \bigl(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\bigr) + o \biggl( \frac{1}{\lambda_{i}^{n-1}} \biggr). \end{aligned}$$

In the case where \(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}<\delta\), δ very small, we have

$$\begin{aligned}& O \bigl(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\bigr)= o \biggl(\frac{1}{\lambda _{i}^{n-1}} \biggr), \quad \mbox{taking $\delta$ small enough}, \\ & O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert }{\lambda_{i}^{n-1}} \biggr) = o \biggl(\frac{1}{\lambda_{i}^{n-1 + \frac{n-1}{\beta}}} \biggr)=o \biggl(\frac {1}{\lambda_{i}^{n-1}} \biggr), \\ & O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert }{\lambda_{i}^{\beta-1}} \biggr) = o \biggl(\frac{1}{\lambda_{i}^{\beta-1 + \frac{n-1}{\beta}}} \biggr)=o \biggl(\frac{1}{\lambda_{i}^{n-1}} \biggr)\quad \mbox{since } \beta>n-1, \\ & O \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert \log\lambda_{i}}{\lambda_{i}^{\beta -1}} \biggr)= o \biggl(\frac{\log\lambda_{i}}{\lambda_{i}^{n-1 + \frac {n-1}{\beta}}} \biggr)=o \biggl(\frac{1}{\lambda_{i}^{n-1}} \biggr). \end{aligned}$$

This concludes the proof of Proposition 3.1. □
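The leading behaviour of the key integral in the proof above can be checked numerically. The sketch below is a sanity check only, not part of the proof: it takes \(n=3\) (so the integration is over \(\mathbb{R}^{n-1}=\mathbb{R}^{2}\)) and an illustrative \(\beta=3.5>n-1\), and verifies that \(\int_{B(0,R)} \vert z_{k}\vert ^{\beta}\frac{1-\vert z\vert ^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz \approx -A \frac{R^{\beta-(n-1)}}{\beta-(n-1)}\) for a positive constant A, in line with the computation in the proof (with \(R=\lambda_{i}\mu\)).

```python
import numpy as np
from scipy.integrate import quad

n, beta = 3, 3.5                      # dimension and flatness order, beta > n-1

# Angular factor: integral of |z_k|^beta over directions in R^{n-1} = R^2.
A, _ = quad(lambda t: np.abs(np.cos(t)) ** beta, 0, 2 * np.pi)

def I(R):
    """Integral of |z_1|^beta (1-|z|^2)/(1+|z|^2)^n over the ball B(0,R) in R^2."""
    val, _ = quad(lambda r: r ** (beta + 1) * (1 - r ** 2) / (1 + r ** 2) ** n,
                  0, R, limit=200)
    return A * val

for R in (50.0, 100.0):
    # Leading term predicted by the proof: I(R) ~ -A R^{beta-(n-1)} / (beta-(n-1)).
    ratio = I(R) * (beta - (n - 1)) / R ** (beta - (n - 1))
    assert abs(ratio + A) < 0.1 * A
```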

Proposition 3.2

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). For any i, \(1\leq i \leq p\), such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(\beta(y_{\ell_{i}})>n-1\), we have the following expansion:

$$\begin{aligned}& \biggl\langle \partial J(u),\frac{1}{\lambda_{i}} \frac{\partial \delta_{i}}{\partial(a_{i})_{k}} \biggr\rangle \\& \quad = - \frac{c_{3}}{\lambda_{i}} \alpha_{i}^{\frac{n}{n-2}}J(u)^{\frac {2n-3}{n-2}} b_{k} \operatorname {sign}(a_{i}-y_{\ell_{i}})_{k} \bigl\vert (a_{i}-y_{\ell _{i}})_{k} \bigr\vert ^{\beta-1} + o \biggl(\frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta -1}}{\lambda_{i}} \biggr) \\& \quad\quad {}+ O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta -j}}{\lambda_{i}^{j}} \Biggr) + o \biggl(\frac{1}{\lambda_{i}^{\gamma}} \biggr) + O \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr) \end{aligned}$$
(3.7)

for any \(\gamma\in(n-1, \min\{\beta, n\})\). Here \(c_{3}\) is a positive constant, and \((a_{i})_{k}\), \(k=1, \ldots, n-1\), denotes the kth component of \(a_{i}\) in some geodesic normal coordinate system.

Proof

The proof follows from the following expansion:

$$\begin{aligned} I =& \int_{S^{n-1}} H(x) \delta_{i}^{\frac{n}{n-2}} \frac{1}{\lambda_{i} } \frac{\partial \delta_{i}}{\partial(a_{i})_{k}} \,d\sigma_{g_{0}} \\ =& \int_{B(a_{i}, \mu)} \bigl(H(x)-H(a_{i}) \bigr) \delta_{i}^{\frac{n}{n-2}}\frac {1}{\lambda_{i} } \frac{\partial \delta_{i}}{\partial(a_{i})_{k}} + \int_{c_{B(a_{i}, \mu)}} \bigl( H(x)-H(a_{i}) \bigr) \frac{1}{\lambda_{i} }\delta_{i}^{\frac{n}{n-2}} \frac {\partial \delta_{i}}{\partial(a_{i})_{k}}. \end{aligned}$$

After stereographic projection,

$$\delta_{i}^{\frac{n}{n-2}}\frac{1}{\lambda_{i} } \frac{\partial \delta_{i}}{\partial(a_{i})_{k}} = (n-2) c_{0}^{\frac{2(n-1)}{n-2}}\frac {\lambda_{i}^{n}(x-a_{i})_{k}}{(1+\lambda_{i}^{2}\vert x-a_{i}\vert ^{2})^{n}}. $$

Thus,

$$\int_{c_{B(a_{i}, \mu)}} \bigl( H(x)-H(a_{i}) \bigr) \frac{1}{\lambda_{i} } \delta _{i}^{\frac{n}{n-2}}\frac{\partial \delta_{i}}{\partial(a_{i})_{k}}= O \biggl(\frac{1}{\lambda_{i}^{n}} \biggr)= o \biggl(\frac {1}{\lambda_{i}^{\gamma}} \biggr), \quad \forall \gamma< n. $$

Using now expansion (3.4) of H around \(a_{i}\), we obtain

$$H(x)-H(a_{i})= \sum_{j=1}^{n-1} \frac{D^{j}H(a_{i})(x-a_{i})^{j}}{j!} + o \bigl(\vert x-a_{i}\vert ^{\gamma} \bigr) $$

for any \(n-1< \gamma< \min\{n, \beta\}\). Therefore,

$$\begin{aligned} I =& (n-2) c_{0}^{\frac{2(n-1)}{n-2}}\sum _{j=1}^{n-1} \int_{B(0, \lambda _{i}\mu)} \frac{D^{j}H(a_{i})(z)^{j}}{j!\lambda_{i}^{j}} \frac{z_{k}}{(1+\vert z\vert ^{2})^{n}}\,dz \\ &{} + o \biggl( \int_{\mathbb{R}^{n}} \frac{\vert z\vert ^{\gamma+1}}{\lambda _{i}^{\gamma}(1+\vert z\vert ^{2})^{n}}\,dz \biggr) + o \biggl( \frac{1}{\lambda_{i}^{\gamma}} \biggr) \end{aligned}$$

by taking \(z= \lambda_{i}(x-a_{i})\). Observe now that

$$\begin{aligned} \int_{B(0, \lambda_{i}\mu)} \frac{DH(a_{i})(z) z_{k}}{(1+\vert z\vert ^{2})^{n}} \,dz =& \sum _{j=1}^{n-1}\frac{\partial H}{\partial x_{j}}(a_{i}) \int_{B(0, \lambda_{i}\mu)} \frac{z_{j} z_{k}}{(1+\vert z\vert ^{2})^{n}} \,dz \\ = &\frac{\partial H}{\partial x_{k}}(a_{i}) \biggl[ \int_{\mathbb {R}^{n-1}} \frac{z_{k}^{2}}{(1+\vert z\vert ^{2})^{n}} \,dz + O \biggl( \frac{1}{\lambda _{i}^{n-1}} \biggr) \biggr] \\ =& c \frac{\partial H}{\partial x_{k}}(a_{i}) + O \biggl( \frac {1}{\lambda_{i}^{n-1}} \biggr). \end{aligned}$$

Using \((f)_{\beta}\)-condition, we have

$$ \frac{\partial H}{\partial x_{k}}(a_{i})= b_{k} \beta \operatorname {sign}(a_{i}-y_{\ell_{i}})_{k} \bigl\vert (a_{i}-y_{\ell_{i}})_{k} \bigr\vert ^{\beta-1} + o \bigl(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1} \bigr). $$
(3.8)

Using (3.5) and (3.8), we obtain

$$\begin{aligned} I =& c_{3} b_{k} \operatorname {sign}(a_{i}-y_{\ell_{i}})_{k} \frac{\vert (a_{i}-y_{\ell _{i}})_{k}\vert ^{\beta-1}}{\lambda_{i}} \\ &{}+ o \biggl(\frac{\vert a_{i}- y_{\ell _{i}}\vert ^{\beta-1}}{\lambda_{i}} \biggr) +O \Biggl(\sum _{j=2}^{n-2}\frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta-j}}{\lambda _{i}^{j}} \Biggr) + o \biggl( \frac{1}{\lambda_{i}^{\gamma}} \biggr). \end{aligned}$$

Hence, Proposition 3.2 follows. □
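The moment computation used in the proof above (only the diagonal term \(\int z_{k}^{2}/(1+\vert z\vert ^{2})^{n}\,dz\) survives, the cross terms vanishing by oddness) can be verified exactly for \(n=3\), where the integration is over \(\mathbb{R}^{2}\); sympy evaluates the polar-coordinate form in closed form.

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
n = 3  # integrals below are over R^{n-1} = R^2, in polar coordinates (r, t)

# Off-diagonal moment: z_1 z_2 = r^2 cos(t) sin(t) integrates to zero over angles.
off_diag = sp.integrate(
    sp.integrate(r**2 * sp.cos(t) * sp.sin(t) / (1 + r**2)**n * r,
                 (t, 0, 2 * sp.pi)),
    (r, 0, sp.oo))
assert off_diag == 0

# Diagonal moment: z_1^2 = r^2 cos(t)^2 gives a positive constant c = pi/4.
diag = sp.integrate(
    sp.integrate(r**2 * sp.cos(t)**2 / (1 + r**2)**n * r, (t, 0, 2 * sp.pi)),
    (r, 0, sp.oo))
assert sp.simplify(diag - sp.pi / 4) == 0
```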

The next propositions deal with the case \(\beta\leq n-1\). We improve here the expansions given in [23] and [10].

Proposition 3.3

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). For any i, \(1\leq i \leq p\), such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(\beta(y_{\ell_{i}})=n-1\), we have the following two expansions:

$$\begin{aligned} \mathrm{(i)} \quad \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} + O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}- y_{\ell _{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}} \Biggr) \\ &{}+ O \biggl( \frac{\log\lambda_{i}}{\lambda_{i}^{n-1}} \biggr) + o \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.9)

Moreover, if \(\lambda_{i} \vert a_{i}-y_{\ell_{i}}\vert \) is bounded, then we have

$$\begin{aligned} \mathrm{(ii)} \quad \biggl\langle \partial J(u), \lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} \\ &{}+ c_{1}\alpha_{i}^{\frac{n}{n-2}} J(u)^{\frac{2n-3}{n-2}} \frac{(\sum_{k=1}^{n-1}b_{k})\log\lambda_{i}}{\lambda_{i}^{\beta}} + o \biggl( \frac{\log\lambda_{i}}{\lambda_{i}^{\beta}} \biggr). \end{aligned}$$
(3.10)

Proof

The proof follows from the previous arguments and [10]. □

Proposition 3.4

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). For any i, \(1\leq i \leq p\), such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(\beta(y_{\ell_{i}})=n-1\), we have the following expansion:

$$\begin{aligned}& \biggl\langle \partial J(u), \frac{1}{\lambda_{i}} \frac {\partial \delta_{i}}{\partial(a_{i})_{k}} \biggr\rangle \\ & \quad = - \frac{c_{3}}{\lambda_{i}} \alpha_{i}^{\frac{n}{n-2}}J(u)^{\frac {2n-3}{n-2}} b_{k} \operatorname {sign}(a_{i}-y_{\ell_{i}})_{k} \bigl\vert (a_{i}-y_{\ell _{i}})_{k} \bigr\vert ^{\beta-1} + o \biggl(\frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta -1}}{\lambda_{i}} \biggr) \\ & \quad\quad {}+ O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta -j}}{\lambda_{i}^{j}} \Biggr) + O \biggl(\frac{1}{\lambda_{i}^{n-1}} \biggr) + O \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.11)

Proof

The proof proceeds as that of Proposition 3.2. □

Proposition 3.5

Let \(u=\sum_{j=1}^{p} \alpha_{j} \delta_{j} \in V(p, \varepsilon)\). For any i, \(1\leq i \leq p\), such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(\beta(y_{\ell_{i}})< n-1\), we have:

$$\begin{aligned} \mathrm{(i)} \quad \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} + O \Biggl(\sum_{j=2}^{[\beta]} \frac{\vert a_{i}- y_{\ell _{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}} \Biggr) \\ &{}+ O \biggl( \frac{1}{\lambda_{i}^{\beta}} \biggr) + o \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.12)

Moreover, if \(\lambda_{i} \vert a_{i}-y_{\ell_{i}}\vert < \delta\), where δ is a fixed very small positive constant, then

$$\begin{aligned} \mathrm{(ii)} \quad \biggl\langle \partial J(u),\lambda_{i} \frac{\partial \delta_{i}}{\partial\lambda_{i}} \biggr\rangle =& - 2c_{2} J(u) \sum _{j \neq i} \alpha_{j} \lambda_{i} \frac{\partial \varepsilon_{i j}}{\partial \lambda_{i}} \\ &{}+ c\alpha_{i}^{\frac{n}{n-2}} J(u)^{\frac{2n-3}{n-2}} \frac{(\sum_{k=1}^{n-1}b_{k})}{\lambda_{i}^{\beta}} + o \biggl( \frac{1}{\lambda_{i}^{\beta}} \biggr) + o \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.13)

Proof

The proof follows from the proof of Proposition 3.1 and [10]. □

Proposition 3.6

Under the assumptions of Proposition 3.5, we have:

$$\begin{aligned} \mathrm{(i)} \quad \biggl\langle \partial J(u), \frac{1}{\lambda_{i}} \frac {\partial \delta_{i}}{\partial(a_{i})_{k}} \biggr\rangle =& - \frac{c_{3}}{\lambda_{i}} \alpha_{i}^{\frac{n}{n-2}}J(u)^{\frac {2n-3}{n-2}} b_{k} \operatorname {sign}(a_{i}-y_{\ell_{i}})_{k} \bigl\vert (a_{i}-y_{\ell _{i}})_{k} \bigr\vert ^{\beta-1} \\ &{}+ O \Biggl(\sum_{j=2}^{[\beta]} \frac{\vert a_{i}- y_{\ell_{i}}\vert ^{\beta -j}}{\lambda_{i}^{j}} \Biggr) + O \biggl(\frac{1}{\lambda_{i}^{\beta}} \biggr) + O \biggl( \sum _{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.14)

Moreover, if \(\lambda_{i} \vert a_{i}-y_{\ell_{i}}\vert \) is bounded, then we have

$$\begin{aligned} \mathrm{(ii)} \quad \biggl\langle \partial J(u), \frac{1}{\lambda_{i}} \frac {\partial \delta_{i}}{\partial(a_{i})_{k}} \biggr\rangle =& - c \alpha_{i}^{\frac {n}{n-2}}J(u)^{\frac{2n-3}{n-2}} \frac{b_{k}}{\lambda_{i}} \int_{\mathbb {R}^{n-1}} \bigl\vert z_{k}+ \lambda_{i}(a_{i}-y_{\ell_{i}})_{k} \bigr\vert ^{\beta} \\ &{}\times\frac{z_{k}}{(1+\vert z\vert ^{2})^{n}}\,dz + o \biggl(\frac{1}{\lambda _{i}^{\beta}} \biggr) + O \biggl( \sum_{j \neq i} \varepsilon_{i j} \biggr). \end{aligned}$$
(3.15)

Proof

The proof follows from that of Proposition 3.2 and [10]. □

4 Critical points at infinity

Using the estimates of the gradient vector field \((\partial J)\) obtained in Section 3, we characterize in this section the critical points at infinity associated with problem (1.1) under the \((f)_{\beta}\)-condition. First, we rule out the existence of critical points at infinity in \(V(p, \varepsilon)\), \(p\geq2\).

Theorem 4.1

Let H be a positive \(C^{1}\)-function on \(S^{n-1}\), \(n\geq3\), satisfying the \((f)_{\beta}\)-condition. There exists \(\eta>0\) such that if

$$n-2< \beta< (n-1)+\eta, $$

then the potential sets \(V(p, \varepsilon)\), \(p\geq2\), do not contain any critical points at infinity.

Proof

The proof is an immediate consequence of the following proposition. □

Proposition 4.2

Let H be a positive \(C^{1}\)-function on \(S^{n-1}\), \(n\geq3\), satisfying the \((f)_{\beta}\)-condition. There exists \(\eta>0\) such that if \(n-2<\beta<(n-1)+\eta\), then there exists a pseudo-gradient \(W_{1}\) in \(V(p,\varepsilon)\), \(p\geq2\), such that, for any \(u= \sum_{i=1}^{p}\alpha_{i} \delta_{i}\in V(p, \varepsilon)\), we have:

$$\begin{aligned}& \mathrm{(i)} \quad \bigl\langle \partial J(u),W_{1}(u) \bigr\rangle \leq-c \Biggl( \sum_{i=1}^{p} \frac{1}{\lambda_{i}^{\beta}}+ \sum_{i=1}^{p} \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} +\sum_{j \neq i} \varepsilon_{ij} \Biggr), \\& \mathrm{(ii)} \quad \biggl\langle \partial J(u+\overline{v}), W_{1}(u)+ \frac{\partial\overline{v}}{\partial(\alpha_{i},a_{i},\lambda _{i})} \bigl(W_{1}(u) \bigr) \biggr\rangle \leq-c \Biggl( \sum_{i=1}^{p} \frac{1}{\lambda_{i}^{\beta}}+ \sum_{i=1}^{p} \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} +\sum_{j \neq i} \varepsilon_{ij} \Biggr). \end{aligned}$$

Here c is a positive constant independent of u. Moreover, \(\vert W_{1} \vert \) is bounded, and the maximum of \(\lambda_{i}\), \(1 \leq i \leq p\), decreases along the flow-lines of \(W_{1}\).

Proof

Let \(u= \sum_{i=1}^{p}\alpha_{i} \delta_{i}\in V(p, \varepsilon)\), \(p\geq 2\). We order the \(\lambda_{i}\). Without loss of generality, we can assume that \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{p}\) and \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\), \(\forall i=1, \ldots, p\). For each index i, we denote by \(Z_{i}(u)\) and \(X_{i}(u)\) the vector fields

$$Z_{i}(u)= \lambda_{i} \frac{ \partial\delta_{i}}{\partial\lambda_{i}} \quad \mbox{and} \quad X_{i}(u) = \sum_{k=1}^{n-1} b_{k} \operatorname {sign}(a_{i} - y_{\ell_{i}})_{k} \frac{1}{\lambda_{i}}\frac{\partial {\delta}_{i}}{\partial(a_{i})_{k}}. $$

We then have the following lemmas.

Lemma 4.3

For any \(i=2, \ldots, p\),

$$\bigl\langle \partial J(u), Z_{i}(u) \bigr\rangle = -2c_{2}J(u)\sum_{j\neq i}\alpha_{j} \lambda_{i}\frac{\partial\varepsilon _{ij}}{\partial\lambda_{i}} + o \biggl(\sum _{j\neq i}\varepsilon _{ij} \biggr)+ o \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda _{i}} \biggr). $$

Proof

Using the expansions of Propositions 3.1, 3.3, and 3.5, for all \(i = 2, \ldots, p\) and any \(\beta>n-2\), we have

$$ \frac{1}{\lambda_{i}^{\beta}}= o (\varepsilon_{1i} ) \quad \mbox{as } \lambda_{i}\rightarrow+\infty. $$
(4.1)

Indeed,

$$\frac{ 1}{\lambda_{i}^{\beta}} \varepsilon_{1i}^{-1}= \frac{ 1}{\lambda_{i}^{\beta}} \biggl(\frac{\lambda_{i}}{\lambda_{1}}+ \frac {\lambda_{1}}{\lambda_{i}} + \lambda_{1}\lambda_{i}\vert a_{i}-a_{1} \vert ^{2} \biggr)^{\frac {n-2}{2} }\leq c \frac{ \lambda_{i}^{n-2}}{\lambda_{i}^{\beta}} = c\,\lambda_{i}^{n-2-\beta}\longrightarrow0 \quad \mbox{as } \lambda_{i}\rightarrow+\infty \mbox{ since } \beta>n-2. $$
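The intermediate bound on \(\varepsilon_{1i}^{-1}\) can be justified as follows (assuming, as is standard for the set \(V(p,\varepsilon)\), that \(\lambda_{1}\geq1\) and that the points \(a_{i}\) range in a bounded set; these assumptions are not restated in the display above):

```latex
% Each term inside the bracket is at most c\lambda_i^2:
% \lambda_i/\lambda_1 \le \lambda_i \le \lambda_i^2   (since \lambda_1 \ge 1 \le \lambda_i),
% \lambda_1/\lambda_i \le 1,
% \lambda_1\lambda_i |a_i - a_1|^2 \le c\,\lambda_i^2 (since \lambda_1 \le \lambda_i
%                                                      and |a_i - a_1| is bounded), hence:
\varepsilon_{1i}^{-1}
  = \Bigl(\frac{\lambda_{i}}{\lambda_{1}}
        + \frac{\lambda_{1}}{\lambda_{i}}
        + \lambda_{1}\lambda_{i}\,\vert a_{i}-a_{1}\vert^{2}\Bigr)^{\frac{n-2}{2}}
  \leq \bigl(c\,\lambda_{i}^{2}\bigr)^{\frac{n-2}{2}}
  = c\,\lambda_{i}^{\,n-2}.
```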

Concerning \(\frac{ \log\lambda_{i}}{\lambda_{i}^{n-1}}\), which appears in the case \(\beta\geq n-1\), we have

$$ \frac{ \log\lambda_{i}}{\lambda_{i}^{n-1}}= o (\varepsilon _{1i} ) \quad \mbox{as } \lambda_{i}\rightarrow+\infty. $$
(4.2)

Indeed,

$$\frac{ \log\lambda_{i}}{\lambda_{i}^{n-1}} \varepsilon_{1i}^{-1}\leq c \frac{ \log\lambda_{i}}{\lambda_{i}}\longrightarrow0 \quad \mbox{as } \lambda_{i}\rightarrow+\infty. $$

Finally, we discuss the term \(O (\sum_{j\geq2}\frac {\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}} )\), which appears in all cases where \(\beta>1\). We distinguish three cases.

  • If \(\beta>n-1\) and \(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta }\geq{\delta}\), then we have

    $$ O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-j}}{\lambda _{i}^{j}} \Biggr)= o \biggl(\frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda _{i}} \biggr) \quad \mbox{as } \lambda_{i}\rightarrow+\infty. $$
    (4.3)

    Indeed,

    $$\begin{aligned} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}}\frac{\lambda _{i}}{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}} =& \frac{1}{(\lambda_{i}\vert a_{i}-y_{\ell _{i}}\vert )^{j-1}} \\ \leq & \biggl(\frac{1}{\delta} \biggr)^{\frac{j-1}{\beta}} \frac {1}{\lambda_{i}^{\frac{(j-1)(\beta-(n-1))}{\beta}}}, \end{aligned}$$

    which tends to 0 as \(\lambda_{i}\rightarrow+\infty\) since \(\beta>n-1\) and \(j\geq2\).
  • If \(\beta=n-1\) and \(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \geq\frac {1}{\delta}\), then we have

    $$ O \Biggl(\sum_{j=2}^{n-2} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-j}}{\lambda _{i}^{j}} \Biggr)= o \biggl(\frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda _{i}} \biggr) \quad \mbox{provided that } \delta \mbox{ is small}. $$
    (4.4)
  • If \(\beta< n-1\) and \(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \geq{\delta}\), then we distinguish two subcases. If \(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \geq\frac {1}{\delta}\), then we have

    $$ O \Biggl(\sum_{j=2}^{[\beta]} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta -j}}{\lambda_{i}^{j}} \Biggr)= o \biggl(\frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta -1}}{\lambda_{i}} \biggr) \quad \mbox{provided that } \delta \mbox{ is small}, $$
    (4.5)

    and if \(\delta\leq\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \leq\frac{1}{\delta }\), then we have

    $$ O \Biggl(\sum_{j=2}^{[\beta]} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta -j}}{\lambda_{i}^{j}} \Biggr)= O \biggl(\frac{1}{\lambda_{i}^{\beta}} \biggr)= o ( \varepsilon_{1i} )\quad \mbox{by (4.1)}. $$
    (4.6)
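For instance, estimate (4.4) follows from the same identity as in the first case: under the assumption \(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \geq\frac{1}{\delta}\), for \(2\leq j\leq n-2\),

```latex
\frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-j}}{\lambda_{i}^{j}}
 \cdot\frac{\lambda_{i}}{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}
  =\frac{1}{(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert )^{j-1}}
  \leq\delta^{\,j-1},
% which is arbitrarily small for \delta small, since j \ge 2.
```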

This concludes the proof of Lemma 4.3. □

Lemma 4.4

For any \(i= 1, \ldots, p\),

$$\bigl\langle \partial J(u), X_{i}(u) \bigr\rangle \leq -c \frac {\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda_{i}}+ o \biggl(\frac{1}{\lambda _{i}^{n-1}} \biggr) + O \biggl(\sum _{j\neq i}\varepsilon_{ij} \biggr). $$

Proof

It follows from the expansions of Propositions 3.2, 3.4, and 3.6 and from estimates (4.1)-(4.6). □

Lemma 4.5

Let \(m>0\) be a small constant. Then

$$\Biggl\langle \partial J(u), \sum_{i=2}^{p} \bigl(-2^{i} Z_{i}(u) + m X_{i}(u) \bigr) \Biggr\rangle \leq -c \Biggl( \sum_{i=2}^{p} \frac{1}{\lambda_{i}^{n-1}}+ \sum_{i=2}^{p} \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} +\sum_{j \neq i} \varepsilon_{ij} \Biggr). $$

Proof

Using Lemmas 4.3 and 4.4 and (4.1), we get

$$\Biggl\langle \partial J(u), \sum_{i=2}^{p} \bigl(-2^{i} Z_{i}(u) + m X_{i}(u) \bigr) \Biggr\rangle \leq c \Biggl[ \sum_{i=2}^{p}\sum _{j\neq i}2^{i} \lambda _{i} \frac{\partial \varepsilon_{ij}}{\partial\lambda_{i}} - \sum_{i=2}^{p} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda_{i}} \Biggr] + o \biggl(\sum_{j \neq i} \varepsilon_{ij} \biggr), $$

provided that m is small enough. Moreover, for \(1\leq i< j \leq p\), we have

$$2^{i}\lambda_{i} \frac{\partial \varepsilon_{ij}}{\partial\lambda_{i}} + 2^{j} \lambda_{j} \frac{\partial \varepsilon_{ij}}{\partial\lambda _{j}}\leq-c\varepsilon_{ij}. $$
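This inequality can be checked directly from the explicit form of the interaction term. A sketch, writing \(\varepsilon_{ij}=t_{ij}^{-\frac{n-2}{2}}\) with \(t_{ij}=\frac{\lambda_{i}}{\lambda_{j}}+\frac{\lambda_{j}}{\lambda_{i}}+\lambda_{i}\lambda_{j}\vert a_{i}-a_{j}\vert ^{2}\), and using the ordering \(\lambda_{i}\leq\lambda_{j}\) for \(i< j\) together with the fact that \(t_{ij}\) is large in \(V(p,\varepsilon)\):

```latex
% Differentiating t_{ij}^{-(n-2)/2}:
\lambda_{i}\frac{\partial\varepsilon_{ij}}{\partial\lambda_{i}}
  =-\frac{n-2}{2}\,t_{ij}^{-\frac{n}{2}}
   \Bigl(\frac{\lambda_{i}}{\lambda_{j}}-\frac{\lambda_{j}}{\lambda_{i}}
        +\lambda_{i}\lambda_{j}\vert a_{i}-a_{j}\vert ^{2}\Bigr),
% and symmetrically in \lambda_j; summing with the weights 2^i and 2^j:
2^{i}\lambda_{i}\frac{\partial\varepsilon_{ij}}{\partial\lambda_{i}}
 +2^{j}\lambda_{j}\frac{\partial\varepsilon_{ij}}{\partial\lambda_{j}}
  =-\frac{n-2}{2}\,t_{ij}^{-\frac{n}{2}}
   \Bigl[\bigl(2^{j}-2^{i}\bigr)
         \Bigl(\frac{\lambda_{j}}{\lambda_{i}}-\frac{\lambda_{i}}{\lambda_{j}}\Bigr)
        +\bigl(2^{i}+2^{j}\bigr)\lambda_{i}\lambda_{j}\vert a_{i}-a_{j}\vert ^{2}\Bigr].
% Since 2^j - 2^i \ge 2^i for j > i, the bracket is at least
% 2^i ( t_{ij} - 2\lambda_i/\lambda_j ) \ge 2^i ( t_{ij} - 2 ) \ge 2^{i-1} t_{ij}
% once t_{ij} \ge 4, and t_{ij}^{-n/2}\, t_{ij} = \varepsilon_{ij},
% which yields the bound -c\,\varepsilon_{ij}.
```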

Therefore,

$$\Biggl\langle \partial J(u), \sum_{i=2}^{p} \bigl(-2^{i} Z_{i}(u) + m X_{i}(u) \bigr) \Biggr\rangle \leq -c \Biggl(\sum_{j \neq i} \varepsilon_{ij}+ \sum_{i=2}^{p} \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta -1}}{\lambda_{i}} \Biggr). $$

Now, using (4.1), we can replace \(-\sum_{j \neq i} \varepsilon_{ij}\) by \(-\sum_{i=2}^{p}\frac{1}{\lambda _{i}^{\beta}}\). This concludes the proof of Lemma 4.5 since

$$ \bigl\vert \nabla H(a_{i}) \bigr\vert \sim \vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}. $$
(4.7)

 □

It remains to handle the index \(i=1\). Let ψ be a smooth cut-off function satisfying:

$$\begin{aligned}& \psi: \mathbb{R} \longrightarrow \mathbb{R} \\& t \longmapsto \psi(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} 1 & \hbox{if } \vert t\vert < \frac{\delta}{2},\\ 0 & \hbox{if } \vert t\vert \geq{\delta}. \end{array}\displaystyle \right . \end{aligned}$$
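Such a cut-off can be taken smooth. One standard construction (an illustrative choice, not imposed by the argument): with \(f(s)=e^{-1/s}\) for \(s>0\) and \(f(s)=0\) for \(s\leq0\), set

```latex
% \psi equals 1 on [-\delta/2, \delta/2] and vanishes outside (-\delta, \delta):
\psi(t)=\frac{f(\delta-\vert t\vert)}
             {f(\delta-\vert t\vert)+f\bigl(\vert t\vert-\frac{\delta}{2}\bigr)}.
% If |t| \le \delta/2, then f(|t|-\delta/2)=0 and f(\delta-|t|)>0, so \psi(t)=1.
% If |t| \ge \delta, then f(\delta-|t|)=0, so \psi(t)=0.
% The two arguments are never both nonpositive, so the denominator is positive
% and \psi is smooth (near t=0, \psi is identically 1).
```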

Lemma 4.6

There exists \(\eta>0\) such that, for any \(i=1, \ldots, p\) satisfying \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(n-1<\beta<n-1+\eta\), we have

$$\begin{aligned}& \Biggl\langle \partial J(u), \psi \bigl(\lambda_{i}^{n-1} \vert a_{i}-y_{\ell _{i}}\vert ^{\beta} \bigr) \Biggl(- \sum_{k=1}^{n-1}b_{k} \Biggr) Z_{i}(u) + X_{i}(u) \Biggr\rangle \\& \quad \leq -c \biggl( \frac{1}{\lambda_{i}^{\beta}}+ \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} \biggr) + O \biggl( \sum_{j \neq i} \varepsilon_{ij} \biggr). \end{aligned}$$

Proof

If \(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\leq\frac{\delta }{2}\), then in the second expansion of Proposition 3.1, we have

$$O \biggl(\frac{1}{\lambda_{i}^{n-1}} \biggr)= o \biggl(\frac{\sum_{k=1}^{n-1}b_{k}}{(\beta-(n-1))\lambda_{i}^{n-1}} \biggr) $$

by taking \(0< (\beta-(n-1))< \eta\) with η small enough, since the factor \(\frac{1}{\beta-(n-1)}\) then becomes arbitrarily large. Therefore, we get

$$ \Biggl\langle \partial J(u), \Biggl(-\sum _{k=1}^{n-1}b_{k} \Biggr) Z_{i}(u) \Biggr\rangle \leq - \frac{c}{\lambda_{i}^{n-1}}+ O \biggl(\sum _{j \neq i} \varepsilon_{ij} \biggr). $$
(4.8)

Hence, Lemma 4.6 follows in this case from Lemma 4.4 and from (4.7) and (4.8).

In the case where \(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta }\geq\frac{\delta}{2}\), using the expansion of Proposition 3.2 and (4.3), we obtain

$$ \bigl\langle \partial J(u), X_{i}(u) \bigr\rangle \leq -c \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda_{i}} + o \biggl(\frac {1}{\lambda_{i}^{\gamma}} \biggr)+ O \biggl(\sum _{j \neq i} \varepsilon_{ij} \biggr), $$
(4.9)

where γ is any real number in \((n-1, \min\{\beta, n\})\).

Choosing γ in \((\frac{n\beta-(n-1)}{\beta}, \min\{\beta , n\})\), we then have

$$\frac{1}{\lambda_{i}^{\gamma}} \frac{\lambda_{i}}{\vert a_{i}-y_{\ell _{i}}\vert ^{\beta-1}}\leq c\,\lambda_{i}^{1-\gamma}\,\lambda_{i}^{\frac{(n-1)(\beta-1)}{\beta}} = c \frac{1}{\lambda_{i}^{\frac{\beta(\gamma-n)+ (n-1)}{\beta}}}= o(1), $$

since \(\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}\geq c\, \lambda_{i}^{-\frac{(n-1)(\beta-1)}{\beta}}\) by the assumption \(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\geq\frac{\delta}{2}\), and the exponent \(\frac{\beta(\gamma-n)+(n-1)}{\beta}\) is positive by the choice of γ.

Thus,

$$\begin{aligned} \bigl\langle \partial J(u), X_{i}(u) \bigr\rangle \leq& -c \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda_{i}}+ O \biggl(\sum_{j \neq i} \varepsilon_{ij} \biggr) \\ \leq& -\frac{c}{2} \biggl( \frac{\vert a_{i}-y_{\ell_{i}}\vert ^{\beta-1}}{\lambda_{i}} + \frac {1}{\lambda_{i}^{\beta}} \biggr)+ O \biggl(\sum_{j \neq i} \varepsilon_{ij} \biggr) \end{aligned}$$
(4.10)

since

$$\frac{1}{\lambda_{i}^{\beta}}= o \biggl( \frac{\vert a_{i}-y_{\ell _{i}}\vert ^{\beta-1}}{\lambda_{i}} \biggr). $$

Hence, Lemma 4.6 follows from the first expansion of Proposition 3.1 and from (4.3), (4.7), and (4.10). □

Lemma 4.7

For any \(i=1, \ldots, p\) such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(1<\beta\leq n-1\), we have

$$\Biggl\langle \partial J(u), \psi \bigl(\lambda_{i}\vert a_{i}-y_{\ell_{i}}\vert \bigr) \Biggl(-\sum _{k=1}^{n-1}b_{k} \Biggr) Z_{i}(u) + X_{i}(u) \Biggr\rangle \leq -c \biggl( \frac{1}{\lambda_{i}^{\beta}}+ \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} \biggr) + O \biggl(\sum_{j \neq i} \varepsilon_{ij} \biggr). $$

Proof

We refer the reader to the proof of identity (4.3) in [10]. □

Corollary 4.8

For any \(i=1, \ldots, p\) such that \(a_{i}\in B(y_{\ell_{i}}, \rho)\), \(y_{\ell_{i}}\in\mathcal{K}\) with \(1<\beta(y_{\ell_{i}}) < n-1+\eta\), denote

$$\begin{aligned}& Y_{i}(u)= \psi\bigl(\lambda_{i}^{n-1}\vert a_{i}-y_{\ell_{i}}\vert ^{\beta}\bigr) \Biggl(-\sum _{k=1}^{n-1}b_{k}\Biggr) Z_{i}(u) + X_{i}(u)\quad \textit{if }\beta(y_{\ell_{i}}) \in(n-1, n-1+\eta), \\& Y_{i}(u)= \psi\bigl(\lambda_{i}\vert a_{i}-y_{\ell_{i}} \vert \bigr) \Biggl(-\sum_{k=1}^{n-1}b_{k} \Biggr) Z_{i}(u) + X_{i}(u)\quad \textit{if } \beta(y_{\ell_{i}})\in(1, n-1]. \end{aligned}$$

Then we have:

$$\bigl\langle \partial J(u), Y_{i}(u) \bigr\rangle \leq -c \biggl( \frac{1}{\lambda_{i}^{\beta}}+ \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} \biggr) + O \biggl(\sum _{j \neq i} \varepsilon_{ij} \biggr). $$

Now, if \(\lambda_{1}\ll\lambda_{2}\), then let

$$W_{1}(u)= \sum_{i=2}^{p} \bigl(-2^{i} Z_{i}(u) + m X_{i}(u) \bigr) +m Y_{1}(u). $$

By Lemma 4.5 and Corollary 4.8, for m small enough, we obtain

$$\bigl\langle \partial J(u), W_{1}(u) \bigr\rangle \leq-c \Biggl( \sum _{i=1}^{p} \frac{1}{\lambda_{i}^{\beta}}+ \sum _{i=1}^{p} \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} +\sum _{j \neq i} \varepsilon_{ij} \Biggr). $$

If \(\lambda_{1}\sim\lambda_{2}\), then let

$$W_{1}(u)= \sum_{i=2}^{p} \bigl(-2^{i} Z_{i}(u) + m X_{i}(u) \bigr) + m X_{1}(u). $$

By Lemma 4.4, Lemma 4.5, and (4.7) we get

$$\bigl\langle \partial J(u), W_{1}(u) \bigr\rangle \leq-c \Biggl( \sum _{i=1}^{p} \frac{1}{\lambda_{i}^{\beta}}+ \sum _{i=1}^{p} \frac{\vert \nabla H(a_{i})\vert }{\lambda_{i}} +\sum _{j \neq i} \varepsilon_{ij} \Biggr). $$

This concludes the proof of claim (i) of Proposition 4.2. By construction, \(W_{1}\) is bounded, and the maximum of \(\lambda _{i}(s)\), \(i=1, \ldots, p\), decreases along the flow lines of \(W_{1}\). Claim (ii) of Proposition 4.2 follows (as in Appendix 2 of [14]) from (i) and the fact that \(\Vert \bar{v}\Vert ^{2}\) is small with respect to the absolute value of the upper bound of claim (i) (see Proposition 2.4 of [10], which is valid for any \(\beta>1\)). This completes the proof of Proposition 4.2. □

In the following, we characterize the critical points at infinity in \(V(1, \varepsilon)\).

Theorem 4.9

Let H be a positive \(C^{1}\)-function on \(S^{n-1}\), \(n\geq3\), satisfying the \((f)_{\beta}\)-condition. There exists \(\eta>0\) such that if

$$1< \beta< (n-1)+\eta, $$

then the only critical points at infinity of J in \(V(1, \varepsilon )\) are

$$(y)_{\infty}:=\frac{1}{H(y)^{\frac{n-2}{2}}} {\delta}_{(y, \infty)}, \quad y \in \mathcal {K}^{+}. $$

The Morse index of \((y)_{\infty}\) is equal to \(i(y)_{\infty} :=(n-1)-\widetilde{i}(y)\).

Proof

Let \(u= \alpha_{1}\delta_{(a_{1}, \lambda_{1})}\in V(1, \varepsilon)\). We may assume that \(a_{1} \in B(y_{\ell_{1}}, \rho)\), \(y_{\ell_{1}}\in\mathcal {K}\), \(\rho>0\). Using the notation and the result of Corollary 4.8, we obtain

$$\begin{aligned}& \mathrm{(i)} \quad \bigl\langle \partial J(u), Y_{1}(u) \bigr\rangle \leq-c \biggl( \frac{1}{\lambda_{1}^{\beta}}+ \frac{\vert \nabla H(a_{1})\vert }{\lambda_{1}} \biggr), \\& \mathrm{(ii)}\quad \biggl\langle \partial J(u+\overline{v}), Y_{1}(u)+ \frac{\partial\overline{v}}{\partial(\alpha,a,\lambda)} \bigl(Y_{1}(u) \bigr) \biggr\rangle \leq -c \biggl( \frac{1}{\lambda_{1}^{\beta}}+ \frac{\vert \nabla H(a_{1})\vert }{\lambda_{1}} \biggr). \end{aligned}$$

In addition, from the construction of \(Y_{1}\) we observe that the Palais-Smale condition is satisfied along each flow line of \(Y_{1}\) as long as the concentration point \(a_{1}(s)\) does not enter a neighborhood of a point \(y\in\mathcal {K}^{+}\), since \(\lambda _{1}(s)\) decreases along the flow line in this region. If, on the other hand, \(a_{1}(s)\) is near a point \(y_{\ell_{1}}\in\mathcal{K}^{+}\), then \(\lambda_{1}(s)\) increases and goes to +∞, and we obtain a critical point at infinity. In this region, after a suitable change of variables, the functional J can be expanded as

$$\begin{aligned} J(\alpha_{1} \delta_{(a_{1}, \lambda_{1})} + \bar{v}) =& J( \widetilde{\alpha_{1} }\delta_{(\widetilde{a_{1}}, \widetilde {\lambda_{1}})}) \\ =& \frac{S_{n}}{\widetilde{\alpha_{1}}^{\frac {4}{n-2}}H(\widetilde{a_{1}})^{\frac{n-2}{2}}} \biggl(1+ \frac{(-\sum_{k=1}^{n-1}b_{k})}{\widetilde{\lambda_{1}}^{\beta}} \biggr). \end{aligned}$$

Thus, the index of such a critical point at infinity is \(n-1-\widetilde{i}(y)\). Since J behaves in this region as \(\frac {1}{H^{\frac{n-2}{2}}}\), this finishes the proof of Theorem 4.9. □

The next proposition is extracted from [10], Lemma 4.4. As mentioned in [10], it remains valid for any \(\beta>\frac{n-2}{2}\).

Proposition 4.10

Let w be a solution of (1.1). Assume that the function H satisfies condition \((f)_{\beta}\) with \(\beta>\frac{n-2}{2}\). Then, for each \(p \in\mathbb{N}^{\star}\), there is no critical point at infinity in \(V(p, \varepsilon, w)\).

5 Proof of the existence results

5.1 Proof of Theorem 1.1

By Theorems 4.1 and 4.9 there exists \(\eta>0\) such that if the order of flatness \(\beta(y)\) of each critical point y of H lies in \((n-2, n-1+\eta)\), then the only critical points at infinity are \((y)_{\infty}:=\frac{1}{H(y)^{\frac{n-2}{2}}} {\delta}_{(y, \infty)}\), \(y \in\mathcal {K}^{+}\). For each \(y\in \mathcal{K}^{+}\), we denote by \(W_{u}^{\infty}(y)_{\infty}\) the unstable manifold of the critical point at infinity \((y)_{\infty}\). Recall that the index \(i(y)_{\infty}\) of \((y)_{\infty}\) is equal to the dimension of \(W_{u}^{\infty}(y)_{\infty}\). Deforming \(\Sigma^{+}\) by the gradient flow of \((-\partial J)\) and using the deformation lemma (see [28]), we get that

$$ \Sigma^{+} \simeq\bigcup_{y\in\mathcal{K}^{+}}W_{u}^{\infty}(y)_{\infty}\cup\bigcup_{w; \partial J(w)=0}W_{u}(w), $$
(5.1)

where ≃ denotes a retract by deformation.

It follows from this deformation retract that problem (1.1) necessarily has a solution w. Otherwise, it would follow from (5.1) that

$$1= \chi \bigl(\Sigma^{+} \bigr)= \sum_{y\in\mathcal{K}^{+}}(-1)^{n-1-\widetilde{i}(y)}, $$

where χ denotes the Euler-Poincaré characteristic, and such an equality contradicts the assumption of Theorem 1.1.

Now, for generic H, it follows from the Sard-Smale theorem that all the solutions of (1.1) are nondegenerate. Thus, we derive from (5.1), taking the Euler-Poincaré characteristics of both sides, that

$$1= \chi \bigl(\Sigma^{+} \bigr)= \sum_{y\in\mathcal{K}^{+}}(-1)^{n-1-\widetilde {i}(y)} + \sum_{w; \partial J(w)=0}(-1)^{i(w)}, $$

where \(i(w)\) is the Morse index of w. It then follows that

$$\biggl\vert 1-\sum_{y\in\mathcal{K}^{+}}(-1)^{n-1-\widetilde{i}(y)} \biggr\vert \leq\sharp \bigl\{ w, w>0, \partial J(w)=0 \bigr\} . $$
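To illustrate the counting (with a purely hypothetical configuration of critical points, not taken from this paper): if \(\mathcal{K}^{+}\) consisted of two points \(y_{1}, y_{2}\) with \(\widetilde{i}(y_{1})=n-1\) and \(\widetilde{i}(y_{2})=n-2\), then

```latex
\sum_{y\in\mathcal{K}^{+}}(-1)^{n-1-\widetilde{i}(y)}
  =(-1)^{0}+(-1)^{1}=0\neq1,
% so (1.1) is solvable, and the inequality above guarantees at least
% |1 - 0| = 1 positive solution.
```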

5.2 Proof of Theorem 1.2

The proof follows from the description of the critical points at infinity given in Theorem 4.9 and the proof of Theorem 1.1 of [10].