Introduction

The Korteweg-de Vries (KdV) and the Kuramoto-Sivashinsky (KS) equations have very different properties because they do not belong to the same class of partial differential equations (PDEs). The first one is a third-order nonlinear dispersive equation

$$\displaystyle\begin{array}{rcl} y_{t} + y_{x} + y_{xxx} + yy_{x} = 0,& &{}\end{array}$$
(1)

and the second one is a fourth-order nonlinear parabolic equation

$$\displaystyle\begin{array}{rcl} u_{t} + u_{xxxx} +\lambda u_{xx} + uu_{x} = 0,& &{}\end{array}$$
(2)

where λ > 0 is called the anti-diffusion parameter. However, they have one important characteristic in common. They are both used to model nonlinear propagation phenomena in the space x-direction when the variable t stands for time. The KdV equation serves as a model for the propagation of waves on shallow water surfaces (Korteweg and de Vries 1895), and the KS equation models front propagation in reaction-diffusion phenomena including some instability effects (Kuramoto and Tsuzuki 1975; Sivashinsky 1977).

From a control point of view, a new common characteristic arises. Because of the order of the spatial derivatives involved, when studying these equations on a bounded interval [0, L], two boundary conditions have to be imposed at the same point, for instance, at x = L. Thus, we can consider control systems where we control one boundary condition but not all the boundary data at one endpoint of the interval. This configuration is not possible for the classical wave and heat equations, where only one boundary condition exists at each endpoint, and therefore controlling one or all the boundary data at one point is the same.

The KdV equation being of third order in space, three boundary conditions have to be imposed: one at the left endpoint x = 0 and two at the right endpoint x = L. For the KS equation, four boundary conditions are needed to get a well-posed system, two at each endpoint. We will focus on the cases where Dirichlet and Neumann boundary conditions are considered because a lack of controllability appears. This happens for some special values of the length of the interval for the KdV equation and depends on the anti-diffusion coefficient λ for the KS equation.

The particular cases where the lack of controllability occurs can be seen as isolated anomalies. However, those phenomena give us important information on the systems. In particular, any method independent of the value of those constants cannot control or stabilize the system when acting from the control input where the trouble appears. In all of these cases, for both the KdV and the KS equations, the space of uncontrollable states is finite dimensional, and therefore some methods coming from the control of ordinary differential equations can be applied.

General Definitions

Infinite-dimensional control systems described by PDEs have attracted a lot of attention since the 1970s. In this framework, the state of the control system is given by the solution of an evolution PDE. This solution can be seen as a trajectory in an infinite-dimensional Hilbert space H, for instance, the space of square integrable functions or some Sobolev space. Thus, for any time t, the state belongs to H. Concerning the control input, it is either an internal force distributed in the domain, a pointwise force localized within the domain, or some boundary data, as considered in this article. For any time t, the control belongs to a control space U, which can be, for instance, a space of bounded functions.

The main control properties mentioned in this article are controllability, stability, and stabilization. A control system is said to be exactly controllable if the system can be driven from any initial state to any other state in finite time. This kind of property holds, for instance, for hyperbolic systems such as the wave equation. The notion of null-controllability means that the system can be driven to the origin from any initial state. The main example for this property is the heat equation, which presents regularizing effects: even if the initial data is discontinuous, right after t = 0 the solution of the heat equation becomes very smooth, and therefore it is not possible to impose a discontinuous final state.

A system is said to be asymptotically stable if the solutions of the system without any control converge, as time goes to infinity, to a stationary solution of the PDE. When this convergence holds with a control depending at each time on the state of the system (feedback control), the system is said to be stabilizable by means of a feedback control law.

All these properties have local versions when a smallness condition for the initial and/or the final state is added. This local character is normally due to the nonlinearity of the system.

The KdV Equation

The classical approach to deal with nonlinearities is first to linearize the system around a given state or trajectory, then to study the linear system and finally to go back to the nonlinear one by means of an inversion argument or a fixed-point theorem. Linearizing (1) around the origin, we get the equation

$$\displaystyle\begin{array}{rcl} y_{t} + y_{x} + y_{xxx} = 0,& &{}\end{array}$$
(3)

which can be studied on a finite interval [0, L] under the following three boundary conditions:

$$\displaystyle\begin{array}{rcl} & & y(t,0) = h_{1}(t),\quad y(t,L) = h_{2}(t),\quad \text{ and } \\ & & \quad y_{x}(t,L) = h_{3}(t). {}\end{array}$$
(4)

Thus, viewing \(h_{1}(t),h_{2}(t),h_{3}(t) \in \mathbb{R}\) as controls and the solution \(y(t,\cdot ) : [0,L] \rightarrow \mathbb{R}\) as the state, we can consider the linear control system (3)–(4) and the nonlinear one (1)–(4).

We will report on the role of each control input when the other two are off. The tools used are mainly the controllability-observability duality, Carleman estimates, the multiplier method, the compactness-uniqueness argument, the backstepping method, and fixed-point theorems. Surprisingly, the control properties of the system depend strongly on the location of the controls.

Theorem 1

The linear KdV system (3)–(4)  is:

  1. Null-controllable when controlled from \(h_{1}\) (i.e., \(h_{2} = h_{3} = 0\)) (Glass and Guerrero 2008).

  2. Exactly controllable when controlled from \(h_{2}\) (i.e., \(h_{1} = h_{3} = 0\)) if and only if L does not belong to a set O of critical lengths defined in Glass and Guerrero (2010).

  3. Exactly controllable when controlled from \(h_{3}\) (i.e., \(h_{1} = h_{2} = 0\)) if and only if L does not belong to a set of critical lengths N defined in Rosier (1997).

  4. Asymptotically stable to the origin if L ∉ N and no control is applied (Perla Menzala et al. 2002).

  5. Stabilizable by means of a feedback law using \(h_{1}\) only (i.e., \(h_{2} = h_{3} = 0\)) (Cerpa and Coron 2013).

If L ∈ N or L ∈ O, one says that L is a critical length, since the linear control system (3)–(4) loses controllability properties when only one control input is applied. In those cases, there exists a finite-dimensional subspace of L2(0, L) which is unreachable from 0 for the linear system. The sets N and O contain infinitely many critical lengths, but they are countable sets.
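The set N admits an explicit description, namely N = {2π√((k² + kl + l²)∕3) : k, l positive integers} (Rosier 1997). A short numerical sketch (Python) enumerating the smallest critical lengths from this formula:

```python
from math import pi, sqrt

def critical_lengths(kmax):
    """Rosier's critical lengths N = {2*pi*sqrt((k^2 + k*l + l^2)/3)}
    over positive integers k, l, restricted to k, l <= kmax."""
    lengths = {2 * pi * sqrt((k**2 + k * l + l**2) / 3)
               for k in range(1, kmax + 1)
               for l in range(1, kmax + 1)}
    return sorted(lengths)

N = critical_lengths(5)
print(N[:4])  # smallest critical lengths; N[0] = 2*pi (from k = l = 1)
```

For L = 2π, the smallest critical length, the function 1 − cos x solves the stationary equation \(y_{x} + y_{xxx} = 0\) and satisfies all three homogeneous boundary conditions in (4); this is the classical example of an unreachable direction.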

When one is allowed to use more than one boundary control input, there is no critical spatial domain, and the exact controllability holds for any L > 0. This is proved in Zhang (1999) when three boundary controls are used. The case of two control inputs is solved in Rosier (1997), Glass and Guerrero (2010), and Cerpa et al. (2013).

The previous results concern the linearized control system. Considering the nonlinearity \(yy_{x}\), we obtain the original KdV control system and the following results.

Theorem 2

The nonlinear KdV system (1)–(4)  is:

  1. Locally null-controllable when controlled from \(h_{1}\) (i.e., \(h_{2} = h_{3} = 0\)) (Glass and Guerrero 2008).

  2. Locally exactly controllable when controlled from \(h_{2}\) (i.e., \(h_{1} = h_{3} = 0\)) if L does not belong to the set O of critical lengths (Glass and Guerrero 2010).

  3. Locally exactly controllable when controlled from \(h_{3}\) (i.e., \(h_{1} = h_{2} = 0\)). If L belongs to the set of critical lengths N, then a minimal time of control may be required (see Cerpa 2014).

  4. Asymptotically stable to the origin if L ∉ N and no control is applied (Perla Menzala et al. 2002).

  5. Locally stabilizable by means of a feedback law using \(h_{1}\) only (i.e., \(h_{2} = h_{3} = 0\)) (Cerpa and Coron 2013).

Item 3 in Theorem 2 is a truly nonlinear result obtained by applying a power series method introduced in Coron and Crépeau (2004). All other items are implied by perturbation arguments based on the linear control system. The related control system formed by (1) with boundary controls

$$\displaystyle\begin{array}{rcl} & & y(t,0) = h_{1}(t),\quad y_{x}(t,L) = h_{2}(t),\quad \text{ and } \\ & & \quad y_{xx}(t,L) = h_{3}(t), {}\end{array}$$
(5)

is studied in Cerpa et al. (2013), and the same phenomenon of critical lengths appears.

The KS Equation

Applying the same strategy as for the KdV equation, we linearize (2) around the origin to get the equation

$$\displaystyle\begin{array}{rcl} u_{t} + u_{xxxx} +\lambda u_{xx} = 0,& &{}\end{array}$$
(6)

which can be studied on the finite interval [0, 1] under the following four boundary conditions:

$$\displaystyle\begin{array}{rcl} & & u(t,0) = v_{1}(t),\quad u_{x}(t,0) = v_{2}(t), \\ & & \quad u(t,1) = v_{3}(t),\quad \text{ and }\quad u_{x}(t,1) = v_{4}(t).{}\end{array}$$
(7)

Thus, viewing \(v_{1}(t),v_{2}(t),v_{3}(t),v_{4}(t) \in \mathbb{R}\) as controls and the solution \(u(t,\cdot ) : [0,1] \rightarrow \mathbb{R}\) as the state, we can consider the linear control system (6)–(7) and the nonlinear one (2)–(7). The role of the parameter λ is crucial. The KS equation is parabolic, and the eigenvalues of system (6)–(7) with no control (\(v_{1} = v_{2} = v_{3} = v_{4} = 0\)) go to −∞. As λ increases, the eigenvalues move to the right. When λ > 4π², the system becomes unstable because finitely many eigenvalues are positive. In this unstable regime, the system loses control properties for some values of λ.
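The migration of the spectrum with λ can be checked numerically. The sketch below (Python with NumPy; the finite-difference discretization is our own illustrative choice, not taken from the cited works) approximates the operator in (6) with the homogeneous clamped conditions u = uₓ = 0 at both endpoints and computes the largest eigenvalue on both sides of the threshold 4π² ≈ 39.48:

```python
import numpy as np

def ks_max_eigenvalue(lam, n=200):
    """Largest eigenvalue of u -> -u'''' - lam*u'' on (0, 1) with
    clamped conditions u(0) = u(1) = u'(0) = u'(1) = 0, discretized by
    second-order finite differences on n subintervals."""
    h = 1.0 / n
    m = n - 1                     # interior nodes x_i = i*h
    # fourth-derivative stencil (1, -4, 6, -4, 1)/h^4
    D4 = (np.diag(6.0 * np.ones(m))
          + np.diag(-4.0 * np.ones(m - 1), 1) + np.diag(-4.0 * np.ones(m - 1), -1)
          + np.diag(np.ones(m - 2), 2) + np.diag(np.ones(m - 2), -2)) / h**4
    # u'(0) = 0 with u(0) = 0 gives the ghost value u_{-1} = u_1 (same at x = 1)
    D4[0, 0] += 1.0 / h**4
    D4[-1, -1] += 1.0 / h**4
    # second-derivative stencil (1, -2, 1)/h^2
    D2 = (np.diag(-2.0 * np.ones(m))
          + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)) / h**2
    A = -D4 - lam * D2            # generator of (6) with homogeneous BCs
    return np.linalg.eigvalsh(A).max()

print(ks_max_eigenvalue(30.0))    # negative: stable regime, lam < 4*pi^2
print(ks_max_eigenvalue(50.0))    # positive: unstable regime, lam > 4*pi^2
```

The matrix is symmetric, so `eigvalsh` applies; increasing n sharpens the approximation of the crossing at λ = 4π².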

Theorem 3

The linear KS control system (6)–(7) is:

  1. Null-controllable when controlled from \(v_{1}\) and \(v_{2}\) (i.e., \(v_{3} = v_{4} = 0\)). The same is true when controlling from \(v_{3}\) and \(v_{4}\) (i.e., \(v_{1} = v_{2} = 0\)) (Cerpa and Mercado 2011; Lin Guo 2002).

  2. Null-controllable when controlled from \(v_{2}\) (i.e., \(v_{1} = v_{3} = v_{4} = 0\)) if and only if λ does not belong to a countable set M defined in Cerpa (2010).

  3. Asymptotically stable to the origin if λ < 4π² and no control is applied (Liu and Krstic 2001).

  4. Stabilizable by means of a feedback law using \(v_{2}\) only (i.e., \(v_{1} = v_{3} = v_{4} = 0\)) if and only if λ ∉ M (Cerpa 2010).

In the critical case λ ∈ M, the linear system is no longer null-controllable if we control from v2 only (item 2 in Theorem 3). The space of noncontrollable states is finite dimensional. To obtain the null-controllability of the linear system in these cases, we have to add another control. Controlling from v2 and v4 does not improve the situation in the critical cases. In contrast, the system becomes null-controllable if we can act on v1 and v2. This result with two control inputs has been proved in Lin Guo (2002) for the case λ = 0 and in Cerpa and Mercado (2011) in the general case (item 1 in Theorem 3).

It is known from Liu and Krstic (2001) that if λ < 4π², then the system is exponentially stable in L2(0, 1). On the other hand, if λ = 4π², then zero becomes an eigenvalue of the system, and therefore the asymptotic stability fails. When λ > 4π², the system has positive eigenvalues and becomes unstable. In order to stabilize this system, a feedback law based on a finite-dimensional approximation can be designed by using the pole placement method (item 4 in Theorem 3).
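The idea behind item 4 can be illustrated on a toy finite-dimensional model: keep only the unstable modes and move their eigenvalues to the left half plane by pole placement. The sketch below (Python with NumPy) uses the standard Ackermann formula for a single-input system; the pair (A, B) is an arbitrary illustrative example, not derived from the KS system.

```python
import numpy as np

def ackermann(A, B, poles):
    """Single-input pole placement: gain K such that eig(A - B K) = poles
    (Ackermann's formula; requires (A, B) controllable)."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^{n-1} B]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # desired characteristic polynomial evaluated at A
    coeffs = np.poly(poles)       # monic: coeffs[0] = 1
    pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e = np.zeros((1, n))
    e[0, -1] = 1.0                # last row of C^{-1} selects the gain
    return e @ np.linalg.inv(C) @ pA

# toy unstable part: eigenvalues 1 and 2, one scalar input
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[0.0],
              [1.0]])
K = ackermann(A, B, poles=[-1.0, -2.0])
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # poles moved to -2, -1
```

In the PDE setting the same principle is applied to the finitely many unstable eigenvalues, while the remaining infinite-dimensional part is already dissipative.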

The previous results concern the linearized control system. If we add the nonlinearity \(uu_{x}\), we obtain the original KS control system and the following results.

Theorem 4

The KS control system (2)–(7) is:

  1. Locally null-controllable when controlled from \(v_{1}\) and \(v_{2}\) (i.e., \(v_{3} = v_{4} = 0\)). The same is true when controlling from \(v_{3}\) and \(v_{4}\) (i.e., \(v_{1} = v_{2} = 0\)) (Cerpa and Mercado 2011).

  2. Asymptotically stable to the origin if λ < 4π² and no control is applied (Liu and Krstic 2001).

There are fewer results for the nonlinear system than for the linear one. This is due to the fact that the spectral techniques used to study the linear system with only one control input are not robust enough with respect to perturbations to address the nonlinear control system.

Summary and Future Directions

Both the KdV and the KS equations exhibit negative controllability results when a single boundary control input is applied. This is due to the fact that both are higher-order equations, and therefore, when posed on a bounded interval, more than one boundary condition must be imposed at the same point. The KdV equation is exactly controllable when acting from the right and null-controllable when acting from the left. On the other hand, the KS equation, being parabolic like the heat equation, is not exactly controllable but is null-controllable. Most of the results follow from the behavior of the corresponding linear systems, which are very well understood.

For the KdV equation, the main directions to investigate at this moment are the controllability and the stability of the nonlinear equation in critical domains. Among others, questions concerning controllability, the minimal time of control, and decay rates for the stability are open. Regarding the KS equation, there are few results for the nonlinear system with one control input, even when the anti-diffusion parameter is not at a critical value. In the critical cases, the controllability and stability issues are wide open.

In general, for PDEs, there are few results about delay phenomena, output feedback laws, adaptive control, and other classical questions in control theory. The existing results on these topics mainly concern the more popular heat and wave equations. As the KdV and KS equations are one dimensional in space, many mathematical tools are available to tackle those problems. For all that, in our opinion, the KdV and KS equations are excellent candidates for continuing to investigate these control properties in a PDE framework.

Recommended Reading

The book Coron (2007) is a very good reference for the study of the control of PDEs. In Cerpa (2014), a tutorial presentation of the KdV control system is given. Control systems for these PDEs with internal controls are considered in Rosier and Zhang (2009) and the references therein for the KdV equation and in Armaou and Christofides (2000) and Christofides and Armaou (2000) for the KS equation. Control topics such as delay and adaptive control are studied in the framework of PDEs in Krstic (2009) and Smyshlyaev and Krstic (2010), respectively.