Global optimization in Hilbert space
Abstract
We propose a complete-search algorithm for solving a class of nonconvex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case runtime bounds for the algorithm under certain regularity conditions of the cost functional and the constraint set. Because these runtime bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an \(\varepsilon \)-suboptimal global solution within finite runtime for any given termination tolerance \(\varepsilon > 0\). Finally, we illustrate these results for a problem of calculus of variations.
Keywords
Infinite-dimensional optimization · Complete search · Branch-and-lift · Convergence analysis · Complexity analysis
Mathematics Subject Classification
49M30 · 65K10 · 90C26 · 93B40
1 Introduction
Infinite-dimensional optimization problems arise in many research fields, including optimal control [7, 8, 24, 54], optimization with partial differential equations (PDE) embedded [22], and shape/topology optimization [5]. In practice, these problems are often solved approximately by applying discretization techniques; the original infinite-dimensional problem is replaced by a finite-dimensional approximation that can then be tackled using standard optimization techniques. However, the resulting discretized optimization problems may comprise a large number of optimization variables, which grows unbounded as the accuracy of the approximation is refined. Unfortunately, worst-case runtime bounds for complete-search algorithms in nonlinear programming (NLP) scale poorly with the number of optimization variables. For instance, the worst-case runtime of spatial branch-and-bound [17, 44] scales exponentially with the number of optimization variables. In contrast, algorithms for solving convex optimization problems in polynomial runtime are known [11, 40], e.g. in linear programming (LP) or convex quadratic programming (QP). While these efficient algorithms enable the solution of very large-scale convex optimization problems, such as structured or sparse problems, in general their worst-case runtime bounds also grow unbounded as the number of decision variables tends to infinity.
Existing theory and algorithms that directly analyze and exploit the infinite-dimensional nature of an optimization problem are mainly found in the field of convex optimization. For the most part, these algorithms rely on duality in convex optimization in order to construct upper and lower bounds on the optimal solution value, although establishing strong duality in infinite-dimensional problems can prove difficult. In this context, infinite-dimensional linear programming problems have been analyzed thoroughly [3]. A variety of algorithms are also available for dealing with convex infinite-dimensional optimization problems, some of which have been analyzed in generic Banach spaces [14], as well as certain tailored algorithms for continuous linear programming [4, 13, 32].
In the field of nonconvex optimization, problems with an infinite number of variables are typically studied in a local neighborhood of a stationary point. For instance, local optimality in continuous-time optimal control problems can be analyzed by using Pontryagin’s maximum principle [46], and a number of local optimal control algorithms are based on this analysis [6, 12, 51, 54]. More generally, approaches in the classical field of variational analysis [37] rely on local analysis concepts, from which information about global extrema may not be derived in general. In fact, nonconvex infinite-dimensional optimization remains an open field of research and, to the best of our knowledge, there currently are no generic complete-search algorithms for solving such problems to global optimality.
This paper asks whether a global optimization algorithm can be constructed whose worst-case runtime complexity is independent of the number of optimization variables, such that this algorithm would remain tractable for infinite-dimensional optimization problems. Clearly, devising such an algorithm may only be possible for a certain class of optimization problems. Interestingly, the fact that the “complexity” or “hardness” of an optimization problem does not necessarily depend on the number of optimization variables has been observed—and it is in fact exploited—in state-of-the-art global optimization solvers for NLP/MINLP, although these observations are still to be analyzed in full detail. For instance, instead of applying a branch-and-bound algorithm in the original space of optimization variables, global NLP/MINLP solvers such as BARON [49, 52] or ANTIGONE [34] proceed by lifting the problem to a higher-dimensional space via the introduction of auxiliary variables from the DAG decomposition of the objective and constraint functions. In a different context, the solution of a lifted problem in a higher-dimensional space has become popular in numerical optimal control, where the so-called multiple-shooting methods often outperform their single-shooting counterparts despite the fact that the former calls for the solution of a larger-scale (discretized) NLP problem [7, 8]. This idea that certain optimization problems become easier to solve than equivalent problems in fewer variables is also central to the work on lifted Newton methods [2]. To the best of our knowledge, such behaviors cannot currently be explained with results from the field of complexity analysis, which typically give monotonically increasing worst-case runtime bounds as the number of optimization variables increases. In this respect, these runtime bounds predict the opposite behavior to what can sometimes be observed in practice.
1.1 Problem formulation
Definition 1
We make the following assumptions regarding the geometry of C throughout the paper.
Assumption 1
Our main objective in this paper is to develop an algorithm that can locate an \(\varepsilon \)-suboptimal global optimum of Problem (1), in finite runtime for any given accuracy \(\varepsilon >0\), provided that F satisfies certain regularity conditions alongside Assumption 1.
Remark 1
 1. the Hilbert space \(H\subseteq B\) is convex and dense in \((B,\Vert \cdot \Vert )\);
 2. the function \(\hat{F}\) is upper semicontinuous in \(\hat{C}\); and
 3. the constraint set \(\hat{C}\) has a nonempty relative interior;
1.2 Outline and contributions
The paper starts by discussing several regularity conditions for sets and functionals defined in a Hilbert space in Sect. 2, based on which complete-search algorithms can be constructed whose runtime is independent of the number of optimization variables. Such an algorithm is presented in Sect. 3 and analyzed in Sect. 4, which together constitute the main contributions and novelty of the paper. A numerical case study is presented in Sect. 5 in order to illustrate the main results, before concluding the paper in Sect. 6.
Although some of these algorithmic ideas are inspired by a recent paper on global optimal control [25], we develop herein a much more general framework for optimization in Hilbert space. In addition, Sect. 4 derives novel worst-case complexity estimates for the proposed algorithm. We argue that these ideas could help lay the foundations for new ways of analyzing the complexity of certain optimization problems based on their structural properties rather than their number of optimization variables. Although the runtime estimates for the proposed algorithm remain conservative, they indicate that complexity in numerical optimization does not necessarily depend on whether the problem at hand is small-scale, large-scale, or even infinite-dimensional.
2 Some regularity conditions for sets and functionals in Hilbert space
Assumption 2
The basis functions \(\varPhi _k\) are uniformly bounded with respect to \(\Vert \cdot \Vert _H\).
Definition 2
Lemma 1
Under Assumption 1, the function \(\overline{D}_C:\mathbb N \rightarrow \mathbb R\) is uniformly bounded from above by \(\gamma \).
Proof
Despite being uniformly bounded, the function \(\overline{D}_C(M)\) may not converge to zero as \(M \rightarrow \infty \) in an infinite-dimensional Hilbert space in general. Such lack of convergence is illustrated in the following example.
Example 1
A principal aim of the following sections is to develop an optimization algorithm, whose convergence to an \(\varepsilon \)-global optimum of Problem (1) can be certified. But instead of making assumptions about the existence, or even the regularity, of the minimizers of Problem (1), we shall impose suitable regularity conditions on the objective function F in (1). In preparation for this analysis, we start by formalizing a particular notion of regularity for the elements of H.
Definition 3
Theorem 1
Proof
The following example establishes the regularity of piecewise smooth functions with a finite number of singularities in the Hilbert space of square-integrable functions with the Legendre polynomials as orthogonal basis functions.
Example 2
We consider the Hilbert space \(H = L_2[0,1]\) of standard square-integrable functions on the interval [0, 1] equipped with the standard inner product, \(\langle f,g\rangle :=\int _0^1 f(s)g(s)\mathrm{d}s\), and we choose the Legendre polynomials on the interval [0, 1] with weighting factors \(\sigma _k = \frac{1}{2k+1}\) as orthogonal basis functions \((\varPhi _k)_{k \in \mathbb N}\). Our focus is on piecewise smooth functions \(g: [0,1] \rightarrow \mathbb R\) with a given finite number of singularities, for which we want to establish regularity in the sense of Definition 3 for a bounded constraint set \(C\subset L_2[0,1]\).
A useful generalization of Definition 3 and a corollary of Theorem 1 are given below.
Definition 4
Corollary 1
Remark 2
Example 2
In the remainder of this section, we analyze and illustrate a regularity condition for the cost functional in Problem (1).
Definition 5
Remark 3
Remark 4
The following two examples investigate strong Lipschitz continuity for certain classes of functionals in the practical space of square-integrable functions with the Legendre polynomials as orthogonal basis functions. The first one (Example 3) illustrates the case of a functional that is not strongly Lipschitz-continuous; the second one (Example 4) identifies a broad class of strongly Lipschitz-continuous functionals defined via the solution of an embedded ODE system. The intention here is to help the reader develop an intuition that strongly Lipschitz-continuous functionals occur naturally in many, although not all, problems of practical relevance.
Example 3
Remark 5
The result that the functional F in Example 3 is not strongly Lipschitz-continuous on C does not contradict Remark 4. Although F is Fréchet differentiable in \(L_2[0,1]\), the corresponding set G of Fréchet derivatives of F is indeed unbounded.
Example 4
Remark 6
We close this section with a brief analysis of the relationship between strong and classical Lipschitz continuity in infinite-dimensional Hilbert spaces.
Lemma 2
Every strongly Lipschitz-continuous functional \(F:H\rightarrow \mathbb R\) on C is also Lipschitz-continuous on C.
Proof
Remark 7
Remark 8
3 Global optimization in Hilbert space using complete search
The application of complete-search strategies to infinite-dimensional optimization problems such as (1) calls for an extension of the (spatial) branch-and-bound principle [23] to general Hilbert space. The approach presented in this section differs from branch-and-bound in that the dimension M of the search space is adjusted, as necessary, during the iterations of the algorithm, by using a so-called lifting operation—hence the name branch-and-lift algorithm. The basic idea is to bracket the optimal solution value of Problem (1) and progressively refine these bounds via this lifting mechanism, combined with traditional branching and fathoming.
Based on the developments in Sect. 2, the following subsections describe methods for exhaustive partitioning in infinite-dimensional Hilbert space (Sect. 3.1) and for computing rigorous upper and lower bounds on given subsets of the variable domain (Sect. 3.2), before presenting the proposed branch-and-lift algorithm (Sect. 3.3).
3.1 Partitioning in infinite-dimensional Hilbert space
Similar to branch-and-bound search, the proposed branch-and-lift algorithm maintains a partition \(\mathscr {A} := \{A_1,\ldots ,A_k\}\) of finite-dimensional sets \(A_1,\ldots ,A_k\). This partition is updated through the repeated application of certain operations, including branching and lifting, in order to close the gap between an upper and a lower bound on the global solution value of the optimization problem (1). The following definition is useful in order to formalize these operations:
Definition 6
Notice that each subregion \(X_M(A)\) is a convex set if the sets C and A are themselves convex. For practical reasons, we restrict ourselves to compact subsets \(A \in \mathbb S^{M+1} \subseteq \mathscr {P}( \mathbb R^{M+1})\) herein, where the class of sets \(\mathbb S^{M+1}\) is easily stored and manipulated by a computer. For example, \(\mathbb S^{M+1}\) could be a set of interval boxes, polytopes, ellipsoids, etc.
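As a concrete illustration of the simplest choice \(\mathbb S^{M+1}\) = interval boxes, the following sketch (our own, hypothetical data structure, not the paper's implementation) represents a subset A as a list of coordinate intervals and implements the two partition-updating operations informally described above: branching (bisecting a box) and lifting (appending a range for the next coefficient).

```python
from dataclasses import dataclass
from typing import List, Tuple

Interval = Tuple[float, float]

@dataclass
class Box:
    """An interval box A in S^{M+1}; its dimension len(bounds) equals M + 1."""
    bounds: List[Interval]

def branch(A: Box) -> Tuple[Box, Box]:
    """Bisect A along its widest coordinate direction."""
    widths = [ub - lb for lb, ub in A.bounds]
    i = widths.index(max(widths))
    lb, ub = A.bounds[i]
    mid = 0.5 * (lb + ub)
    left, right = list(A.bounds), list(A.bounds)
    left[i], right[i] = (lb, mid), (mid, ub)
    return Box(left), Box(right)

def lift(A: Box, new_interval: Interval) -> Box:
    """Lift A from S^{M+1} to S^{M+2} by appending a range for the next
    coefficient a_{M+1} (in practice derived from the bound on C)."""
    return Box(A.bounds + [new_interval])

# One branching step followed by a lift, starting from A0 in S^1
A0 = Box([(-1.0, 1.0)])
A1, A2 = branch(A0)
A1 = lift(A1, (-0.5, 0.5))
print(A1.bounds)  # [(-1.0, 0.0), (-0.5, 0.5)]
```

Polytopes or ellipsoids would support the same two operations; interval boxes merely make the bookkeeping trivial to store and manipulate.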
A number of remarks are in order:
Remark 9
The idea of introducing a lifting operation to enable partitioning in infinite-dimensional function spaces was originally introduced by the authors in a recent publication [25], focusing on the global optimization of optimal control problems. One principal contribution of the present paper is a generalization of these ideas to global optimization in any Hilbert space, by identifying a set of sufficient regularity conditions on the cost functional and constraint set for the resulting branch-and-lift algorithm to converge to an \(\varepsilon \)-global solution in finite runtime.
Remark 10
Many recent techniques for global optimization are based on the theory of positive polynomials and their associated linear matrix inequality (LMI) approximations [30, 45], which are also originally inspired by moment problems. Although these LMI techniques may be applied in the practical implementation of the aforementioned lifting operation, they are not directly related to the branch-and-lift algorithm developed in the following sections. An important motivation for moving away from the generic LMI framework is that the available implementations scale quite poorly with the number of optimization variables, owing to the combinatorial increase in the number of monomials in the associated multivariate polynomials. A direct approximation of the cost functional F with multivariate polynomials would therefore conflict with our primary objective of developing a global optimization algorithm whose worst-case runtime does not depend on the number of optimization variables.
3.2 Strategies for upper and lower bounding of functionals
 1. Compute bounds \(L^0_M(A)\) and \(U^0_M(A)\) on the finite-dimensional approximation of F as
$$\begin{aligned} \forall A \in \mathbb S^{M+1}, \quad L^0_M(A) \; \le \; \inf _{a \in A} \, F \left( \sum _{i=0}^M a_i \varPhi _i \right) \; \le \; U^0_M(A) \, . \end{aligned}$$
(16)
How such bounds can be determined in practice clearly depends on the particular expression of F. In the case that F is factorable, various arithmetics can be used to propagate bounds through a DAG of the function, including interval arithmetic [36], McCormick relaxations [9, 33], and Taylor/Chebyshev model arithmetic [10, 43, 47]. Moreover, if the expression of F embeds a dynamic system described by differential equations, validated bounds can be obtained by using a variety of set-propagation techniques as described, e.g., in [26, 31, 38, 50, 53], or via hierarchies of LMI relaxations as in [21, 29].
 2. Compute a bound \(\varDelta _M(A)\) on the approximation error such that
$$\begin{aligned} \forall A \in \mathbb S^{M+1}, \quad \left| \, \inf _{x \in X_M(A)} \, F(x) \; - \; \inf _{a \in A} \, F \left( \sum _{i=0}^M a_i \varPhi _i \right) \right| \;\le \; \varDelta _M(A) \, . \end{aligned}$$
(17)
In the case that F is strongly Lipschitz-continuous on C, we can always take \(\varDelta _M(A) := L \, \overline{R}_C(M,G)\), where the constant \(L<\infty \) and the bounded regular set G satisfy the condition (7). Naturally, better bounds may be derived by exploiting a particular structure or expression of F.
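The first step above can be illustrated with the simplest of the listed arithmetics, interval arithmetic. The sketch below (our own toy example, not the paper's implementation) bounds the functional \(F(x)=\int_0^1 x(s)^2\,\mathrm{d}s\), which under the shifted-Legendre normalization of Example 2 reduces to \(\sum_k a_k^2/(2k+1)\) on the span of the first basis functions, over a coefficient box A.

```python
def interval_square(lb: float, ub: float):
    """Tight interval enclosure of {t^2 : t in [lb, ub]}."""
    lo = 0.0 if lb <= 0.0 <= ub else min(lb * lb, ub * ub)
    return lo, max(lb * lb, ub * ub)

def bounds_L2_squared(A):
    """Interval bounds L0_M(A), U0_M(A) for the toy functional
    F(sum_k a_k Phi_k) = sum_k a_k^2 / (2k + 1) over a coefficient box A,
    i.e. the squared L2 norm on [0, 1] in the shifted-Legendre basis."""
    L0 = U0 = 0.0
    for k, (lb, ub) in enumerate(A):
        lo, hi = interval_square(lb, ub)
        L0 += lo / (2 * k + 1)
        U0 += hi / (2 * k + 1)
    return L0, U0

A = [(0.5, 1.0), (-0.25, 0.25)]  # a box in S^2, i.e. M = 1
L0, U0 = bounds_L2_squared(A)
print(L0, U0)
```

Because this toy F is separable in the coefficients, the interval bounds here happen to be exact; for a general factorable F one would instead propagate intervals (or McCormick/Taylor models) through its DAG, with some overestimation.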
We state the following assumptions in anticipation of the convergence analysis in Sect. 4.
Assumption 3
The cost functional F in Problem (1) is strongly Lipschitz-continuous on C, with the condition (7) holding for the constant \(L<\infty \) and the bounded regular subset \(G\subset H\).
Remark 11
Remark 12
3.3 Branch-and-lift algorithm
 Regarding initialization, the branch-and-lift iteration starts with \(M = 0\). A possible way of initializing the partition \(\mathscr {A} = \left\{ A_0 \right\} \) is by noting that, under Assumption 1,
$$\begin{aligned} \{ \langle x, \varPhi _0 \rangle \mid \, x \in C \, \} \; \subseteq \; \left[ -\frac{\gamma }{\sigma _0}, \frac{\gamma }{\sigma _0}\right] . \end{aligned}$$
 Besides the branching and lifting operations introduced earlier in Sect. 3.1, fathoming in Step 4 of Algorithm 1 refers to the process of discarding a given set \(A\in \mathscr {A}\) from the partition if
$$\begin{aligned} \quad L_M(A) \, = \, \infty \quad \text {or} \quad \exists \, A' \in \mathscr {A}: \; \; L_M(A) \, > \, U_{M}(A') \, . \end{aligned}$$
 The main idea behind the lifting condition defined in Step 6 of Algorithm 1, namely
$$\begin{aligned} \forall A\in \mathscr {A}, \quad U_{M}(A)-L_{M}(A) \; \le \; 2(1+\rho ) \varDelta _M(A)\,, \end{aligned}$$
(19)
is that a subset A should be lifted to a higher-dimensional space whenever the approximation error \(\varDelta _M(A)\) due to the finite parameterization becomes of the same order of magnitude as the current optimality gap \(U_{M}(A)-L_{M}(A)\). The aim here is to apply as few lifts as possible, since it is preferable to branch in a lower-dimensional space. The convergence of the branch-and-lift algorithm under this lifting condition is examined in Sect. 4 below. Notice also that a lifting operation is applied globally in Algorithm 1—that is, to all parameter subsets in the partition \(\mathscr {A}\)—so all the subsets in \(\mathscr {A}\) share the same parameterization order at any iteration. In a variant of Algorithm 1, the subsets could have different parameterization orders by applying the lifting condition locally instead.
Finally, it will be established in the following section that, upon termination and under certain assumptions, Algorithm 1 returns an \(\varepsilon \)-suboptimal solution of Problem (1). In particular, Assumption 1 rules out the possibility of an infeasible solution.
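The iteration just outlined can be sketched as follows. This is a schematic of the branch-and-lift loop described above (bounding, fathoming, lifting per condition (19), branching), not the paper's Algorithm 1 verbatim; the bound oracles `bounds_fn` and `delta_fn`, the `lift_domain_fn` helper, and the one-dimensional toy problem used to exercise it are all hypothetical stand-ins that a real implementation would supply per Sects. 3.1 and 3.2.

```python
import math

def branch_and_lift(bounds_fn, delta_fn, lift_domain_fn, A0,
                    eps=1e-3, rho=1.0, max_iter=10_000):
    """Schematic branch-and-lift loop over a partition of interval boxes.

    bounds_fn(A)      -> (L_M(A), U_M(A)), valid lower/upper bounds
    delta_fn(A)       -> Delta_M(A), approximation-error bound, cf. (17)
    lift_domain_fn(M) -> interval for the new coefficient a_{M+1}
    A0                -> initial box (a list of intervals) for <x, Phi_0>
    """
    partition = [A0]
    for _ in range(max_iter):
        info = [(A, *bounds_fn(A)) for A in partition]
        UBD = min(U for _, _, U in info)  # incumbent upper bound
        LBD = min(L for _, L, _ in info)  # global lower bound
        if UBD - LBD <= eps:
            return LBD, UBD, partition
        # Fathoming: discard boxes that cannot contain the optimum
        info = [(A, L, U) for A, L, U in info if L < UBD and L < math.inf]
        # Lifting condition (19), applied globally: lift when the
        # parameterization error dominates the gap on every box
        if all(U - L <= 2.0 * (1.0 + rho) * delta_fn(A) for A, L, U in info):
            M = len(info[0][0]) - 1
            partition = [A + [lift_domain_fn(M)] for A, _, _ in info]
            continue
        # Branching: bisect the widest direction of the least-lower-bound box
        info.sort(key=lambda t: t[1])
        A, _, _ = info[0]
        widths = [ub - lb for lb, ub in A]
        i = widths.index(max(widths))
        lb, ub = A[i]
        mid = 0.5 * (lb + ub)
        left, right = list(A), list(A)
        left[i], right[i] = (lb, mid), (mid, ub)
        partition = [left, right] + [B for B, _, _ in info[1:]]
    raise RuntimeError("iteration limit reached")

# Toy exercise: minimize (a0 - 0.3)^2 over a0 in [-1, 1]; exact bounds and
# a zero approximation error, so no lifting is ever triggered
f = lambda t: (t - 0.3) ** 2
def toy_bounds(A):
    lb, ub = A[0]
    L = 0.0 if lb <= 0.3 <= ub else min(f(lb), f(ub))
    return L, f(0.5 * (lb + ub))  # any feasible point gives an upper bound
LBD, UBD, _ = branch_and_lift(toy_bounds, lambda A: 0.0,
                              lambda M: (-1.0, 1.0), [(-1.0, 1.0)], eps=1e-4)
print(LBD, UBD)
```

In this toy run the gap closes purely by branching and fathoming; in the infinite-dimensional setting it is the interplay of condition (19) with the decay of \(\varDelta _M(A)\) that keeps the number of lifts finite, as analyzed in Sect. 4.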
4 Convergence analysis of branch-and-lift
This section investigates the convergence properties of the branch-and-lift algorithm (Algorithm 1) developed previously. It is convenient to introduce the following notation in order to conduct the analysis:
Definition 7
The following result is a direct consequence of the lifting condition (19) in the branch-and-lift algorithm:
Lemma 3
Proof
Besides the finiteness of the number of lifting operations, the convergence of Algorithm 1 can be established if the elements of a partition can be made arbitrarily small after applying a finite number of subdivisions.
Definition 8
The following theorem provides the main convergence result for the proposed branch-and-lift algorithm.
Theorem 2
Proof
Remark 13
- the bound \(\gamma \) on the constraint set C;
- the Lipschitz constants K and L of the cost functional F;
- the uniform bound \(\sup _k \Vert \varPhi _k \Vert _H\) and the scaling factors \(\sigma _k\) of the chosen orthogonal functions \(\varPhi _k\); and
- the lifting parameter \(\rho \) and the termination tolerance \(\varepsilon \) in Algorithm 1.
Example 5
5 Numerical case study
6 Conclusions
This paper has presented a complete-search algorithm, called branch-and-lift, for the global optimization of problems with a nonconvex cost functional and a bounded and convex constraint set defined on a Hilbert space. A key contribution is the determination of runtime complexity bounds for branch-and-lift that are independent of the number of variables in the optimization problem, provided that the cost functional is strongly Lipschitz-continuous with respect to a regular and bounded subset of that Hilbert space. The corresponding convergence conditions are satisfied for a large class of practically relevant problems in calculus of variations and optimal control. In particular, the complexity analysis in this paper implies that branch-and-lift can be applied to solve potentially nonconvex and infinite-dimensional optimization problems without needing a priori knowledge about the existence or regularity of minimizers, as the runtime bounds solely depend on the structural and regularity properties of the cost functional as well as the underlying Hilbert space and the geometry of the constraint set. This could pave the way for a new complexity analysis of optimization problems, whereby the “complexity” or “hardness” of a problem does not necessarily depend on its number of optimization variables. In order to demonstrate that these algorithmic ideas and complexity analysis are not of pure theoretical interest, the practical applicability of branch-and-lift has been illustrated with a numerical case study for a problem of calculus of variations. The case study of an optimal control problem in [25] provides another illustration.
Acknowledgements
This paper is based upon work supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/J006572/1, the National Natural Science Foundation of China (NSFC) under Grant 61473185, and ShanghaiTech University under Grant F020314012. Financial support from Marie Curie Career Integration Grant PCIG09GA2011293953 and from the Centre of Process Systems Engineering (CPSE) of Imperial College is gratefully acknowledged. The authors would like to thank Co-Editor Dr. Sven Leyffer for his constructive comments about minimality of assumptions for the convergence of branch-and-lift.
References
 1. Akhiezer, N.I.: The Classical Moment Problem and Some Related Questions in Analysis. Translated by N. Kemmer. Hafner Publishing Co., New York (1965)
 2. Albersmeyer, J., Diehl, M.: The lifted Newton method and its application in optimization. SIAM J. Optim. 20(3), 1655–1684 (2010)
 3. Anderson, E.J., Nash, P.: Linear Programming in Infinite-Dimensional Spaces. Wiley, Hoboken (1987)
 4. Bampou, D., Kuhn, D.: Polynomial approximations for continuous linear programs. SIAM J. Optim. 22(2), 628–648 (2012)
 5. Bendsøe, M.P., Sigmund, O.: Topology Optimization: Theory, Methods, and Applications. Springer, Berlin (2004)
 6. Betts, J.T.: Practical Methods for Optimal Control Using Nonlinear Programming. Advances in Design and Control Series, 2nd edn. SIAM, Philadelphia (2010)
 7. Biegler, L.T.: Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comput. Chem. Eng. 8, 243–248 (1984)
 8. Bock, H.G., Plitt, K.J.: A multiple shooting algorithm for direct solution of optimal control problems. In: Proceedings 9th IFAC World Congress Budapest, pp. 243–247. Pergamon Press, Oxford (1984)
 9. Bompadre, A., Mitsos, A.: Convergence rate of McCormick relaxations. J. Glob. Optim. 52(1), 1–28 (2012)
 10. Bompadre, A., Mitsos, A., Chachuat, B.: Convergence analysis of Taylor and McCormick-Taylor models. J. Glob. Optim. 57(1), 75–114 (2013)
 11. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
 12. Bryson, A.E., Ho, Y.: Applied Optimal Control. Hemisphere, Washington (1975)
 13. Buie, R., Abrham, J.: Numerical solutions to continuous linear programming problems. Z. Oper. Res. 17(3), 107–117 (1973)
 14. Devolder, O., Glineur, F., Nesterov, Y.: Solving infinite-dimensional optimization problems by polynomial approximation. In: Diehl, M., Glineur, F., Jarlebring, E., Michiels, W. (eds.) Recent Advances in Optimization and its Applications in Engineering, pp. 31–40. Springer, Berlin, Heidelberg (2010)
 15. Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer, Berlin (1987)
 16. Driscoll, T.A., Hale, N., Trefethen, L.N.: Chebfun Guide. Pafnuty Publications, Oxford (2014)
 17. Floudas, C.A.: Deterministic Global Optimization: Theory, Methods, and Applications. Kluwer, Dordrecht (1999)
 18. Goemans, M.X., Williamson, D.P.: Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42(6), 1115–1145 (1995)
 19. Gottlieb, D., Shu, C.W.: On the Gibbs phenomenon and its resolution. SIAM Rev. 39(4), 644–668 (1997)
 20. Henrion, D., Tarbouriech, S., Arzelier, D.: LMI approximations for the radius of the intersection of ellipsoids: a survey. J. Optim. Theory Appl. 108(1), 1–28 (2001)
 21. Henrion, D., Korda, M.: Convex computation of the region of attraction of polynomial control systems. IEEE Trans. Autom. Control 59(2), 297–312 (2014)
 22. Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE Constraints. Springer, Berlin (2009)
 23. Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches, 3rd edn. Springer, Berlin (1996)
 24. Houska, B., Ferreau, H.J., Diehl, M.: ACADO toolkit: an open-source framework for automatic control and dynamic optimization. Optim. Control Appl. Methods 32, 298–312 (2011)
 25. Houska, B., Chachuat, B.: Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control. J. Optim. Theory Appl. 162(1), 208–248 (2014)
 26. Houska, B., Villanueva, M.E., Chachuat, B.: Stable set-valued integration of nonlinear dynamic systems using affine set parameterizations. SIAM J. Numer. Anal. 53(5), 2307–2328 (2015)
 27. Jackson, D.: The Theory of Approximation, vol. XI. AMS Colloquium Publication, New York (1930)
 28. Katznelson, Y.: An Introduction to Harmonic Analysis, 2nd edn. Dover Publications, New York (1976)
 29. Korda, M., Henrion, D., Jones, C.N.: Convex computation of the maximum controlled invariant set for polynomial control systems. SIAM J. Control Optim. 52(5), 2944–2969 (2014)
 30. Lasserre, J.B.: Moments, Positive Polynomials and Their Applications. Imperial College Press, London (2009)
 31. Lin, Y., Stadtherr, M.A.: Validated solutions of initial value problems for parametric ODEs. Appl. Numer. Math. 57(10), 1145–1162 (2007)
 32. Luo, X., Bertsimas, D.: A new algorithm for state-constrained separated continuous linear programs. SIAM J. Control Optim. 37, 177–210 (1998)
 33. McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Math. Program. 10, 147–175 (1976)
 34. Misener, R., Floudas, C.A.: ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. J. Glob. Optim. 59(2–3), 503–526 (2014)
 35. Mitsos, A., Chachuat, B., Barton, P.I.: McCormick-based relaxations of algorithms. SIAM J. Optim. 20, 573–601 (2009)
 36. Moore, R.E.: Methods and Applications of Interval Analysis. SIAM Studies in Applied Mathematics. SIAM, Philadelphia (1979)
 37. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)
 38. Neher, M., Jackson, K.R., Nedialkov, N.S.: On Taylor model based integration of ODEs. SIAM J. Numer. Anal. 45, 236–262 (2007)
 39. Nemirovski, A., Roos, C., Terlaky, T.: On maximization of quadratic form over intersection of ellipsoids with common center. Math. Program. 86(3), 463–473 (1999)
 40. Nesterov, Y., Nemirovskii, A.: Interior-Point Polynomial Methods in Convex Programming. SIAM, Philadelphia (1994)
 41. Nesterov, Y.: Semidefinite relaxation and nonconvex quadratic optimization. Optim. Methods Softw. 12, 1–20 (1997)
 42. Nesterov, Y.: Squared functional systems and optimization problems. In: Frenk, H., Roos, K., Terlaky, T., Zhang, S. (eds.) High Performance Optimization, pp. 405–440. Kluwer Academic Publishers, Dordrecht (2000)
 43. Neumaier, A.: Taylor forms: use and limits. Reliab. Comput. 9(1), 43–79 (2002)
 44. Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta Numer. 13, 271–369 (2004)
 45. Parrilo, P.A.: Polynomial games and sum of squares optimization. In: Proceedings of the 45th IEEE Conference on Decision & Control, pp. 2855–2860. San Diego, CA (2006)
 46. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)
 47. Rajyaguru, J., Villanueva, M.E., Houska, B., Chachuat, B.: Chebyshev model arithmetic for factorable functions. J. Glob. Optim. 68(2), 413–438 (2017)
 48. Saff, E.B., Totik, V.: Polynomial approximation of piecewise analytic functions. J. Lond. Math. Soc. 39(2), 487–498 (1989)
 49. Sahinidis, N.V.: BARON: a general purpose global optimization software package. J. Glob. Optim. 8(2), 201–205 (1996)
 50. Scott, J.K., Chachuat, B., Barton, P.I.: Nonlinear convex and concave relaxations for the solutions of parametric ODEs. Optim. Control Appl. Methods 34(2), 145–163 (2013)
 51. von Stryk, O., Bulirsch, R.: Direct and indirect methods for trajectory optimization. Ann. Oper. Res. 37, 357–373 (1992)
 52. Tawarmalani, M., Sahinidis, N.V.: A polyhedral branch-and-cut approach to global optimization. Math. Program. 103(2), 225–249 (2005)
 53. Villanueva, M.E., Houska, B., Chachuat, B.: Unified framework for the propagation of continuous-time enclosures for parametric nonlinear ODEs. J. Glob. Optim. 62(3), 575–613 (2015)
 54. Vinter, R.: Optimal Control. Springer, Berlin (2010)
 55. Wang, H., Xiang, S.: On the convergence rates of Legendre approximation. Math. Comput. 81(278), 861–877 (2012)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.