## Abstract

Model-order reduction methods tackle the following general approximation problem: find an “easily-computable” but accurate approximation \(\hat {\boldsymbol {h}}\) of some target solution *h*^{⋆}. In order to achieve this goal, standard methodologies combine two main ingredients: (i) a set of partial observations of *h*^{⋆} and (ii) some “simple” prior model on the set of target solutions. The most common prior models encountered in the literature assume that the target solution *h*^{⋆} is “close” to some low-dimensional subspace. Recently, triggered by the work by Binev et al. (SIAM/ASA Journal on Uncertainty Quantification **5**(1), 1–29, 2017), several contributions have shown that refined prior models (based on a *set* of embedded approximation subspaces) may lead to enhanced approximation performance. In this paper, we focus on a particular decoder exploiting such a “multi-space” information and evaluating \(\hat {\boldsymbol {h}}\) as the solution of a constrained optimization problem. To date, no theoretical results have been derived to support the good empirical performance of this decoder. The goal of the present paper is to fill this gap. More specifically, we provide a mathematical characterization of the approximation performance achievable by this variational “multi-space” decoder and emphasize that, in some specific setups, it has provably better recovery guarantees than its standard “single-space” counterpart. We also discuss the similarities and differences between this decoder and the one proposed in Binev et al. (SIAM/ASA Journal on Uncertainty Quantification **5**(1), 1–29, 2017).
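To fix ideas, a minimal numerical sketch of the standard "single-space" decoder mentioned above may be helpful: given linear observations \(y_j = \langle \boldsymbol{w}_j, \boldsymbol{h}^\star\rangle\) of an unknown target, one approximates \(\boldsymbol{h}^\star\) by the element of a low-dimensional subspace minimizing the squared measurement misfit. This is an illustrative toy example only (all dimensions and variable names are ours, not the paper's); the variational "multi-space" decoder studied in the paper additionally constrains the solution with a family of embedded subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 10, 5   # ambient dimension, number of measurements, subspace dimension

h_star = rng.standard_normal(d)        # unknown target solution h*
W = rng.standard_normal((n, d))        # rows are the measurement vectors w_j
y = W @ h_star                         # partial observations y_j = <w_j, h*>

# Orthonormal basis of a k-dimensional approximation subspace V_k
V = np.linalg.qr(rng.standard_normal((d, k)))[0]

# Single-space decoder: minimize sum_j (y_j - <w_j, V c>)^2 over coefficients c
c, *_ = np.linalg.lstsq(W @ V, y, rcond=None)
h_hat = V @ c                          # approximation constrained to lie in V_k

misfit = np.linalg.norm(y - W @ h_hat)
```

The least-squares misfit of `h_hat` is by construction no larger than that of any other element of the subspace, e.g. the zero vector.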


## Notes

1. We remind the reader that we assume *m* = *n*.

## References

1. Argaud, J., Bouriquet, B., Gong, H., Maday, Y., Mula, O.: Stabilization of (G)EIM in presence of measurement noise: application to nuclear reactor physics. In: Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2016, pp. 133–145. Springer (2017)
2. Babuška, I.: Error-bounds for finite element method. Numerische Mathematik **16**, 322–333 (1970/71)
3. Binev, P., Cohen, A., Dahmen, W., DeVore, R., Petrova, G., Wojtaszczyk, P.: Data assimilation in reduced modeling. SIAM/ASA Journal on Uncertainty Quantification **5**(1), 1–29 (2017). https://doi.org/10.1137/15M1025384
4. Chaturantabut, S., Sorensen, D.: Nonlinear model reduction via discrete empirical interpolation. SIAM J. Sci. Comput. **32**(5), 2737–2764 (2010). https://doi.org/10.1137/090766498
5. Everson, R., Sirovich, L.: Karhunen–Loève procedure for gappy data. J. Opt. Soc. Am. A **12**(8), 1657–1664 (1995). https://doi.org/10.1364/josaa.12.001657
6. Fick, L., Maday, Y., Patera, A.T., Taddei, T.: A reduced basis technique for long-time unsteady turbulent flows. ArXiv e-prints (2017)
7. Herzet, C., Diallo, M., Héas, P.: Beyond Petrov–Galerkin projection by using multi-space prior. In: European Conference on Numerical Mathematics and Advanced Applications (ENUMATH 2017), Voss, Norway (2017). https://hal.inria.fr/hal-02173637v1
8. Herzet, C., Diallo, M., Héas, P.: Beyond Petrov–Galerkin projection by using multi-space prior. In: Model Reduction of Parametrized Systems IV (MoRePaS 2018), Nantes, France (2018). https://hal.inria.fr/hal-01937876
9. Maday, Y., Mula, O., Patera, A.T., Yano, M.: The generalized empirical interpolation method: stability theory on Hilbert spaces with an application to the Stokes equation. Computer Methods in Applied Mechanics and Engineering **287**, 310–334 (2015). https://doi.org/10.1016/j.cma.2015.01.018
10. Maday, Y., Mula, O., Turinici, G.: Convergence analysis of the generalized empirical interpolation method. SIAM J. Numer. Anal. **54**(3), 1713–1731 (2016)
11. Quarteroni, A., Manzoni, A., Negri, F.: Reduced Basis Methods for Partial Differential Equations, vol. 92. Springer International Publishing (2016). http://www.springer.com/us/book/9783319154305

## Funding

The authors thank the “Agence nationale de la recherche” for its financial support through the Geronimo project (ANR-13-JS03-0002).


## Additional information


This article belongs to the Topical Collection: *Model Reduction of Parametrized Systems*

Guest Editors: Anthony Nouy, Peter Benner, Mario Ohlberger, Gianluigi Rozza, Karsten Urban and Karen Willcox

Communicated by: Anthony Nouy

## Appendix: Proof of (42)

In this appendix, we show that the cost function \(f({\boldsymbol{h}})\triangleq {\sum}_{j=1}^{n} {\left({y}_{j}- \left\langle {{\boldsymbol w}_{j},{\boldsymbol{h}}}\right\rangle \right)}^{2}\) can be rewritten as in (42) when \({\boldsymbol{h}}\in V_{n}\) and \(\sigma_{n}>0\).^{Footnote 1}

First, using the definition of \(y_{j}\), we have

Moreover, using the particular bases introduced in Section 5.1.1, we obtain

where the first equality follows from the fact that \(\{{\boldsymbol{w}}_{j}\}_{j=1}^{n}\) and \(\{{\boldsymbol{w}}_{j}^{*}\}_{j=1}^{n}\) differ up to an orthogonal transformation; the second is a consequence of (55) and our hypothesis \({\boldsymbol{h}}\in V_{n}\).

Since \({\hat{\boldsymbol{h}}}_{\text{SS}}\) corresponds to the minimum of \(f({\boldsymbol{h}})\) over \(V_{n}\) (see (17)), we simply have

if \(\sigma_{n} > 0\). Hence, under this assumption, (2) can also be rewritten as in (42).
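The first step of the argument above rests on a simple fact: a sum of squared inner products against a family of vectors is unchanged when that family is replaced by an orthogonal transformation of it, since orthogonal maps preserve Euclidean norms. A quick numerical sanity check of this invariance (illustrative only; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 8

W = rng.standard_normal((n, d))                    # rows: vectors w_j
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]   # a random orthogonal matrix
W_star = Q @ W                                     # transformed family {w_j*}

h = rng.standard_normal(d)

s1 = np.sum((W @ h) ** 2)        # sum_j <w_j, h>^2
s2 = np.sum((W_star @ h) ** 2)   # sum_j <w_j*, h>^2  -- equal, since ||Q x|| = ||x||
```

The two sums agree up to floating-point error because `W_star @ h` equals `Q @ (W @ h)` and `Q` is orthogonal.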


## About this article

### Cite this article

Herzet, C., Diallo, M.: Performance guarantees for a variational “multi-space” decoder. *Adv Comput Math* **46**, 10 (2020). https://doi.org/10.1007/s10444-020-09746-6


### Keywords

- Model-order reduction
- Multi-space prior information
- Performance guarantees

### Mathematics Subject Classification (2010)

- 41A99