# An Example of a Reproducing Kernel Hilbert Space


## Abstract

We formulate and prove a generalization of the isoperimetric inequality in the plane. Using this inequality we construct a unitary space—and in consequence an isomorphic copy of a separable infinite-dimensional Hilbert (Sobolev) space—which turns out to be a reproducing kernel Hilbert space.

## Keywords

Isoperimetric inequality · Hilbert spaces · Reproducing kernels · Brunn–Minkowski inequality

## Mathematics Subject Classification

46E22

## 1 Introduction

Suppose that a set \(U\subset \mathbb R^2\) has a well-defined perimeter—say *o*(*U*)—and a plane Lebesgue measure—say *m*(*U*). The classical isoperimetric inequality asserts that the following inequality holds: \(o^{2}(U)\ge 4\pi m(U)\), with equality if and only if *U* is a disk.

The assumption *to have a well-defined perimeter* is very restrictive. However, convex and compact sets have a perimeter and are measurable, and the class of such sets is sufficiently large from the point of view of applications. Moreover, this class is closed with respect to the standard algebraic operations (*Minkowski addition* and scalar multiplication). This makes it possible to construct in a unique way a vector space—which will be denoted by \(X_{\mathcal {S}}\)—containing all (in our case centrally symmetric) convex and compact sets. We will extend the definitions of the perimeter *o* and the measure *m* onto \(X_{\mathcal {S}}\), and we will prove that for these extended operations, and for each vector \(x\in X_{\mathcal {S}}\), the *generalized isoperimetric inequality* holds: \(o^{2}(x)\ge 4\pi m(x)\).
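The classical inequality is easy to test numerically. The following sketch (an illustration added here, not part of the paper's construction) computes the isoperimetric deficit \(o^{2}(U)-4\pi m(U)\) for polygons given by their vertex lists:

```python
import math

def perimeter(pts):
    """Perimeter of a polygon with vertices listed in order."""
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)])
               for i in range(len(pts)))

def area(pts):
    """Area of the polygon via the shoelace formula."""
    return abs(sum(pts[i][0] * pts[(i + 1) % len(pts)][1]
                   - pts[(i + 1) % len(pts)][0] * pts[i][1]
                   for i in range(len(pts)))) / 2

def isoperimetric_deficit(pts):
    """o^2(U) - 4*pi*m(U); non-negative, small only for nearly round sets."""
    return perimeter(pts) ** 2 - 4 * math.pi * area(pts)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(isoperimetric_deficit(square))          # 16 - 4*pi, about 3.43

# a regular 256-gon inscribed in the unit circle: the deficit is almost 0
ngon = [(math.cos(2 * math.pi * k / 256), math.sin(2 * math.pi * k / 256))
        for k in range(256)]
print(isoperimetric_deficit(ngon))            # close to 0
```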

## 2 Part I

### 2.1 A Cone of Norms in \(\mathbb R^2\)

Let \(\mathcal {S}\) denote the family of all subsets of \(\mathbb R^2\), which are non-empty, compact and centrally symmetric. In \(\mathcal {S}\) we consider the so-called Minkowski addition and the multiplication by positive scalars. We recall below the definitions by the Formulas (3) and (4).

If a set *V* has non-empty interior, then there is in \(\mathbb R^2\) a norm \(||\cdot ||_{V}\) such that *V* is the unit ball for this norm. However, we also admit in \(\mathcal {S}\) the sets with empty interiors, i.e. the sets of the form \(I(\mathbf{{v}},d)=[-d,d]\cdot \mathbf{{v}}\). Such a set will be called *a diangle*.
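Minkowski addition of convex polygons can be sketched in code: the sum is the convex hull of all pairwise vertex sums (a plain-Python illustration under the assumption that both summands are given by their vertex sets; the function names are ours):

```python
from itertools import product

def convex_hull(pts):
    """Andrew's monotone chain; returns the hull vertices."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for chain in (pts, pts[::-1]):
        part = []
        for p in chain:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull += part[:-1]
    return hull

def minkowski_sum(U, V):
    """U + V = conv{u + v : u in U, v in V} for convex polygons U, V."""
    return convex_hull([(u[0] + v[0], u[1] + v[1]) for u, v in product(U, V)])

# the sum of two non-parallel diangles is a parallelogram
I1 = [(-1, 0), (1, 0)]
I2 = [(0, -1), (0, 1)]
print(sorted(minkowski_sum(I1, I2)))   # the four vertices (+-1, +-1)
```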

The Minkowski addition in \(\mathcal {S}\) satisfies the *cancellation law* [6], i.e. the following property: if \(U+W=V+W\), then \(U=V\). This property makes it possible to apply *the Radström embedding theorem*:

### Proposition 1

In what follows we will write *U* instead of \(i(U)\), and we will also write \(\mathcal {S}\) instead of \(i(\mathcal {S})\).

The construction described in Proposition 1 can be realized not only in \(\mathbb R^2\), but also in the case of general Banach spaces, or even locally convex spaces, and without the assumption of central symmetry. The details are to be found in many papers, for example in [2].

In particular, each polygon *W* can be written in the form:

### 2.2 Width Functionals

Now we define the so-called *width functionals* on \(\mathcal {S}\). Suppose that \(f:\mathbb R^{2}\longrightarrow \mathbb R\) is a linear functional on \(\mathbb R^2 \). For \(V\in \mathcal {S}\) we set \(\overline{V}(f)=\sup _{x\in V}f(x)-\inf _{x\in V}f(x)\). This quantity may be interpreted as *the width of a set* with respect to a given direction.

Let \(V\in \mathcal {S}\) be a set and let *k* be a straight line containing the origin.

### Definition 2

The width of *V* with respect to *k*—denoted by \(\overline{V}(k)\)—is the lower bound of all numbers \(\varrho (k_1,k_2)\) where \(k_1\) and \(k_2\) are the straight lines parallel to *k*, *V* lies between \(k_1\) and \(k_2\) and \(\varrho (k_1,k_2)\) is the distance of lines \(k_1\) and \(k_2\).
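Definition 2 can be implemented directly for polygons: the width with respect to the line through the origin with direction \((\cos \theta ,\sin \theta )\) is the range of the projections of the vertices onto the unit normal of that line (an illustrative sketch, not taken from the paper):

```python
import math

def width(pts, theta):
    """Width of the polygon with vertex set pts with respect to the straight
    line through the origin with direction (cos theta, sin theta)."""
    nx, ny = -math.sin(theta), math.cos(theta)   # unit normal of the line
    proj = [x * nx + y * ny for (x, y) in pts]
    return max(proj) - min(proj)                 # distance of the supporting
                                                 # lines k1 and k2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(width(square, 0))                # 1.0
print(width(square, math.pi / 4))      # sqrt(2), the diagonal direction
```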

For each straight line *k* containing the origin there exists a unique (up to the sign) linear functional \(f_k\) such that \(||f_k|| = 1\) and *k* is the kernel of \(f_k\). Clearly we have \(\overline{V}(k)=\overline{V}(f_k)\). Each diangle *I* determines the unique straight line *k* such that \(I\subset k\), and hence we may speak of the width of the set *V* with respect to the diangle *I*, which will be denoted by \(\overline{V}(I)\).

### 2.3 Formulation of the Main Problem

Our basic tool will be the so-called *Steiner formula*. Namely, for \(U\in {\mathcal {S}}\), for \(B=B(0,1)\) (where *B* is the unit disc) and for \(\lambda \in [0,\infty )\) the Steiner formula says that \(m(U+\lambda B)= m(U)+\lambda \, o(U)+\pi \lambda ^{2}\), where *o*(*U*) is the perimeter of the convex set *U*. Since, as we have observed above, the cone \({\mathcal {S}}\) is a subcone of the covering space \(X_{\mathcal {S}}\), it is natural to expect that \(m|_{\mathcal {S}}\) is a restriction of some “good” function \(m^{*}:X_{\mathcal {S}}\longrightarrow \mathbb R \). This is true. Namely we will prove that:

### Theorem 3

With the notation as above, there exists a unique polynomial of degree two, \(m^{*}:X_{\mathcal {S}}\longrightarrow \mathbb R\), such that \(m^{*}|_{\mathcal {S}} = m|_{\mathcal {S}}\).

The proof of this theorem is perhaps not too difficult, but it is rather long, since we must construct some unknown object, namely the polynomial \(m^{*}\). The crucial point is the proof of the fact, that the necessary formula for \(m^{*}\) does not depend on the choice of the representatives. We will do the proof in a number of steps.
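The Steiner formula recalled above can be checked numerically for the unit square, where \(m(U)=1\) and \(o(U)=4\); the Monte-Carlo estimate below (an illustration only; the sample size and seed are arbitrary choices of ours) agrees with \(m(U)+\lambda \, o(U)+\pi \lambda ^{2}\):

```python
import math, random

def area_of_offset(lam, n=200_000, seed=0):
    """Monte-Carlo estimate of m(U + lam*B) for U = [0,1]^2: sample the
    bounding box and count the points within distance lam of the square."""
    rng = random.Random(seed)
    lo, hi = -lam, 1 + lam
    box_area = (hi - lo) ** 2
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        dx = max(0.0, -x, x - 1)     # distance from x to the interval [0,1]
        dy = max(0.0, -y, y - 1)
        if dx * dx + dy * dy <= lam * lam:
            hits += 1
    return box_area * hits / n

lam = 0.5
steiner = 1 + 4 * lam + math.pi * lam ** 2   # m(U) + lam*o(U) + pi*lam^2
print(area_of_offset(lam), steiner)          # the two numbers nearly agree
```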

### 2.4 Construction of the Extension of *m* onto \(X_{\mathcal {S}}\)

### Proposition 4

- (a)
For all \(U,V,W \in \mathcal {S}\) we have: \(\Psi (U+W,V+W)=\Psi (U,V);\)

- (b)
For all \(A_1,A_2,B\in \mathcal {S}\) we have: \({\mathcal {M}}(A_1+A_2,B)={\mathcal {M}}(A_1,B)+{\mathcal {M}}(A_2,B);\)

- (c)
For all \(U,V,P,Q \in \mathcal {S}\) we have: \((U,V)\diamond (P,Q)\Longrightarrow \Psi (U,V)=\Psi (P,Q).\)

### Proof

Since \((U+W,V+W)\diamond (U,V)\), condition (c) implies condition (a). To prove the implication (a) \(\Longrightarrow \) (b), let us fix sets \(A_1,A_2,B\in \mathcal {S}\).

Before proving (b) \(\Longrightarrow \) (c) we shall observe some properties of the function \(\mathcal {M}\). The condition (b) means that \(\mathcal {M}\) is additive with respect to the first variable. But \(\mathcal {M}\) is obviously symmetric, so \(\mathcal {M}\) is additive with respect to both variables separately.

Let us fix two sets *A* and *B* from \(\mathcal {S}\). We shall prove that for each \(\lambda \ge 0\) there is \({\mathcal {M}}(\lambda A,B)=\lambda {\mathcal {M}}(A,B)\). The function \(u(t)={\mathcal {M}}(tA,B)\) satisfies the so-called *Cauchy functional equation*, i.e. \(u(s+t)=u(s)+u(t)\). Moreover, *u* is locally bounded, since if \(\alpha = diam(A)\), \(\beta = diam(B)\), then the set \(tA+B\) is contained in the ball with the radius \(t\alpha +\beta \). It follows from the known theorem on the Cauchy equation that \(u(t)=tu(1)\). This means that \(\mathcal {M}\) is positively homogeneous.
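For completeness, the standard argument behind the quoted theorem on the Cauchy equation can be sketched as follows (classical material, not reconstructed from the lost display):

```latex
u(nt)=n\,u(t)\ \ (n\in\mathbb N)
\quad\Longrightarrow\quad
u\!\left(\tfrac{m}{n}\,t\right)=\tfrac{m}{n}\,u(t)\ \ (m,n\in\mathbb N),
```

so \(u(q)=q\,u(1)\) for every non-negative rational \(q\); local boundedness excludes the pathological solutions, and hence \(u(t)=t\,u(1)\) for all \(t\ge 0\).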

Now we shall prove the implication (b) \(\Longrightarrow \) (c). Let us fix two pairs (*U*, *V*) and (*X*, *Y*) such that \((U,V)\diamond (X,Y)\). We will check that \(\Psi (U,V)=\Psi (X,Y)\).

The relation \((U,V)\diamond (X,Y)\) means that \(U+Y=X+V\). Adding *Y* to both sides we obtain \(X+Y+V = 2Y+U\). Now we add *V* to both sides of the last equality and we obtain \(X+Y+2V=2Y+U+V\). Thus \(m(X+Y+2V)=m(U+V+2Y)\). Now we use the definition of the function \(\mathcal {M}\), and by symmetry (replacing *Y* by *X* and *V* by *U*) we obtain the equality:

To complete the proof of Proposition 4 we shall prove that the condition (a) from this Proposition is true. More exactly, we shall prove the following:

### Proposition 5

### Proof

First we will check that the equality (24) is true for the sets *U* having the form \(U=\sum _{1}^{k}I_j\), i.e. for the polygons from the cone \(\mathcal {W}\). We apply induction with respect to *k*, i.e. the number of diangles representing *U*.

For \(k=1\) (the case when the direction of the summand is the same as the direction of the diangle *I*), we have \(L= 2m(X)+2m(Y)-m(X+Y)= \Psi (X,Y)\), which ends the proof for \(k=1\). Let us notice that we used the *Cavalieri principle*. More exactly, the equality \(m(X+I)= m(X)+2\cdot \overline{X}\cdot d\) and similar equalities result from the Cavalieri principle.

Suppose now that the equality holds for all *X*, *Y* and for every set \(U'\) such that \(U'=\sum _{1}^{k-1} d_j\cdot I_j\). Fix now some sets *X* and *Y* and a set \(U=\sum _{1}^{k} d_j\cdot I_j\). Let us denote \(X'= X+\sum _{1}^{k-1} d_j\cdot I_j\), \(Y'=Y+\sum _{1}^{k-1} d_j\cdot I_j\) and \(I(\mathbf{{v}}_k,d_k)= I(\mathbf{{v}},d)=I\). Then

For arbitrary *X*, *Y*, *U* we construct a sequence of polygons \(U_k\) convergent to the set *U* in the sense of the Hausdorff distance. Since the measure *m* (in \(\mathcal {S}\)) is continuous with respect to the Hausdorff distance, it is sufficient to pass to the limit in the sequence of equalities:

### 2.5 A Bilinear Form

### Proposition 6

### Proof

First we shall establish the relations between the function \(\mathcal {M}\) defined above by (18) and the function \(\widetilde{M}\) defined by (26).

\(\square \)

### Proposition 7

### Proof

Hence Proposition 6 is proved. It means that the function \(\widetilde{M}\) is well defined as a function on the vector space \(X_{\mathcal {S}}\).

It follows from Formulas (27) that \(\widetilde{M}\) is additive with respect to each variable separately. Moreover, as we have observed earlier, the function \({\mathcal {M}}\) is homogeneous for nonnegative scalars. Then, using (27), we easily check that \(\widetilde{M}\) is homogeneous also for negative reals. In consequence we may state that \(\widetilde{M}\) is a bilinear form on \(X_{\mathcal {S}}\).
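The passage between the quadratic function \(m^{*}\) and the bilinear form \(\widetilde M\) is the standard polarization identity; in the present notation it reads (a standard fact, stated here for the reader's convenience):

```latex
\widetilde{M}(x,y)=\tfrac{1}{2}\bigl(m^{*}(x+y)-m^{*}(x)-m^{*}(y)\bigr),
\qquad m^{*}(x)=\widetilde{M}(x,x).
```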

Recall that if *w* is a homogeneous polynomial, then:

## 3 Part II

**A Generalization of the Isoperimetric Inequality.**

Now, when we have the polynomial \(m^{*}\) and the bilinear form \(\widetilde{M}\), we may consider the problem of generalizing various properties of the classical measure *m* to the measure \(m^{*}\). In the sequel we will use for \(m^{*}\) the name *measure* and we will write *m* instead of \(m^{*}\).

The sets *U* and *V* are, in particular, convex and closed, so they have a perimeter. Let us denote the perimeter of the set \(U\in {\mathcal {S}}\) by *o*(*U*). It is known that the correspondence \(U\longrightarrow o(U)\) is linear with respect to the Minkowski operations and that \(o(B)=2\pi \), where *B* is a unit disk. Since *m* is a homogeneous polynomial of the second degree, generated by the form \(\widetilde{M}\) (bilinear and symmetric), it is easy to check that:

### Proposition 8

### Proof

Now we are ready to formulate the following generalization of the isoperimetric inequality:

### Theorem 9

(as agreed above, we write *m* instead of \(m^{*}\)).

Let us observe that (34) agrees with (2). Let us observe also that this is in fact a generalization of the classical isoperimetric inequality, since when *V* is trivial (\(V = \left\{ \theta \right\} \)) then (34) gives (29).

The proof of Theorem 9 is not quite trivial, since now the right-hand side of (34) can be negative. In other words, inequality (34) does not remain true when one replaces \(m([U,V])\) by its absolute value (i.e. by \(|m([U,V])|\)). We will need a number of lemmas.

### Lemma 10

### Proof

We apply induction with respect to the number *k* of diangles in the representation of *V*. For \(k=1\), using the Cavalieri principle, we have:

Suppose now that the formula is true for each 2*k*-angle \(V=\sum _{j=1}^{k}I_j\). Let us fix an arbitrary \(U\in \mathcal {S}\) and a \(2(k+1)\)-angle \(V=\sum _{j=1}^{k+1}I_j\). We have:

Now we apply the formula for *k* (inductive assumption) and the additivity of the width of a set [Formula (15)] with respect to the addition in \(\mathcal {S}\), and we obtain that the above equals:

### 3.1 Polygons in Singular Position

As we observed above each polygon \(W\in {\mathcal {W}}\subset \mathcal {S}\) is a Minkowski sum of a number of diangles. Each diangle *I* has a form \(I=I(\mathbf{{v}},d)=[-d,d]\cdot \mathbf{{v}}\). The direction of the vector \(\mathbf{{v}}\) will be called the direction of the diangle *I*. We will say that two diangles are parallel, when they have the same direction.

### Definition 11

Let us consider two polygons from \(\mathcal {W}\) and let us denote them \(U=\sum _{i=1}^{n}J_i\) and \(V=\sum _{j=1}^{k}I_j\). We will say that *U* and *V* are in *singular position*, when there exists \(i_0\le n\) and \(j_0\le k\), such that the diangles \(J_{i_0}\) and \(I_{j_0}\) are parallel.

In other words the singular position means that at least one side of the polygon *U* is parallel to a side of the polygon *V*.
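The singularity condition from Definition 11 is a finite check on the direction vectors of the diangle summands; a minimal sketch (the function names are ours, not the paper's):

```python
def is_parallel(v, w, tol=1e-12):
    """Two directions are parallel iff their 2x2 determinant vanishes."""
    return abs(v[0] * w[1] - v[1] * w[0]) <= tol

def in_singular_position(U_dirs, V_dirs):
    """U and V, given by the direction vectors of their diangle summands,
    are in singular position iff some pair of summands is parallel."""
    return any(is_parallel(v, w) for v in U_dirs for w in V_dirs)

print(in_singular_position([(1, 0), (0, 1)], [(2, 0)]))   # True
print(in_singular_position([(1, 0)], [(0, 1)]))           # False
```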

Next we introduce *a rotation function*. For a given angle \(\varphi \in [0,\pi )\) we denote by \(O(\varphi )\) the rotation of the plane \(\mathbb R^2\) determined by the angle \(\varphi \). The image of the set \(W\in \mathcal {S}\) under \(O(\varphi )\), i.e. the set \(O(\varphi )(W)\), will be denoted by \(W^{\varphi }\). Since the rotations are linear, they preserve the Minkowski sums. In particular, if \(V=\sum _{j=1}^{k}I_j\), then \(V^{\varphi }=\sum _{j=1}^{k}I_j^{\varphi }\).

We now define the following function (called *a rotation function*), for \(\varphi \in [0,\pi ]\):

Since *V* is centrally symmetric, we have \(V^{\varphi +\pi } = V^{\varphi }\). It follows from this equality that the function *E* can be considered, if necessary, as a periodic function on the whole of \(\mathbb R\). It is also clear that for fixed *U* and *V* the function *E* is continuous.

When the polygon \(V^{\varphi }\) changes its position with \(\varphi \) and the polygon *U* remains fixed, then, in general for more than one \(\varphi \) the polygons *U* and \(V^{\varphi }\) are in singular position.

We shall prove the following lemma.

### Lemma 12

If \({\varphi }_0\in [0,\pi ]\) is a point, in which the function *E* attains its minimum, then the pair \((U,V^{\varphi _0})\) is in singular position.

Since the measure *m* is invariant under rotations and since \(dI^{\varphi }\) does not depend on \(\varphi \), we conclude that the inequality \(E(\varphi _0)\le E(\varphi )\) is equivalent to the inequality:

Thus the function *F* attains its absolute minimum at the same point as the function *E*, i.e. at \(\varphi _0\). Hence it is sufficient to show an equivalent form of Lemma 12. Namely:

### Proposition 13

If *F* attains its absolute minimum at \(\varphi _0\), then the pair \((U,V^{\varphi _0})\) is in singular position.

To prove Proposition 13 we will need some observations concerning the so-called *interval-wise concave* functions.

### 3.2 Interval-Wise Concave Functions

Consider a function \(H:\mathbb R\longrightarrow \mathbb R\), which is continuous and periodic with period \(\pi \). Hence, in particular \(H(0)=H(\pi )\).

### Definition 14

We say that a function *H* as above is *interval-wise concave* when there exists a sequence \((\alpha _i)_{i=0}^{k}\) such that:

- (i)
\(0=\alpha _0<\alpha _1<\cdots <\alpha _k=\pi \),

- (ii)
For each \(0\le i \le k-1\) the restriction of the function *H* to the interval \([\alpha _i,\alpha _{i+1}]\) is concave.

We will need the following simple properties of the interval-wise concave functions:

### Proposition 15

(a) If *H* is interval-wise concave and \(\lambda \) is a non-negative scalar, then \(\lambda \cdot H\) is interval-wise concave. (b) The sum of two (or finitely many) interval-wise concave functions is an interval-wise concave function.

### Proof

The property (a) is obvious, since the product of a concave function by a non-negative real is still concave.

In the proof of the property (b) we will use the language of the Riemann integral theory. We will call *nets* the sequences of the type \(0=\alpha _0<\alpha _1<\cdots <\alpha _k=\pi \). If we have two such nets, say \(\alpha =(\alpha _i)_{0}^{k}\) and \(\beta =(\beta _j)_{0}^{m}\), then we define a net \(\gamma =(\gamma _l)_{0}^{s}\) as the union of the points of the nets \(\alpha \) and \(\beta \) with the natural ordering. In such a case we will say that \(\gamma \) is *finer* than \(\alpha \) (and clearly also than \(\beta \)) or, equivalently, that \(\alpha \) is a *subnet* of the net \(\gamma \). It is also clear that if a function *H* is concave in each interval of a net \(\alpha \) and \(\gamma \) is finer than \(\alpha \), then *H* is concave in each interval of the net \(\gamma \). Hence for each interval-wise concave function *H* there exists a net \(\alpha _H\), which is *maximal* with respect to *H* in the following sense: *H* is concave in each subinterval of the net \(\alpha _H\), and if *H* is concave in the subintervals of a net \(\gamma \), then \(\gamma \) is finer than \(\alpha _H\).

Suppose now, that we have two interval-wise concave functions *H* and *G*. Suppose, that the function *H* is concave in the intervals \([\alpha _i,\alpha _{i+1}]\) of a net \(\alpha =(\alpha _i)_{0}^{k}\) and the function *G* is concave in the intervals \([\beta _j,\beta _{j+1}]\) of a certain net \(\beta =(\beta _j)_{0}^{m}\). Let \(\gamma =(\gamma _l)_{0}^{s}\) be the union of the nets \(\alpha \) and \(\beta \). Each subinterval \([\gamma _l,\gamma _{l+1}]\) is the intersection of the subintervals of the type \([\alpha _i,\alpha _{i+1}]\) and \([\beta _j,\beta _{j+1}]\). Hence for each \(r\le s\) the functions \(H|[\gamma _r,\gamma _{r+1}]\) and \(G|[\gamma _r,\gamma _{r+1}]\) are both concave. In consequence the function \(H+G\) is concave in the subintervals of the net \(\gamma =(\gamma _l)_{0}^{s}\), and this ends the proof of the property (b). \(\square \)
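The common refinement of two nets used in the proof is simply the sorted union of their points; as a sketch:

```python
def merge_nets(alpha, beta):
    """Union of two nets (increasing sequences with common endpoints):
    the common refinement, finer than each of alpha and beta."""
    return sorted(set(alpha) | set(beta))

alpha = [0.0, 1.0, 3.14]
beta = [0.0, 0.5, 2.0, 3.14]
print(merge_nets(alpha, beta))   # [0.0, 0.5, 1.0, 2.0, 3.14]
```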

Let us observe that:

### Proposition 16

Let *J* and *I* be two diangles. Then the function \(\overline{J_{I^\varphi }}\) considered on the interval \([-\frac{\pi }{2},\frac{\pi }{2}]\) is interval-wise concave.

### Proof

The statement is invariant with respect to rotations of the diangle *J* and with respect to the scalar multiplication. Hence without loss of generality we may assume that \(J=[-1,1]\cdot (1,0)\). For the same reason we may assume that \(I=[-1,1]\cdot (0,1)\). In such a case, as it is easy to check, we have:

Let us mention here that for each function of the type \(\overline{J_{I^\varphi }}\) there exists a point \(\alpha \) such that \(\overline{J_{I^\varphi }}\) is concave on each of the intervals \([-\frac{\pi }{2},\alpha ]\) and \([\alpha ,\frac{\pi }{2}]\), is not concave in any neighbourhood of the point \(\alpha \), and \(\alpha \) is the point for which *J* is parallel to \(I^{\alpha }\).
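Under the normalization used in this proof the width of \(J=[-1,1]\cdot (1,0)\) with respect to the line with direction \((\cos \varphi ,\sin \varphi )\) is \(2|\sin \varphi |\), and its interval-wise concavity with break point \(\alpha =0\) can be verified by a midpoint test (a numerical illustration under our normalization, not a formula from the paper):

```python
import math

def seg_width(theta):
    """Width of the diangle J = [-1,1]x{0} with respect to the line
    through the origin with direction (cos theta, sin theta)."""
    return 2 * abs(math.sin(theta))

def concave_on(f, a, b, n=100):
    """Midpoint concavity test on a grid: f((x+y)/2) >= (f(x)+f(y))/2."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return all(f((x + y) / 2) + 1e-12 >= (f(x) + f(y)) / 2
               for x in xs for y in xs)

# concave on each side of alpha = 0, but not on the whole interval
print(concave_on(seg_width, -math.pi / 2, 0))            # True
print(concave_on(seg_width, 0, math.pi / 2))             # True
print(concave_on(seg_width, -math.pi / 2, math.pi / 2))  # False
```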

### Proposition 17

Let \(U=\sum _{k=1}^{m}J_k\) be a polygon from \(\mathcal {W}\) and let *I* be a diangle. Then the function \(\overline{U_{I^\varphi }}\) is interval-wise concave. In consequence the function *F* defined by the Formula (40) is interval-wise concave.

### Proof

Now we are ready to prove Proposition 13 and at the same time Lemma 12.

### Proof

Let \(\varphi _0\) be an argument at which the function *F* attains its absolute minimum. Clearly, such a point exists, since the function *F* is continuous and periodic. Let \(\alpha =(\alpha _i)_{0}^{k}\) be a net such that *F* is concave in each interval \([\alpha _i, \alpha _{i+1}]\). Clearly, we may assume that this net is maximal (i.e. \(\alpha = \alpha _F\)), which means that *F* is not concave in any neighbourhood of the points \(\alpha _i\). Since a concave function cannot attain its minimum at an interior point of the interval in which it is defined, there exists \(0<\alpha _i<\pi \) such that \(\alpha _i=\varphi _0\). But the ends of the intervals of the net \(\alpha =(\alpha _i)_{0}^{k}\) have the property that one of the diangles \(I_j\) is parallel to the straight line joining two successive vertices of the polygon *U*, i.e. is parallel to a diangle \(J_i\). This means that *U* and *V* are in singular position. \(\square \)

### 3.3 The Proof of the Generalized Isoperimetric Inequality

Let us begin with the following observation.

### Observation 18

- (i)
The perimeter of the pair (*U*, *V*) equals the perimeter of the pair \((U',V')\);

- (ii)
The joint number of sides of the pair \((U',V')\), understood as the sum of the numbers of sides of \(U'\) and \(V'\), is strictly less than the joint number of sides of the pair (*U*, *V*);

- (iii)
The measure \(m([U,V])\) is less than or equal to the measure \(m([U',V'])\).

### Proof

To prove Observation 18 we set \(U'=U\) and \(V'= V^{\varphi }\), where the angle \(\varphi \) is such that the pair \((U,V^{\varphi })\) is in singular position. More exactly, \(\varphi \) is such that the function *F* attains its absolute minimum exactly at \(\varphi \). Now we see that condition (i) is fulfilled, since \(O(\varphi )\) is an isometry.

Moreover, we know that \(U'\) and \(V'\) have a pair of parallel sides. This means that there exists a \((2n-2)\)-angle \(U''\) and a diangle *I* generated by a unit vector such that \(U'= U''+ d_1\cdot I\), and there exists a \((2k-2)\)-angle \(V''\) such that \(V'= V'' +d_2\cdot I\). Without loss of generality we may assume that \(d_2\le d_1\); in the opposite case the argument is analogous. If \(d=d_1-d_2\), then the pair \((U',V')\) is equivalent to the pair \((U''+d\cdot I, V'')\). But the joint number of sides of this last pair is strictly less than the joint number of sides of the pair (*U*, *V*). This ends the proof of (ii) and, in consequence, the proof of Observation 18. \(\square \)

### Observation 19

Let *U* and *V* be two polygons as in Observation 18. Then there exists a polygon *W* such that \(o([U,V])= o(W)\) and \(m([U,V])\le m(W)\).

### Proof

We can continue the reduction—described in the proof of Observation 18—of the joint number of sides of the pair (*U*, *V*), which preserves the perimeter and increases the measure, until one of the successively constructed polygons becomes trivial. In this case the last constructed pair of the type \((U',V')\) will be equivalent to the pair \((W,\left\{ 0\right\} )\). This ends the proof of Observation 19. \(\square \)

### Observation 20

For all polygons *U*, *V* from \(\mathcal {W}\) the following inequality holds:

### Proof

Let us fix a pair (*U*, *V*). Using Observation 19 we choose a polygon *W* satisfying the properties formulated in Observation 19. Then we have:

Now we are able to finish the proof of the generalized isoperimetric inequality formulated in Theorem 9.

### Proof

The perimeter *o* and the measure *m* are continuous with respect to the considered convergence [4]. Since, by Observation 20, the generalized isoperimetric inequality is true for polygons (pairs from \(\mathcal {W}\)), we have the following sequence of inequalities:

Passing to the limit and using the independence of *o* and *m* of the choice of representatives, we obtain Theorem 9. \(\square \)

### 3.4 The Problem of Equality in the Generalized Isoperimetric Inequality

It is well known that in the classical isoperimetric inequality the equality holds if and only if *U* is a disc. One may say equivalently that \(o^2(U) = 4\pi m(U)\) if and only if \(U = \lambda B\), where *B* is a unit disc and \(\lambda \) is a non-negative real. We shall prove an analogous result for the generalized isoperimetric inequality (2). Namely we have the following:

### Theorem 21

The equality \(o^{2}(x)=4\pi m(x)\) holds if and only if the vector *x* belongs to the one-dimensional subspace generated by the unit disc *B*. In other words:

We start by recalling a well-known result concerning quadratic forms.

### Lemma 22

Let \(\varphi \) be a quadratic form on a real vector space *X* in the variable *x*, and let \(\phi :X\times X\longrightarrow \mathbb R\) be a bilinear, symmetric form such that \(\phi (x,x)=\varphi (x)\). Then the following inequality (Schwarz inequality) holds:

Now we define the so-called *deficit term*: \(D(x)= o^{2}(x)-4\pi m(x)\).

Then *D* is a quadratic form on \({X}_{\mathcal {S}}\), and by the generalized isoperimetric inequality we have \(D(x)\ge 0\). Let \(\varepsilon (x,y)\) be a bilinear, symmetric form generating *D*. It is easy to check that for \(x=[U,V]\) and \(y=[P,Q]\) we have:
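Since \(o\) is linear and \(m\) is generated by the bilinear form \(\widetilde M\), a symmetric bilinear form generating the deficit \(D(x)=o^{2}(x)-4\pi m(x)\) can be written down directly; the display below is our reconstruction of this routine computation, not the paper's original formula:

```latex
\varepsilon(x,y)=o(x)\,o(y)-4\pi\,\widetilde{M}(x,y),
\qquad \varepsilon(x,x)=o^{2}(x)-4\pi m(x)=D(x).
```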

### Proposition 23

Now we shall prove the next lemma. Namely

### Lemma 24

- (a)
\(o(x)=o(y),\)

- (b)
\(D(x-y)=0\).

Then the vectors *x* and *y* are homothetic, i.e. there exists \(\lambda \in \mathbb R\) such that \(x=\lambda y\).

### Proof

Suppose that *x* and *y* are as above. It follows from (b) that:

Since *x* and *y* are from \(\mathcal {S}\), we are able to apply the Brunn–Minkowski inequality and we have:

Hence for *x* and *y* the equality holds in the Brunn–Minkowski inequality. It is known that in such a case *x* and *y* are homothetic. \(\square \)

Now we are ready to prove Theorem 21.

### Proof

Choose *r* such that \(o(z)=0\); in other words, *r* is such that \(o([U,V])=2\pi r\). We shall calculate the deficit term of *z*, namely:

By the linearity of *o* we have \(o(V)+ 2\pi r= \lambda o(U)\). In consequence \(o(V) + o(U)- o(V) = \lambda o(U)\). This means that \(\lambda = 1\), i.e. \(U=V+rB\), or \([U,V]=r[B,\left\{ {\theta }\right\} ]\), and this ends the proof of Theorem 21. \(\square \)

## 4 Part III

### 4.1 A Generalization of the Brunn–Minkowski Inequality

### 4.2 Some Remarks on Quadratic Forms

We shall start this chapter by recalling some properties of quadratic forms on real vector spaces.

1. Let *X* be a real vector space and let \(\eta :X\longrightarrow \mathbb R\) be a quadratic form on *X*, i.e. \(\eta \) is a homogeneous polynomial of the second degree. This means that there exists a bilinear, symmetric form \(\widetilde{N}:X\times X\longrightarrow \mathbb R\) such that \(\eta (x) = \widetilde{N}(x,x)\).

2. In the notation as above, a form \(\eta \) is said to be positively (negatively) defined when \(\eta (x)=0\) implies \(x=\theta \). We will say also that \(\eta \) is *elliptic*. Equivalently, “ellipticity” means that the set of values of \(\eta \) is \([0,\infty )\) or \((-\infty ,0]\).

We will also consider the indefinite forms, i.e. forms for which \(\eta :X\longrightarrow \mathbb R\) is surjective. In such a case we will also say that the form \(\eta \) is *hyperbolic*. Clearly, a form \(\eta \) is hyperbolic if and only if there exist two vectors \(x, y\in X\) such that \(\eta (x)>0\) and \(\eta (y)<0\).

3. In this paper we will consider quadratic forms which are hyperbolic, but of a special type, i.e. satisfying an additional property. Before defining this property, let us observe that when we have a quadratic form \(\eta :X\longrightarrow \mathbb R\) and we take any subspace \(Y\subset X\), then the restriction \(\eta |_{Y}\) is a quadratic form on *Y*. If \(\eta \) is of elliptic type, then for each *Y* the form \(\eta |_{Y}\) is elliptic. But in the case when \(\eta \) is hyperbolic, the restriction \(\eta |_{Y}\) in general may not be hyperbolic.

4. Let \(u, v\in X\) be two vectors which are linearly independent. Let \(Y(u,v):=Lin(u,v)\) be the two-dimensional subspace spanned by *u* and *v*. We will say that a quadratic form \(\eta \) is \((u,v)\)-*hyperbolic* when \(\eta |_{Y(u,v)}\) is hyperbolic. Let us consider the situation as above. Let \(\eta \) be a quadratic form on *X*. We will prove the following lemma:

### Lemma 25

- 1.
There is a vector \(b\in X\) such that the form \(\eta \) is positively defined on the one-dimensional subspace \(\mathbb R\cdot b\), and

- 2.
There is a linear functional \(b^{*}\) on *X* such that \(\eta \) is negatively defined on the subspace \(Y= \ker {b}^{*}\).

Then the form \(\eta \) is hyperbolic on each plane *L*(*u*, *v*) generated by two linearly independent vectors *u* and *v* such that \(\eta (u)> 0\) or \(\eta (v)> 0\).

### Proof

Suppose that *u* and *v* are two linearly independent vectors such that \(\eta (u)> 0\) and \(\eta (v)> 0\). Consider the straight line \(\mathbb R \ni t \longrightarrow u+t\cdot v\). We check that there exists a vector \(w\in L(u,v)\) such that \({b}^{*}(w)=0\); indeed, it is sufficient to take a suitable point of this line. Then the plane *L*(*u*, *v*) contains two vectors, namely *u* and *w*, such that \(\eta (u)>0\) and \(\eta (w)<0\). This is sufficient for the form \(\eta \) to be hyperbolic on *L*(*u*, *v*). \(\square \)
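Lemma 25 can be illustrated numerically. The Lorentz-type form below is positive on \(b=(1,0,0)\) and negative on \(\ker b^{*}=\{x_0=0\}\), exactly the situation assumed in the lemma (a toy example of ours, unrelated to the space \(X_{\mathcal {S}}\)):

```python
import math

# A Lorentz-type quadratic form on R^3: eta(x) = x0^2 - x1^2 - x2^2.
def eta(x):
    return x[0] ** 2 - x[1] ** 2 - x[2] ** 2

def hyperbolic_on_plane(u, v, samples=400):
    """eta is hyperbolic on Lin(u, v) iff it takes both signs there;
    we scan the directions cos(t)*u + sin(t)*v."""
    vals = []
    for i in range(samples):
        t = math.pi * i / samples
        w = tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(u, v))
        vals.append(eta(w))
    return min(vals) < 0 < max(vals)

u = (1.0, 0.2, 0.0)   # eta(u) = 0.96 > 0
v = (0.0, 1.0, 1.0)   # eta(v) = -2 < 0
print(hyperbolic_on_plane(u, v))                       # True, as the lemma predicts
print(hyperbolic_on_plane((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))  # False: eta < 0 on this plane
```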

### Observation 26

### Proof

### 4.3 A Corollary for the Generalized Lebesgue Measure

Let, as above, *m* denote the Lebesgue measure on \(X_{\mathcal {S}}\), which is, as we know, a quadratic form. Let *B* denote the unit disc. It is easy to check that *m* satisfies the assumptions of Lemma 25. Indeed, *m* is positively defined on the one-dimensional subspace generated by \(b=B\), and as the functional \({b}^{*}\) we take the perimeter functional \([U,V]\longrightarrow o(U)-o(V)\). If \(o([U,V])=0\), then it follows from the generalized isoperimetric inequality that \(m([U,V])\le 0\), and \(m([U,V])=0\) only when \([U,V]=0\). Hence *m* is negatively defined on the kernel of the functional *o*.

Let [*U*, *V*] and [*P*, *Q*] be two vectors from \(X_{\mathcal {S}}\) having a positive measure (i.e. such that \(m([U,V])>0\) and \(m([P,Q])>0\)). It follows from the considerations made above for \(\eta =m\) and \(\widetilde{N}=\widetilde{M}\) that:

Using the definition of *m* in \(\mathcal {S}\) and the Minkowski addition, we may write the inequality (46) in the following form:

It is clear that (47) is true also when at least one of the vectors *x* or *y* has non-negative measure. However, (47) may not be true when both *x* and *y* have negative measure.

Every plane \(\mathbb {L}\subset X_{\mathcal {S}}\) intersects nontrivially the kernel of the functional *o*. This implies that for at least one vector \(w\in \mathbb {L}\) we have \(m(w)<0\). If we know that for some vector \(u\in \mathbb {L}\) there is \(m(u)>0\), then *m* restricted to \(\mathbb {L}\) is a hyperbolic form on \(\mathbb {L}\). This means that there exists a linear isomorphism transforming \(m|_{\mathbb {L}}\) into a standard hyperbolic form; the corresponding equality cases occur only when *x* and *y* are linearly dependent.

## 5 Part IV

**Connection to Hilbert Space.**

### 5.1 Definition of an Inner Product

The functional *o* is linear, and the bilinearity of \(\widetilde{M}\) was proved in Part I.

Here *D* is the deficit term defined by Formula (42). Hence \(o^2([U,V])=0\) and \(D([U,V])=0\). It follows from Theorem 21 that \([U,V]= rB\), where *B* is the unit disc. Since \(0=o^2([U,V])= r^2o^2(B)\), then \(r=0\) and in consequence \([U,V]=0\).

### 5.2 The Constructed Space is an RKHS

The elements of the space \(X_{\mathcal {S}}\) are the equivalence classes of the pairs (*U*, *V*) of convex and centrally symmetric sets *U* and *V*. It appears that the vectors from \(X_{\mathcal {S}}\) may also be considered as periodic and continuous real functions on \(\mathbb R\). More precisely, let \(\mathcal {F}\) denote the space of all periodic (with the period \(\pi \)) continuous real functions, equipped with the standard addition and scalar multiplication. Let \(\varphi \in [0,\pi ]\) and let \(I^{\varphi }\) be a diangle whose argument is \(\varphi \). We have considered above the width functionals associated with a diangle, defined in Sect. 2.2. Using these functionals we can prove the following:

### Proposition 27

### Proof

The independence on the choice of a representative follows directly from the linearity of the width functionals on \(\mathcal {S}\) and from the definition of the equivalence relation \(\diamond \). The injectivity is a consequence of the Radström Lemma [6] and the linearity of \(\omega \) follows directly from the definition of addition and scalar multiplication. Also continuity and periodicity are easy to check. \(\square \)

It will be more convenient to consider the vectors from \(X_{\mathcal {S}}\) as the continuous functions *f* on the interval \(\Delta =[0,\pi ]\) such that \(f(0)=f(\pi )\). We shall prove that the space \(X_{\mathcal {S}}\) is a *reproducing kernel Hilbert space* (RKHS for short). Reproducing kernel Hilbert spaces were discovered at the beginning of the XX-th century by Zaremba. A general theory of RKHS was formulated by Aronszajn in [1]. An elegant introduction to this theory is in Szafraniec's book [7]. The necessary definitions concerning RKHS are to be found in [5].
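The reproducing property established below for \(X_{\mathcal {S}}\) can be previewed on a toy finite-dimensional example: on a finite point set, a positive-definite kernel matrix defines an inner product under which pairing a function with a kernel column evaluates the function at a point (the Gaussian kernel here is only an illustration of ours, not the kernel of \(X_{\mathcal {S}}\)):

```python
import math

def solve(K, b):
    """Solve K a = b by Gauss-Jordan elimination (K small and invertible)."""
    n = len(K)
    A = [row[:] + [bi] for row, bi in zip(K, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(n):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * c for a, c in zip(A[r], A[i])]
    return [A[i][n] / A[i][i] for i in range(n)]

pts = [0.0, 1.0, 2.5]
K = [[math.exp(-(s - t) ** 2) for t in pts] for s in pts]   # Gaussian kernel
f = [2.0, -1.0, 0.5]                                        # values of f on pts

# In the inner product <g, h> = g^T K^{-1} h, the j-th kernel column K(., t_j)
# reproduces the value of f at t_j:  <f, K(., t_j)> = f_j.
alpha = solve(K, f)                       # alpha = K^{-1} f
for j in range(len(pts)):
    col = [K[i][j] for i in range(len(pts))]
    print(sum(a * c for a, c in zip(alpha, col)))   # recovers f[j]
```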

Here *B* is the unit disc and \(I^{\varphi }\) is the diangle defined by the vector \(\mathbf{{v}}=(\cos \varphi , \sin \varphi )\).

### Proposition 28

These functions are called *the kernel functions*, and Proposition 28 asserts precisely that the space \(X_{\mathcal {S}}\) with the inner product given by (50) has the *reproducing property*. Now we shall calculate the *reproducing kernel* of this space. As we know, the reproducing kernel is, in our case, a function \(K:\Delta \times \Delta \longrightarrow \mathbb R\) given by the formula:

### Theorem 29

Recall that in an RKHS each *evaluation functional* is bounded. In our case this means that for each \(\varphi \in [0,\pi ]\) there exists a constant \(C(\varphi )\) (in general depending on \(\varphi \)) such that for each \([U,V]\in X_{\mathcal {S}}\) there is:

This resembles the situation in the space *C*(*K*) of continuous functions on some compact set *K*. One considers also another norm on \(X_{\mathcal {S}}\), namely

### Remark 30

If *S* denotes the unit sphere in \(\mathbb R^{n}\), then the function on

## References

- 1. Aronszajn, N.: Theory of reproducing kernels. Trans. Am. Math. Soc. **68**, 337–404 (1950)
- 2. Hörmander, L.: Sur la fonction d'appui des ensembles convexes dans un espace localement convexe. Ark. Mat. **3**, 181–186 (1954)
- 3. Klain, D.: An error estimate for the isoperimetric deficit. Ill. J. Math. **49**, 981–992 (2005)
- 4. Moszyńska, M.: Geometria zbiorów wypukłych. Wydawnictwa Naukowo-Techniczne, Warszawa (2001) (in Polish)
- 5. Paulsen, V.I.: An Introduction to the Theory of Reproducing Kernel Hilbert Spaces. Department of Mathematics, University of Houston. https://www.math.uh.edu/~vern/rkhs.pdf
- 6. Radström, H.: An embedding theorem for spaces of convex sets. Proc. Am. Math. Soc. **3**(1), 165–169 (1952)
- 7. Szafraniec, F.H.: The Reproducing Kernel Property and Its Space: The Basics. Springer, Berlin (2015)
- 8. Treibergs, A.: Inequalities that Imply the Isoperimetric Inequality. University of Utah (2002). www.math.utah.edu/~treiberg/isoperim/isop.pdf

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.