Complex Analysis and Operator Theory

Volume 13, Issue 1, pp 193–221

An Example of a Reproducing Kernel Hilbert Space

  • Edward Tutaj
Open Access


We formulate and prove a generalization of the isoperimetric inequality in the plane. Using this inequality we construct a unitary space and, in consequence, an isomorphic copy of a separable infinite-dimensional Hilbert (Sobolev) space, which turns out to be a reproducing kernel Hilbert space.


Keywords: Isoperimetric inequality · Hilbert spaces · Reproducing kernels · Brunn–Minkowski inequality

Mathematics Subject Classification


1 Introduction

Let \(U\subset \mathbb R^{2}\) be a compact set having a well-defined perimeter, say o(U), and a plane Lebesgue measure, say m(U). The classical isoperimetric inequality asserts that the following inequality holds:
$$\begin{aligned} o^2(U)\ge 4\pi \cdot m(U), \end{aligned}$$
together with the additional remark that equality holds if and only if U is a disk.
The isoperimetric inequality is commonly known and, as one of the oldest inequalities in mathematics, it has many proofs. An excellent review is given in [8] and some recent ideas can be found in [3]. The inequality (1) may be generalized to higher dimensions, but in this paper we will consider only the plane case. The possibilities of generalizing the inequality (1) to some larger class of subsets of \(\mathbb R^2\) are practically exhausted, since the requirement of having a well-defined perimeter is very restrictive. However, convex and compact sets do have a perimeter and are measurable, and the class of such sets is sufficiently large from the point of view of applications. Moreover, this class is closed with respect to the standard algebraic operations (Minkowski addition and scalar multiplication). This makes it possible to construct in a unique way a vector space, which will be denoted by \(X_{\mathcal {S}}\), containing all (in our case centrally symmetric) convex and compact sets. We will extend the definitions of the perimeter o and the measure m onto \(X_{\mathcal {S}}\) and we will prove that, for these extended operations and for each vector \(x\in X_{\mathcal {S}}\), the generalized isoperimetric inequality holds:
$$\begin{aligned} o^2(x)\ge 4\pi \cdot m(x) \end{aligned}$$
Using the inequality (2) we will define an inner product in the vector space \(X_{\mathcal {S}}\), whose completion gives a model of a separable Hilbert space.

2 Part I

2.1 A Cone of Norms in \(\mathbb R^2\)

Let \(\mathcal {S}\) denote the family of all subsets of \(\mathbb R^2\) which are non-empty, convex, compact and centrally symmetric. In \(\mathcal {S}\) we consider the so-called Minkowski addition and the multiplication by non-negative scalars. We recall the definitions below in Formulas (3) and (4).

Assume that \(V, W\in {\mathcal {S}}\) and \(\lambda \ge 0\). We set:
$$\begin{aligned} V+W= & {} \left\{ v+w: v\in V, w\in W\right\} , \end{aligned}$$
$$\begin{aligned} \lambda \cdot V= & {} \left\{ \lambda \cdot v: v\in V\right\} . \end{aligned}$$
It is not hard to check that (\(\mathcal {S},+, \cdot \)), i.e. the set \(\mathcal {S}\) equipped with the operations defined above, is a vector cone.
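Formulas (3) and (4) can be illustrated on finite vertex sets; for convex polygons the Minkowski sum of the vertex sets contains all vertices of the sum polygon. A minimal Python sketch (the helper names `minkowski_sum` and `scale` are ours, not from the paper):

```python
from itertools import product

def minkowski_sum(V, W):
    """Formula (3): {v + w : v in V, w in W} for finite point sets."""
    return {(vx + wx, vy + wy) for (vx, vy), (wx, wy) in product(V, W)}

def scale(lam, V):
    """Formula (4): {lam * v : v in V}."""
    return {(lam * x, lam * y) for (x, y) in V}

# Vertex sets of two centrally symmetric polygons (a square and a diamond).
V = {(1, 1), (1, -1), (-1, 1), (-1, -1)}
W = {(2, 0), (-2, 0), (0, 2), (0, -2)}
S = minkowski_sum(V, W)

# The sum of two centrally symmetric sets is again centrally symmetric.
assert all((-x, -y) in S for (x, y) in S)
```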
Clearly, each set \(V\in \mathcal {S}\) contains the origin. If such a set V has non-empty interior, then there is in \(\mathbb R^2\) a norm \(||\cdot ||_{V}\), such that V is a unit ball for this norm. However we admit in \(\mathcal {S}\) also the sets with empty interiors, i.e. the sets of the form:
$$\begin{aligned} I(\mathbf{{v}},d)= [-d,d]\cdot \mathbf{{v}}, \end{aligned}$$
where \(\mathbf {v}\) is a unit vector in \(\mathbb {R}^2\) and \(d\ge 0\) is a scalar. A set of the type \(I(\mathbf{{v}},d)\) will be called a diangle.
It is known that in the cone (\(\mathcal {S},+, \cdot \)) the so-called cancellation law holds [6], i.e. the following property:
$$\begin{aligned} \left( V\in \mathcal {S}, U\in \mathcal {S}, W\in \mathcal {S} \right) \Longrightarrow \left( V+W=U+W \Longrightarrow V=U\right) . \end{aligned}$$
A consequence of the properties formulated above is the following theorem, called the Rådström embedding theorem:

Proposition 1

There is a unique—up to an isomorphism—vector space \(X_{\mathcal {S}}\) “covering the cone” \(\mathcal {S}\). This space is a quotient of the product \({\mathcal {S}}\times {\mathcal {S}}\) by the equivalence relation \(\diamond \), where:
$$\begin{aligned} (U,V)\diamond (P,Q) \Longleftrightarrow U+Q=V+P. \end{aligned}$$
In \(X_{\mathcal {S}}\) one defines the operations by obvious formulas:
$$\begin{aligned}{}[U,V]+[P,Q]= & {} [U+P,V+Q], \end{aligned}$$
$$\begin{aligned} \lambda \ge 0\Longrightarrow \lambda [U,V]= & {} [\lambda U,\lambda V]\end{aligned}$$
$$\begin{aligned} (-1)[U,V]= & {} [V,U]. \end{aligned}$$
One can check that the operations in \(X_{\mathcal {S}}\) are well defined (i.e. do not depend on the choice of representatives) and that the space \(X_{\mathcal {S}}\) defined in this way is unique up to an isomorphism.
The map
$$\begin{aligned} i: \mathcal {S}\ni U\longrightarrow [(U,\left\{ \theta \right\} )]\in X_{\mathcal {S}}, \end{aligned}$$
where \(\theta \) is the origin, is called the canonical embedding. Instead of \([(U,\left\{ \theta \right\} )]\) we will write simply U, and we will also write \(\mathcal {S}\) instead of \(i(\mathcal {S})\).

The construction described in Proposition 1 can be carried out not only in \(\mathbb R^2\), but also in general Banach spaces, or even locally convex spaces, and without the requirement of central symmetry. The details can be found in many papers, for example in [2].

We will use in the sequel the diangles defined above. Let us denote by \(\mathcal {D}\) the set of all diangles. Clearly this set is closed with respect to multiplication by positive reals, but it is not a subcone of the cone \(\mathcal {S}\). However, we can consider the subcone \(\mathcal {W}\) generated by all diangles, understood as the smallest cone containing \(\mathcal {D}\). It is easy to check that \(W\in \mathcal {W}\) if and only if W can be written in the form:
$$\begin{aligned} W= \sum _{k=1}^{n}I_k , \end{aligned}$$
where \(I_k\in \mathcal {D}\). In other words, \(\mathcal {W}\) is the set of all centrally symmetric polygons. One can easily check that for each \(W\in \mathcal {W}\) the representation (12) is unique. In other words, the set of diangles is linearly independent with respect to finite sums.
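As an aside, the diangle representation (12) makes the area of such a polygon easy to compute: a diangle \(I(\mathbf{{v}},d)\) contributes the generator \(2d\cdot \mathbf{{v}}\), and the area of the resulting zonotope is the sum of \(|\det (g_i,g_j)|\) over pairs of generators. A hedged Python sketch (the helper `zonotope_area` and the generator convention are ours, not from the paper):

```python
def zonotope_area(generators):
    """Area of the centrally symmetric polygon sum_j [-g_j/2, g_j/2]:
    for a planar zonotope it equals the sum over pairs |det(g_i, g_j)|."""
    n = len(generators)
    return sum(abs(generators[i][0] * generators[j][1]
                   - generators[i][1] * generators[j][0])
               for i in range(n) for j in range(i + 1, n))

# I((1,0), 1) + I((0,1), 2) is a 2 x 4 rectangle, area 8.
assert zonotope_area([(2.0, 0.0), (0.0, 4.0)]) == 8.0
```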

2.2 Width Functionals

We will also use the so-called width functionals on \(\mathcal {S}\). Suppose that \(f:\mathbb R^{2}\longrightarrow \mathbb R\) is a linear functional on \(\mathbb R^2 \). For \(V\in \mathcal {S}\) we set:
$$\begin{aligned} \overline{f}(V)=\sup \left\{ f(v):v\in V\right\} \end{aligned}$$
It is proved in [6], that the map
$$\begin{aligned} \overline{f}:{\mathcal {S}}\ni V\longrightarrow \overline{f}(V)\in [0,\infty ) \end{aligned}$$
is a homomorphism of cones. This means that \(\overline{f}(U+V)=\overline{f}(U)+\overline{f}(V)\) and \(\overline{f}(\alpha \cdot V)=\alpha \cdot \overline{f}(V).\) The linearity of the functional \(\overline{f}\) will be used in many variants in connection with the important notion of the width of a set with respect to a given direction.

Let \(V\in \mathcal {S}\) be a set and let k be a straight line containing the origin.

Definition 2

The width of V with respect to k, denoted by \(\overline{V}(k)\), is the infimum of all numbers \(\varrho (k_1,k_2)\), where \(k_1\) and \(k_2\) are straight lines parallel to k such that V lies between \(k_1\) and \(k_2\), and \(\varrho (k_1,k_2)\) is the distance between the lines \(k_1\) and \(k_2\).

For each line k there exists a unique (up to the sign) linear functional \(f_k\), such that \(||f_k|| = 1\) and k is the kernel of \(f_k\). Clearly we have
$$\begin{aligned} \overline{V}(k)= 2\cdot \overline{f_k}(V) \end{aligned}$$
and in consequence we have the formula:
$$\begin{aligned} \overline{U+V}(k)=\overline{U}(k)+\overline{V}(k). \end{aligned}$$
In particular, if \(I\in \mathcal {D}\) is a non-trivial diangle, then I determines a unique straight line k such that \(I\subset k\), and hence we may speak of the width of a set V with respect to the diangle I, which will be denoted by \(\overline{V}(I)\).
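For polygons, the width functional is easy to compute from the support function. A small Python sketch under our own naming (`support` realizes \(\overline{f}(V)\) for \(f=\langle u,\cdot \rangle \), and `width` realizes Definition 2 via the unit normal of the line):

```python
import math

def support(V, u):
    """Formula (13): sup{ <u, v> : v in V } for a finite vertex set V."""
    return max(u[0] * x + u[1] * y for (x, y) in V)

def width(V, direction):
    """Width of conv(V) w.r.t. the line through the origin with the given
    direction: the extent of V in the perpendicular (normal) direction."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    n = (-dy / norm, dx / norm)            # unit normal to the line
    return support(V, n) + support(V, (-n[0], -n[1]))

square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
assert width(square, (1, 0)) == 2          # horizontal line: vertical extent

# The width is additive under Minkowski sums; here square + square = 2*square.
doubled = [(2 * x, 2 * y) for (x, y) in square]
assert width(doubled, (1, 0)) == 4
```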

2.3 Formulation of the Main Problem

Let us now consider a natural function on the cone \({\mathcal {S}}\). Namely, let \(m: {{\mathcal {B}}_2}\longrightarrow \mathbb R\) be the two-dimensional Lebesgue measure in the plane, defined, for simplicity, on the \(\sigma \)-algebra \({{\mathcal {B}}_2}\) of Borel sets in \({\mathbb R}^2\). Since \({\mathcal {S}}\subset {{\mathcal {B}}_2}\), we can consider the restriction \(m|_{\mathcal {S}}\), i.e. the map
$$\begin{aligned} m|_{\mathcal {S}}:{\mathcal {S}}\longrightarrow [0,\infty ). \end{aligned}$$
Since \({\mathcal {S}}\) carries the structure of a vector cone described above, it is natural to expect that \(m|_{\mathcal {S}}\) has some algebraic properties with respect to the algebraic operations in \({\mathcal {S}}\). In particular we have \(m(\lambda \cdot U)= \lambda ^2\cdot m(U)\). Much more information is given by the well-known Steiner formula. Namely, for \(U\in {\mathcal {S}}\), for the unit disc \(B=B(0,1)\) and for \(\lambda \in [0,\infty )\), the Steiner formula says that:
$$\begin{aligned} m(U+\lambda B)= m(U) + \lambda \cdot o(U) + {\lambda }^2 \cdot m(B), \end{aligned}$$
where o(U) is the perimeter of the convex set U. Since, as we have observed above, the cone \({\mathcal {S}}\) is a subcone of the covering space \(X_{\mathcal {S}}\), it is natural to expect that \(m|_{\mathcal {S}}\) is the restriction of some “good” function \(m^{*}:X_{\mathcal {S}}\longrightarrow \mathbb R \). This is indeed the case. Namely, we will prove that:

Theorem 3

With the notation as above, there exists a unique polynomial of degree two, \(m^{*}:X_{\mathcal {S}}\longrightarrow \mathbb R\), such that \(m^{*}|_{\mathcal {S}} = m|_{\mathcal {S}}\).

The proof of this theorem is perhaps not too difficult, but it is rather long, since we must construct a new object, namely the polynomial \(m^{*}\). The crucial point is to prove that the required formula for \(m^{*}\) does not depend on the choice of representatives. We will carry out the proof in a number of steps.
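Before the construction, the Steiner formula quoted above can be checked numerically: for a square U the boundary of \(U+\lambda B\) consists of four sides and four quarter circles, and its shoelace area matches \(m(U)+\lambda \cdot o(U)+\lambda ^2\cdot m(B)\). A hedged Python sketch (the helper names are ours, not from the paper):

```python
import math

def shoelace(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                   - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2

def square_plus_disk(a, lam, arc_steps=2000):
    """Boundary of (a x a square centered at 0) + lam*B, sampled as a polygon:
    each corner is rounded by a quarter circle of radius lam."""
    h = a / 2
    corners = [(h, h), (-h, h), (-h, -h), (h, -h)]   # counterclockwise
    pts = []
    for k, (cx, cy) in enumerate(corners):
        start = k * math.pi / 2                      # arc spans 90 degrees
        for i in range(arc_steps + 1):
            t = start + (math.pi / 2) * i / arc_steps
            pts.append((cx + lam * math.cos(t), cy + lam * math.sin(t)))
    return pts

a, lam = 3.0, 0.5
steiner = a * a + lam * (4 * a) + lam ** 2 * math.pi  # m(U)+lam*o(U)+lam^2*m(B)
assert math.isclose(shoelace(square_plus_disk(a, lam)), steiner, rel_tol=1e-5)
```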

2.4 Construction of the Extension of m onto \(X_{\mathcal {S}}\)

We shall start by defining two functions: \(\Psi :{\mathcal {S}}\times {\mathcal {S}}\longrightarrow \mathbb R\) and \({\mathcal {M}}:{\mathcal {S}}\times {\mathcal {S}}\longrightarrow \mathbb R\) by the following formulas. Let \(U,V,A,B\in \mathcal {S}\). Define
$$\begin{aligned} \Psi (U,V)= 2m(U)+2m(V)-m(U+V) \end{aligned}$$
$$\begin{aligned} {\mathcal {M}}(A,B) = \frac{m(A+B)-m(A)-m(B)}{2} \end{aligned}$$
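For axis-parallel rectangles (given as pairs of side lengths, so that Minkowski addition adds the sides) both functions have closed forms: \(\Psi (U,V)=(a_1-b_1)(a_2-b_2)\) and \({\mathcal {M}}(A,B)=(a_1b_2+a_2b_1)/2\), the mixed area. A quick Python check (the helper names are ours, not from the paper):

```python
def m_rect(r):
    """Area of an axis-parallel rectangle given as (width, height)."""
    return r[0] * r[1]

def add_rect(r, s):
    """Minkowski sum of two centered axis-parallel rectangles: sides add."""
    return (r[0] + s[0], r[1] + s[1])

def psi(U, V):
    """Formula (17): Psi(U, V) = 2m(U) + 2m(V) - m(U+V)."""
    return 2 * m_rect(U) + 2 * m_rect(V) - m_rect(add_rect(U, V))

def M(A, B):
    """Formula (18): M(A, B) = (m(A+B) - m(A) - m(B)) / 2."""
    return (m_rect(add_rect(A, B)) - m_rect(A) - m_rect(B)) / 2

U, V = (3.0, 2.0), (1.0, 5.0)
assert psi(U, V) == (3 - 1) * (2 - 5)     # = -6
assert M(U, V) == (3 * 5 + 2 * 1) / 2     # = 8.5, the mixed area
```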

Proposition 4

The following conditions are equivalent:
  1. (a)

    For all \(U,V,W \in \mathcal {S}\) we have: \(\Psi (U+W,V+W)=\Psi (U,V);\)

  2. (b)

    For all \(A_1,A_2,B\in \mathcal {S}\) we have: \({\mathcal {M}}(A_1+A_2,B)={\mathcal {M}}(A_1,B)+{\mathcal {M}}(A_2,B);\)

  3. (c)

    For all \(U,V,P,Q \in \mathcal {S}\) we have: \((U,V)\diamond (P,Q)\Longrightarrow \Psi (U,V)=\Psi (P,Q).\)



Since \((U+W,V+W)\diamond (U,V)\), the implication (c) \(\Longrightarrow \) (a) is clear. To prove the implication (a) \(\Longrightarrow \) (b) let us fix sets \(A_1,A_2,B\in \mathcal {S}\).

Let us set: \(L=2\cdot \mathcal {M}(A_1+A_2,B)\) and \(P=2\cdot \left( \mathcal {M}(A_1,B)+ \mathcal {M}(A_2,B)\right) .\) It follows from (18) that:
$$\begin{aligned} L=2\cdot {\mathcal {M}}(A_1+A_2,B)=m(A_1+A_2+B)-m(A_1+A_2)-m(B) \end{aligned}$$
$$\begin{aligned} P= & {} 2({\mathcal {M}}(A_1,B)+ {\mathcal {M}}(A_2,B))=m(A_1+B)+m(A_2+B)\\&-\,(m(A_1)+m(A_2))-2m(B). \end{aligned}$$
$$\begin{aligned} L= & {} P \Longleftrightarrow 2L=2P \Longleftrightarrow 2m(A_1+A_2+B) - 2m(A_1+A_2) - 2m(B)\\= & {} 2m(A_1+B)+2m(A_2+B) -(2m(A_1)+2m(A_2))- 4m(B)\\&\quad \Longleftrightarrow 2m(A_1+A_2+B) + [2m(A_1) + 2m(A_2) - m(A_1+A_2)]- m(A_1+A_2)\\= & {} [2m(A_1+B) + 2m(A_2+B) - m(A_1+A_2+2B)] \\&+\, m(A_1+A_2+2B)- 2m(B). \end{aligned}$$
Now we use Definition (17) and we obtain the equivalence:
$$\begin{aligned} L= & {} P\Longleftrightarrow 2m(A_1+A_2+B)+ \Psi (A_1,A_2) - m(A_1+A_2)\\= & {} \Psi (A_1+B,A_2+B)+ m(A_1+A_2+2B) - 2m(B). \end{aligned}$$
Now we use the assumed condition (a) and we have the equivalence:
$$\begin{aligned} L=P \Longleftrightarrow 2m(A_1+A_2 +B)+2m(B)-m(A_1+A_2+2B)=m(A_1+A_2). \end{aligned}$$
But obviously we have:
$$\begin{aligned}&2m(A_1+A_2 +B)+2m(B)-m(A_1+A_2+2B)\\&\quad = 2m(A_1+A_2+B) + 2m(\left\{ \theta \right\} + B) - m(A_1+A_2+ \left\{ \theta \right\} +2B)\\&\quad = \Psi (A_1+A_2+B,\left\{ \theta \right\} +B) = \Psi (A_1+A_2,\left\{ \theta \right\} ). \end{aligned}$$
The last equality is once more the consequence of (a).
Using Definition (17) we have
$$\begin{aligned} \Psi (A_1+A_2,\left\{ \theta \right\} )= 2m(A_1+A_2)+ 2m(\left\{ \theta \right\} )- m(A_1+A_2+\left\{ \theta \right\} )= m(A_1+A_2). \end{aligned}$$
This is exactly the right hand side of equality (19), which ends the proof of the implication (a) \(\Longrightarrow \) (b).

Before proving (b) \(\Longrightarrow \) (c) we shall observe some properties of the function \(\mathcal {M}\). Condition (b) means that \(\mathcal {M}\) is additive with respect to the first variable. But \(\mathcal {M}\) is obviously symmetric, hence \(\mathcal {M}\) is additive with respect to each variable separately.

Now we shall prove that \(\mathcal {M}\) is positively homogeneous (with respect to each variable separately). Indeed, let us fix two sets A and B from \(\mathcal {S}\). We shall prove that for each \(\lambda \ge 0\):
$$\begin{aligned} {\mathcal {M}}(\lambda A,B)= \lambda \cdot {\mathcal {M}}(A,B). \end{aligned}$$
To prove (20) let us consider the function \(u(t)= {\mathcal {M}}(tA,B)\) defined on the interval \([0,\infty )\). It is easy to check that condition (b) implies that u satisfies the so-called Cauchy functional equation, i.e.
$$\begin{aligned} u(t+s)= u(t)+u(s). \end{aligned}$$
Indeed, we have:
$$\begin{aligned} u(t+s)= & {} {\mathcal {M}}((t+s)A,B)= {\mathcal {M}}(tA+sA,B)= {\mathcal {M}}(tA,B)\\&+\,{\mathcal {M}}(sA,B)=u(t)+u(s). \end{aligned}$$
Moreover, the function u is locally bounded: if \(\alpha = \mathrm{diam}(A)\) and \(\beta = \mathrm{diam}(B)\), then the set \(tA+B\) is contained in the ball of radius \(t\alpha +\beta \). It follows from the known theorem on the Cauchy equation that \(u(t)=t\cdot u(1)\). This means that \(\mathcal {M}\) is positively homogeneous.
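The homogeneity (20) can be verified exactly on rectangles, where \({\mathcal {M}}\) reduces to the mixed area. A hedged Python check (the helper names are ours, not from the paper):

```python
def m_rect(r):
    """Area of an axis-parallel rectangle given as (width, height)."""
    return r[0] * r[1]

def add_rect(r, s):
    """Minkowski sum of two centered axis-parallel rectangles: sides add."""
    return (r[0] + s[0], r[1] + s[1])

def M(A, B):
    """Formula (18) for rectangles."""
    return (m_rect(add_rect(A, B)) - m_rect(A) - m_rect(B)) / 2

def scale_rect(lam, r):
    return (lam * r[0], lam * r[1])

A, B = (2.0, 3.0), (4.0, 1.0)
# M(lam*A, B) = lam * M(A, B), checked for several scalars (exact in floats).
for lam in (0.0, 0.5, 2.0, 7.25):
    assert M(scale_rect(lam, A), B) == lam * M(A, B)
```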

Now we shall prove the implication (b) \(\Longrightarrow \) (c). Let us fix two pairs (U, V) and (X, Y) such that \((U,V)\diamond (X,Y)\). We will check that \(\Psi (U,V)=\Psi (X,Y)\).

From the definition of the relation \(\diamond \) we have: \(X+V=Y+U\). Adding Y we obtain \(X+Y+V = 2Y+U\). Now we add V to both sides of the last equality and we obtain: \(X+Y+2V=2Y+U+V\). Thus \(m(X+Y+2V)=m(U+V+2Y)\). Now we use the definition of the function \(\mathcal {M}\) and we obtain:
$$\begin{aligned} m(X+Y)+2{\mathcal {M}}(X+Y,2V)+m(2V)= m(U+V)+2{\mathcal {M}}(U+V,2Y)+m(2Y)\qquad \end{aligned}$$
and by the homogeneity:
$$\begin{aligned} m(X+Y)+4{\mathcal {M}}(X+Y,V)+4m(V)=m(U+V)+4{\mathcal {M}}(U+V,Y)+4m(Y).\qquad \qquad \qquad \end{aligned}$$
After an analogous sequence of transformations (replacing Y by X and V by U) we obtain the equality:
$$\begin{aligned} m(X+Y)+4{\mathcal {M}}(X+Y,U)+4m(U)= m(U+V)+4{\mathcal {M}}(U+V,X)+4m(X).\qquad \qquad \qquad \end{aligned}$$
Adding (22) and (23) and using the bilinearity of \(\mathcal {M}\) we obtain the equality:
$$\begin{aligned} 2\Psi (U,V) - 2\Psi (X,Y)= & {} 2[2m(U)+2m(V)-m(U+V)]\\&-\,2[2m(X) + 2m(Y) -m(X+Y)]\\= & {} 4{\mathcal {M}}(U+V,X+Y)- 4{\mathcal {M}}(X+Y,U+V)=0. \end{aligned}$$
The last equality implies the equality \(\Psi (X,Y) = \Psi (U,V)\) and this ends the proof of the Proposition 4. \(\square \)

To complement Proposition 4, we shall prove that condition (a) of this proposition actually holds. More exactly, we shall prove the following

Proposition 5

Let \(X, Y, U\in {\mathcal {S}}.\) Then
$$\begin{aligned} 2m(X+U)+2m(Y+U)-m(X+Y+2U)= 2m(X)+2m(Y)-m(X+Y). \end{aligned}$$
(i.e. \(\Psi (X+U,Y+U)=\Psi (X,Y)\)).


First we will check that the equality (24) is true for sets U of the form \(U=\sum _{1}^{k}I_j\), i.e. for the polygons from the cone \(\mathcal {W}\). We apply induction with respect to k, the number of diangles representing U.

Proof for \(k=1\). Let \(X, Y\) be arbitrary elements of the cone \({\mathcal {S}}\) and let \(I=I(\mathbf{{v}},d)\) be a diangle. We have
$$\begin{aligned} L= & {} \Psi (X+I,Y+I) = 2m(X+I) + 2m(Y+I)-m(X+Y+2I) \\= & {} 2m(X) + 2m(I) + 2\cdot 2\cdot \overline{X}(I)\cdot d \\&+\,2m(Y) + 2m(I) + 2\cdot 2\cdot \overline{Y}(I)\cdot d - m(X+Y) - m(2I)\\&-\,2\cdot 2\cdot \overline{X+Y}(I)\cdot d. \end{aligned}$$
Since \(m(I)=0\) and \( \overline{X+Y}(I)= \overline{X}(I)+ \overline{Y}(I)\) (observe that the direction of the diangle 2I is the same as the direction of the diangle I), we get \(L= 2m(X)+2m(Y)-m(X+Y)= \Psi (X,Y)\), which ends the proof for \(k=1\). Let us notice that we used the Cavalieri principle: the equality \(m(X+I)= m(X)+2\cdot \overline{X}(I)\cdot d\) and similar equalities result from the Cavalieri principle.
Induction step. Assume that our theorem is true for all sets X, Y and for every set \(U'\) of the form \(U'=\sum _{1}^{k-1} d_j\cdot I_j\). Fix now sets X and Y and a set \(U=\sum _{1}^{k} d_j\cdot I_j\). Let us denote \(X'= X+\sum _{1}^{k-1} d_j\cdot I_j\), \(Y'=Y+\sum _{1}^{k-1} d_j\cdot I_j\) and \(I(\mathbf{{v}}_k,d_k)= I(\mathbf{{v}},d)=I\). Then
$$\begin{aligned}&2m(X+U)+2m(Y+U)-m(X+Y+2U) = 2m(X'+I)+2m(Y'+I)-m(X'+Y'+2I)\\&\quad = \text{(since our theorem is true for } k=1\text{)}\\&\quad = 2m(X')+2m(Y')- m(X'+Y') = 2m(X+U')+2m(Y+U')-m(X+Y+2U')\\&\quad = \text{(inductive assumption)} =2m(X)+2m(Y)-m(X+Y). \end{aligned}$$
Hence we have proved that Eq. (24) is true for all polygons \(U\in {\mathcal {W}}\). To end the proof of Proposition 5 for fixed sets X, Y, U we construct a sequence of polygons \(U_k\) convergent to the set U in the sense of the Hausdorff distance. Since the measure m (on \(\mathcal {S}\)) is continuous with respect to the Hausdorff distance, it is sufficient to pass to the limit in the sequence of equalities:
$$\begin{aligned} 2m(X+U_k)+2m(Y+U_k)-m(X+Y+2U_k)= 2m(X)+2m(Y)-m(X+Y). \end{aligned}$$
\(\square \)

2.5 A Bilinear Form

Let us consider a function:
$$\begin{aligned} \tilde{M}:(\mathcal {{S}}\times {\mathcal {S}})\times ({\mathcal {S}}\times {\mathcal {S}}) \longrightarrow \mathbb R \end{aligned}$$
defined by the formula:
$$\begin{aligned} \widetilde{M}((U,V),(P,Q))= \frac{(m(U+P)+m(V+Q)-m(U+Q)-m(V+P))}{2} \end{aligned}$$
We shall prove that:

Proposition 6

If \((U,V)\diamond (U_1,V_1)\) and \((P,Q)\diamond (P_1,Q_1)\) then
$$\begin{aligned} \widetilde{M}((U,V),(P,Q))= \widetilde{M}((U_1,V_1),(P_1,Q_1)) \end{aligned}$$


First we shall establish the relations between the function \(\mathcal {M}\) defined above by (18) and the function \(\widetilde{M}\) defined by (26).

We check that:
$$\begin{aligned} \widetilde{M}((A,\left\{ \theta \right\} ),(B,\left\{ \theta \right\} ))= {\mathcal {M}}(A,B), \end{aligned}$$
$$\begin{aligned} \widetilde{M}((\left\{ \theta \right\} ,A),(\left\{ \theta \right\} ,B))= {\mathcal {M}}(A,B), \end{aligned}$$
$$\begin{aligned} \widetilde{M}((A,\left\{ \theta \right\} ),(\left\{ \theta \right\} ,B))= -{\mathcal {M}}(A,B), \end{aligned}$$
$$\begin{aligned} \widetilde{M}((\left\{ \theta \right\} ,A),(B,\left\{ \theta \right\} ))= -{\mathcal {M}}(A,B). \end{aligned}$$
It is easy to check that:
$$\begin{aligned} \widetilde{M}((U,V),(P,Q))= {\mathcal {M}}(U,P)+{\mathcal {M}}(V,Q)-{\mathcal {M}}(U,Q)-{\mathcal {M}}(V,P) \end{aligned}$$
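The expansion above also makes Proposition 6 easy to test numerically: replacing (U, V) by the \(\diamond \)-equivalent pair \((U+W,V+W)\) must leave \(\widetilde{M}\) unchanged. A Python check on rectangles (the helper names are ours, not from the paper):

```python
def m(r):
    """Area of an axis-parallel rectangle given as (width, height)."""
    return r[0] * r[1]

def add(r, s):
    """Minkowski sum of centered axis-parallel rectangles: sides add."""
    return (r[0] + s[0], r[1] + s[1])

def M_tilde(U, V, P, Q):
    """Formula (26) for rectangles."""
    return (m(add(U, P)) + m(add(V, Q)) - m(add(U, Q)) - m(add(V, P))) / 2

U, V, P, Q, W = (3, 2), (1, 1), (2, 5), (4, 1), (6, 7)
# (U+W, V+W) is diamond-equivalent to (U, V); M_tilde must not change.
assert M_tilde(add(U, W), add(V, W), P, Q) == M_tilde(U, V, P, Q)
assert M_tilde(U, V, add(P, W), add(Q, W)) == M_tilde(U, V, P, Q)
```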
Now we observe the following:

Proposition 7

Suppose that \((P,Q)\diamond (P_1,Q_1)\). Then:
$$\begin{aligned} {\mathcal {M}}(U,P)- {\mathcal {M}}(U,Q)= {\mathcal {M}}(U,P_1)- {\mathcal {M}}(U,Q_1) \end{aligned}$$


Indeed (28) is equivalent to the equality:
$$\begin{aligned} {\mathcal {M}}(U,P)+ {\mathcal {M}}(U,Q_1)= {\mathcal {M}}(U,P_1)+ {\mathcal {M}}(U,Q). \end{aligned}$$
But \({\mathcal {M}}\) is—as we have proved—linear with respect to each variable separately, hence the last equality is equivalent to
$$\begin{aligned} {\mathcal {M}}(U,P+Q_1)= {\mathcal {M}}(U,P_1+Q). \end{aligned}$$
The last equality is true because \((P,Q)\diamond (P_1,Q_1)\).
Analogously, we have:
$$\begin{aligned} {\mathcal {M}}(V,P)- {\mathcal {M}}(V,Q)= {\mathcal {M}}(V,P_1)- {\mathcal {M}}(V,Q_1). \end{aligned}$$
In consequence we obtain the equality:
$$\begin{aligned} \widetilde{M}((U,V),(P,Q))=\widetilde{M}((U,V),(P_1,Q_1)) \end{aligned}$$
and by symmetry
$$\begin{aligned} \widetilde{M}((U,V),(P_1,Q_1))= \widetilde{M}((U_1,V_1),(P_1,Q_1)). \end{aligned}$$
\(\square \)

Hence Proposition 6 is proved. It means that the function \(\widetilde{M}\) is well defined as a function on the vector space \(X_{\mathcal {S}}\).

It follows from Formulas (27) that \(\widetilde{M}\) is additive with respect to each variable separately. Moreover, as we have observed earlier, the function \({\mathcal {M}}\) is homogeneous for non-negative scalars. Then, using (27), we easily check that \(\widetilde{M}\) is homogeneous also for negative reals. In consequence we may state that \(\widetilde{M}\) is a bilinear form on \(X_{\mathcal {S}}\).

In the last step we check that:
$$\begin{aligned} \widetilde{M}([U,V],[U,V]) = \Psi ([U,V]) := m^{*}([U,V]) \end{aligned}$$
which means that the Lebesgue measure in the plane, more exactly the function \(m|_{\mathcal {S}}\), can be extended to a polynomial on \(X_{\mathcal {S}}\).
For the uniqueness it is sufficient to observe that when w is a homogeneous polynomial of degree two, then:
$$\begin{aligned} w(x+y)+w(x-y)= 2w(x)+2w(y). \end{aligned}$$

3 Part II

A Generalization of the Isoperimetric Inequality.

Now that we have the polynomial \(m^{*}\) and the bilinear form \(\widetilde{M}\), we may consider the problem of generalizing various properties of the classical measure m to the generalized measure \(m^{*}\). In the sequel we will call \(m^{*}\) a measure and write m instead of \(m^{*}\).

In this section we shall present a generalization of the isoperimetric inequality. The classical isoperimetric inequality, as written in the Introduction (1), has the form:
$$\begin{aligned} o^{2}(U)\ge 4\pi \cdot m(U), \end{aligned}$$
where \(U\in {\mathcal {S}}\). Since the right hand side of (29) now makes sense for \([U,V]\in X_{\mathcal {S}}\), if we want to generalize (29) we must extend the perimeter from \(\mathcal {S}\) onto \(X_{\mathcal {S}}\).
This is not hard to do. Suppose that, as above, \(U, V\in {\mathcal {S}}\). Since U and V are, in particular, convex and closed, they have well-defined perimeters. Let us denote the perimeter of the set \(U\in {\mathcal {S}}\) by o(U). It is known that the correspondence
$$\begin{aligned} o:{\mathcal {S}}\ni U \longrightarrow o(U)\in \mathbb R \end{aligned}$$
is linear on the cone \(\mathcal {S}\). This is a consequence of the Steiner formula. Namely, it is geometrically evident that:
$$\begin{aligned} o(U) = \lim _{t\rightarrow 0}\frac{m(U+tB)- m(U) - m(B)t^2}{t}, \end{aligned}$$
where B is the unit disk. Since m is a homogeneous polynomial of the second degree, generated by the form \(\widetilde{M}\) (bilinear and symmetric), it is easy to check that:
$$\begin{aligned} o(U)= 2\cdot \widetilde{M}([U,\left\{ \theta \right\} ];[B,\left\{ \theta \right\} ]) \end{aligned}$$
This leads to the following:

Proposition 8

The function
$$\begin{aligned} o:X_{\mathcal {S}}\ni [(U,V)]\longrightarrow o(U)-o(V)\in \mathbb {R} \end{aligned}$$
is well defined (i.e. does not depend on the representative of the equivalence class with respect to \(\diamond \)) and is a linear functional on \(X_{\mathcal {S}}\).


We set for \(x=[U,V]\in X_{\mathcal {S}}\)
$$\begin{aligned} o(x)=o([U,V])=2\cdot \widetilde{M}\left( [U,V];[B,\left\{ \theta \right\} ]\right) \end{aligned}$$
Then we have
$$\begin{aligned} o([U,V])= & {} 2\cdot \widetilde{M}\left( [U,\left\{ \theta \right\} ]+[\left\{ \theta \right\} ,V];[B,\left\{ \theta \right\} ]\right) \\= & {} 2\cdot \widetilde{M}\left( [U,\left\{ \theta \right\} ];[B,\left\{ \theta \right\} ]\right) + 2\cdot \widetilde{M}\left( [\left\{ \theta \right\} ,V];[B,\left\{ \theta \right\} ]\right) \\= & {} 2\cdot \widetilde{M}\left( [U,\left\{ \theta \right\} ];[B,\left\{ \theta \right\} ]\right) -2\cdot \widetilde{M}\left( [V,\left\{ \theta \right\} ];[B,\left\{ \theta \right\} ]\right) =o(U)-o(V). \end{aligned}$$
This means that the definition of the perimeter \(x\longrightarrow o(x)\) does not depend on the choice of representative and, setting \([B,\left\{ \theta \right\} ]=B\), we may write \(o(x)=2\cdot \widetilde{M}(x,B)\). Thus \(o:X_{\mathcal {S}}\longrightarrow \mathbb {R}\) is a linear functional on \(X_{\mathcal {S}}\). \(\square \)
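The identity \(o(x)=2\cdot \widetilde{M}(x,B)\) can be checked on a rectangle, where \(2{\mathcal {M}}(U,B)=m(U+B)-m(U)-m(B)\) is computable from the Steiner decomposition of \(U+B\). A hedged Python sketch (the helper names are ours, not from the paper):

```python
import math

def m_rect_plus_disk(a, b, lam):
    """Area of (a x b rectangle) + lam*(unit disk), by the Steiner
    decomposition: rectangle + edge strips + four quarter disks."""
    return a * b + 2 * (a + b) * lam + math.pi * lam ** 2

def perimeter_via_form(a, b):
    """o(U) = 2*M(U, B) = m(U + B) - m(U) - m(B), with B the unit disk."""
    return m_rect_plus_disk(a, b, 1.0) - a * b - math.pi

# The bilinear form reproduces the ordinary perimeter 2(a + b).
assert math.isclose(perimeter_via_form(3.0, 2.0), 2 * (3.0 + 2.0))
```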

Now we are ready to formulate the following generalization of the isoperimetric inequality:

Theorem 9

For each vector \(x=[(U,V)]\in X_{\mathcal {S}}\) the following inequality holds:
$$\begin{aligned} o^2([(U,V)])\ge 4 \pi \cdot m([(U,V)]) \end{aligned}$$
(let us remember that here and in the sequel we write m instead of \(m^{*}\)).

Let us observe that (34) agrees with (2). Let us also observe that this is in fact a generalization of the classical isoperimetric inequality: when V is trivial (\(V = \left\{ \theta \right\} \)), (34) reduces to (29).

The proof of Theorem 9 is not quite trivial, since now the right hand side of (34) can be negative. In other words, inequality (34) does not remain true when one replaces \(m([(U,V)])\) by its absolute value (i.e. by \(|m([(U,V)])|\)). We will need a number of lemmas.
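On differences of rectangles the inequality (34) can be tested directly, including cases where the right hand side is negative. A hedged Python sketch (the helper names are ours, not from the paper; rectangles are (width, height) pairs):

```python
import math
import random

def m_star(U, V):
    """m*([U, V]) = Psi(U, V) = 2m(U) + 2m(V) - m(U+V) for rectangles."""
    (a1, a2), (b1, b2) = U, V
    return 2 * a1 * a2 + 2 * b1 * b2 - (a1 + b1) * (a2 + b2)  # = (a1-b1)(a2-b2)

def o_star(U, V):
    """o([U, V]) = o(U) - o(V) for rectangles."""
    (a1, a2), (b1, b2) = U, V
    return 2 * (a1 + a2) - 2 * (b1 + b2)

# m*([U, V]) itself may be negative, e.g. U = (3, 1), V = (1, 2):
assert m_star((3, 1), (1, 2)) == -2

# Randomized check of o^2 >= 4*pi*m* on differences of rectangles.
random.seed(0)
for _ in range(1000):
    U = (random.uniform(0, 10), random.uniform(0, 10))
    V = (random.uniform(0, 10), random.uniform(0, 10))
    assert o_star(U, V) ** 2 >= 4 * math.pi * m_star(U, V) - 1e-9
```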

Lemma 10

Suppose that \(U, V\in {\mathcal {S}}\) and that \(V\in \mathcal {W}\) is represented as follows:
$$\begin{aligned} V=\sum _{j=1}^{k}I_j, \end{aligned}$$
where \(I_j= I(\mathbf{{v_j}},d_j)\) and \(d_j=d(I_j)\). Then we have:
$$\begin{aligned} m(U+V) = m(U) + m(V) + 2\cdot \sum _{j=1}^{k}\overline{U}(I_j)\cdot d(I_j). \end{aligned}$$
where \(\overline{U}\) is defined by (15).


We shall prove this formula by induction with respect to the number k of diangles in the representation of V. For \(k=1\), using the Cavalieri principle, we have:
$$\begin{aligned} m(U+I)= m(U) + 2\cdot \overline{U}(I) \cdot d(I). \end{aligned}$$
But in our case we have \(m(V)=m(I)=0\), hence
$$\begin{aligned} m(U+I)= m(U) +m(I) + 2\cdot \overline{U}(I)\cdot d(I) \end{aligned}$$
and this means that the Formula (35) is true for \(k=1\).
Suppose now that (35) is true for each set \(U\in \mathcal {S}\) and for each 2k-angle \(V=\sum _{j=1}^{k}I_j\). Let us fix an arbitrary \(U\in \mathcal {S}\) and \(2(k+1)\)-angle \(V=\sum _{j=1}^{k+1}I_j\). We have:
$$\begin{aligned} m(U+V)= m\left( U+\sum _{j=1}^{k+1}I_j\right) = m\left( \left( U+ \sum _{j=1}^{k}I_j\right) + I_{k+1}\right) \end{aligned}$$
Now we apply (35) in the case \(k=1\) to \(U'=U+ \sum _{j=1}^{k}I_j\) and \(V=I_{k+1}\), and we obtain that the above equals:
$$\begin{aligned} m\left( U + \sum _{j=1}^{k}I_j\right) + 2 \cdot \left( \overline{U+ \sum _{j=1}^{k}I_j}\right) (I_{k+1})\cdot dI_{k+1} \end{aligned}$$
Now we apply (35) for k (the inductive assumption) and the additivity of the width of a set [Formula (15)] with respect to addition in \(\mathcal {S}\), and we obtain that the above equals:
$$\begin{aligned}&m(U) + m\left( \sum _{j=1}^{k}I_j\right) + 2\cdot \sum _{j=1}^{k}\overline{U}(I_j)\cdot dI_j+ 2\cdot \overline{U}(I_{k+1})\cdot dI_{k+1} \\&\quad +\, 2\cdot \overline{\left( \sum _{j=1}^{k}I_j\right) }(I_{k+1})\cdot dI_{k+1}. \end{aligned}$$
Using once more the property (35) in the case \(k=1\), applied to \(U= {\sum _{j=1}^{k}I_j}\) and \(I=I_{k+1}\), we get:
$$\begin{aligned} m\left( \sum _{j=1}^{k}I_j\right) + 2\cdot \overline{\left( \sum _{j=1}^{k}I_j\right) }(I_{k+1})\cdot dI_{k+1}= m\left( \sum _{j=1}^{k+1}I_j\right) = m(V). \end{aligned}$$
Hence the whole expression equals
$$\begin{aligned} = m(U) + m(V) + 2\cdot \sum _{j=1}^{k+1}\overline{U}(I_j)dI_j . \end{aligned}$$
This ends the proof of the Lemma 10. \(\square \)
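Formula (35) can be checked exactly when U is a rectangle and V is the sum of a horizontal and a vertical diangle, so that all the sets involved are axis-parallel rectangles. A small Python verification (the variable names are ours, not from the paper):

```python
# U: a x b rectangle; V = I((1,0), d1) + I((0,1), d2) is a 2d1 x 2d2
# rectangle; then U + V is an (a + 2d1) x (b + 2d2) rectangle.
a, b, d1, d2 = 3.0, 2.0, 0.25, 1.5
m_U = a * b
m_V = (2 * d1) * (2 * d2)

# Left hand side of (35): m(U + V), computed directly.
lhs = (a + 2 * d1) * (b + 2 * d2)

# Right hand side: the width of U w.r.t. the horizontal diangle is b,
# w.r.t. the vertical diangle it is a.
rhs = m_U + m_V + 2 * (b * d1 + a * d2)

assert lhs == rhs
```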

3.1 Polygons in Singular Position

As we observed above, each polygon \(W\in {\mathcal {W}}\subset \mathcal {S}\) is a Minkowski sum of a number of diangles. Each diangle I has the form \(I=I(\mathbf{{v}},d)=[-d,d]\cdot \mathbf{{v}}\). The direction of the vector \(\mathbf{{v}}\) will be called the direction of the diangle I. We will say that two diangles are parallel when they have the same direction.

Definition 11

Let us consider two polygons from \(\mathcal {W}\), say \(U=\sum _{i=1}^{n}J_i\) and \(V=\sum _{j=1}^{k}I_j\). We will say that U and V are in singular position when there exist \(i_0\le n\) and \(j_0\le k\) such that the diangles \(J_{i_0}\) and \(I_{j_0}\) are parallel.

In other words the singular position means that at least one side of the polygon U is parallel to a side of the polygon V.

Now we will define a function which will be called a rotation function. For a given angle \(\varphi \in [0,\pi )\) we denote by \(O(\varphi )\) the rotation of the plane \(\mathbb R^2\) by the angle \(\varphi \). The image of a set \(W\in \mathcal {S}\) under \(O(\varphi )\), i.e. the set \(O(\varphi )(W)\), will be denoted by \(W^{\varphi }\). Since rotations are linear, they preserve Minkowski sums. In particular, if \(V=\sum _{j=1}^{k}I_j\), then
$$\begin{aligned} V^{\varphi }= \sum _{j=1}^{k}I_j^{\varphi }. \end{aligned}$$
Let us now fix two polygons \(U=\sum _{i=1}^{n}J_i\) and \(V=\sum _{j=1}^{k}I_j\). For \(\varphi \in [0,\pi ]\) we define a rotation function:
$$\begin{aligned} E(\varphi ) = m(U+V^{\varphi }) \end{aligned}$$
Since V is centrally symmetric, we have \(V^{\varphi +\pi } = V^{\varphi }\). It follows from this equality that the function E can be considered, if necessary, as a periodic function on the whole of \(\mathbb R\). It is also clear that for fixed U and V the function
$$\begin{aligned}\mathbb R\ni \varphi \longrightarrow E(\varphi )\in \mathbb R\end{aligned}$$
is continuous.

When the polygon \(V^{\varphi }\) changes its position with \(\varphi \) while the polygon U remains fixed, then, in general, the polygons U and \(V^{\varphi }\) are in singular position for more than one value of \(\varphi \).

We shall prove the following lemma.

Lemma 12

If \({\varphi }_0\in [0,\pi ]\) is a point, in which the function E attains its minimum, then the pair \((U,V^{\varphi _0})\) is in singular position.

It follows from (35) and (37) that
$$\begin{aligned} E(\varphi )= m(U+V^{\varphi })=m(U)+ m(V^{\varphi }) + 2\cdot \sum _{j=1}^{k}\overline{U}({I_j}^{\varphi })\cdot d({I_j}^{\varphi }). \end{aligned}$$
Hence, if \(E(\varphi _0)\le E(\varphi )\) we obtain the inequality:
$$\begin{aligned} E({\varphi }_0)= & {} m(U)+ m(V^{\varphi _0}) + 2\cdot \sum _{j=1}^{k}\overline{U}(I_{j}^{\varphi _0})\cdot d(I_{j}^{\varphi _0}) \\\le & {} E(\varphi )= m(U+V^{\varphi })=m(U)+ m(V^{\varphi }) + 2\cdot \sum _{j=1}^{k}\overline{U}(I_{j}^{\varphi })\cdot d(I_{j}^{\varphi }). \end{aligned}$$
Since \(m(V^{\varphi _0}) = m(V^{\varphi })\) (the measure m is invariant under rotations) and since \(d(I_{j}^{\varphi })\) does not depend on \(\varphi \), we conclude that the inequality \(E(\varphi _0)\le E(\varphi )\) is equivalent to the inequality:
$$\begin{aligned} \sum _{j=1}^{k}\overline{U}(I_{j}^{\varphi _0})\cdot d(I_{j})\le \sum _{j=1}^{k}\overline{U}(I_{j}^{\varphi })\cdot d(I_{j}). \end{aligned}$$
Let us denote
$$\begin{aligned} F(\varphi )= \sum _{j=1}^{k}\overline{U}(I_{j}^{\varphi })\cdot d(I_{j}). \end{aligned}$$
Clearly, since \(E(\varphi ) = m(U)+m(V)+2F(\varphi )\), the function F attains its absolute minimum at the same points as the function E, in particular at \(\varphi _0\). Hence it is sufficient to show an equivalent form of Lemma 12. Namely:

Proposition 13

If F attains its absolute minimum at \(\varphi _0\), then the pair \((U,V^{\varphi _0})\) is in singular position.

To prove Proposition 13 we will need some observations concerning the so-called interval-wise concave functions.

3.2 Interval-Wise Concave Functions

Consider a function \(H:\mathbb R\longrightarrow \mathbb R\), which is continuous and periodic with period \(\pi \). Hence, in particular \(H(0)=H(\pi )\).

Definition 14

We will say that a function H as above is interval-wise concave when there exists a sequence \((\alpha _i)_{i=0}^{k}\) such that:
  1. (i)

    \(0=\alpha _0<\alpha _1<\cdots <\alpha _k=\pi \),

  2. (ii)

    For each \(0\le i \le k-1\) the restriction of the function H to the interval \([\alpha _i,\alpha _{i+1}]\) is concave.


We will need the following simple properties of the interval-wise concave functions:

Proposition 15

(a) If H is interval-wise concave and \(\lambda \) is a non-negative scalar, then \(\lambda \cdot H\) is interval-wise concave. (b) The sum of two (or of any finite number of) interval-wise concave functions is an interval-wise concave function.


Property (a) is obvious, since the product of a concave function by a non-negative real is still concave.

In the proof of property (b) we will use language from Riemann integral theory. We will call nets the sequences of the type \(0=\alpha _0<\alpha _1<\cdots <\alpha _k=\pi \). If we have two such nets, say \(\alpha =(\alpha _i)_{0}^{k}\) and \(\beta =(\beta _j)_{0}^{m}\), then we define a net \(\gamma =(\gamma _l)_{0}^{s}\) as the union of the points of the nets \(\alpha \) and \(\beta \) with the natural ordering. In such a case we will say that \(\gamma \) is finer than \(\alpha \) (and clearly also finer than \(\beta \)), or equivalently, that \(\alpha \) is a subnet of the net \(\gamma \). It is also clear that if a function H is concave in each interval of a net \(\alpha \) and \(\gamma \) is finer than \(\alpha \), then H is concave in each interval of the net \(\gamma \). Hence for each interval-wise concave function H there exists a net \(\alpha _H\), which is maximal with respect to H in the following sense: H is concave in each subinterval of the net \(\alpha _H\), and if H is concave in the subintervals of a net \(\gamma \), then \(\gamma \) is finer than \(\alpha _H\).

Suppose now that we have two interval-wise concave functions H and G. Suppose that the function H is concave in the intervals \([\alpha _i,\alpha _{i+1}]\) of a net \(\alpha =(\alpha _i)_{0}^{k}\) and the function G is concave in the intervals \([\beta _j,\beta _{j+1}]\) of a certain net \(\beta =(\beta _j)_{0}^{m}\). Let \(\gamma =(\gamma _l)_{0}^{s}\) be the union of the nets \(\alpha \) and \(\beta \). Each subinterval \([\gamma _l,\gamma _{l+1}]\) is the intersection of subintervals of the type \([\alpha _i,\alpha _{i+1}]\) and \([\beta _j,\beta _{j+1}]\). Hence for each \(0\le r\le s-1\) the functions \(H|[\gamma _r,\gamma _{r+1}]\) and \(G|[\gamma _r,\gamma _{r+1}]\) are both concave. In consequence the function \(H+G\) is concave in the subintervals of the net \(\gamma =(\gamma _l)_{0}^{s}\), and this ends the proof of property (b). \(\square \)
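The net-merging argument can be illustrated numerically. The following sketch (a toy example of our own choosing, with two piecewise-linear, hence interval-wise concave, functions H and G) merges the two nets and checks midpoint concavity of \(H+G\) on each subinterval of the merged net:

```python
import math

# Toy interval-wise concave functions on [0, pi]: each is linear (hence
# concave) on the subintervals of its net, but not concave across its kink.
H = lambda x: abs(x - 1.0)
G = lambda x: abs(x - 2.0)

net_H = [0.0, 1.0, math.pi]
net_G = [0.0, 2.0, math.pi]

# The union of the two nets with the natural ordering ("finer" than both).
net = sorted(set(net_H) | set(net_G))

def concave_on(f, a, b, n=60, tol=1e-9):
    """Midpoint concavity of f checked on a grid of pairs in [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return all(f((x + y) / 2) + tol >= (f(x) + f(y)) / 2
               for x in xs for y in xs)

S = lambda x: H(x) + G(x)
# H + G is concave on every subinterval of the merged net ...
assert all(concave_on(S, net[i], net[i + 1]) for i in range(len(net) - 1))
# ... but not across the kink of H at 1.
assert not concave_on(S, 0.5, 1.5)
```

The second assertion shows why the merged net is needed: the sum is interval-wise concave only relative to the finer net.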

Let us observe that:

Proposition 16

Let J and I be two diangles. Then the function \(\overline{J_{I^\varphi }}\) considered on the interval \([-\frac{\pi }{2},\frac{\pi }{2}]\) is interval-wise concave.


The property of being interval-wise concave is invariant with respect to rotations of the set J and with respect to scalar multiplication. Hence without loss of generality we may assume that \(J=[-1,1]\cdot (1,0)\), and for the same reason that \(I=[-1,1]\cdot (0,1)\). In such a case, as is easy to check,
$$\begin{aligned} \overline{J_{I^\varphi }}=\overline{J_{I}}(\varphi ) = \cos (\varphi ). \end{aligned}$$
This ends the proof of Proposition 16, since the cosine function is concave in the interval \([-\frac{\pi }{2},\frac{\pi }{2}]\). \(\square \)
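Up to a positive normalizing constant, \(\overline{J_{I^\varphi }}\) can be read off from the area of the parallelogram \(J+I^{\varphi }\). The following numerical sketch (with our own normalization, so an illustration rather than the paper's exact convention) confirms the cosine law for the two segments used in the proof:

```python
import math

def parallelogram_area(u, v):
    """Area of the Minkowski sum of two segments with edge vectors u, v."""
    return abs(u[0] * v[1] - u[1] * v[0])

for phi in [0.0, 0.3, 0.7, 1.2, 1.5]:
    u = (2.0, 0.0)                        # J = [-1,1]*(1,0): a segment of length 2
    c, s = math.cos(phi), math.sin(phi)
    v = (-2.0 * s, 2.0 * c)               # I^phi: the segment [-1,1]*(0,1) rotated by phi
    # m(J + I^phi) = 4 cos(phi) on [-pi/2, pi/2]: a concave function of phi
    assert math.isclose(parallelogram_area(u, v), 4.0 * math.cos(phi), abs_tol=1e-12)
```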

Let us mention here that for each function of the type \(\overline{J_{I^\varphi }}\) there exists a point \(\alpha \) such that \(\overline{J_{I^\varphi }}\) is concave in each of the intervals \([-\frac{\pi }{2},\alpha ]\) and \([\alpha ,\frac{\pi }{2}]\), but is not concave in any neighbourhood of the point \(\alpha \); here \(\alpha \) is the point at which J is parallel to \(I^{\alpha }\).

Proposition 17

Let \(U=\sum _{k=1}^{m}J_k\) be a polygon from \(\mathcal {W}\) and let I be a diangle. Then the function \(\overline{U_{I^\varphi }}\) is interval-wise concave. In consequence the function F defined by the Formula (40) is interval-wise concave.


It follows from the properties proved above that
$$\begin{aligned} \overline{U_{I}}(\varphi ) = \overline{\sum _{k=1}^{m}({J_k})_{I}}(\varphi )= \sum _{k=1}^{m}\overline{({J_k})_I}(\varphi ). \end{aligned}$$
By Proposition 16 each of functions \(\overline{({J_k})_I}\) is interval-wise concave and this ends the proof of Proposition 17. \(\square \)

Now we are ready to prove Proposition 13 and at the same time Lemma 12.


Let \(\varphi _0\) be an argument at which the function F attains its absolute minimum. Clearly such a point exists, since the function F is continuous and periodic. Let \(\alpha =(\alpha _i)_{0}^{k}\) be a net such that F is concave in each interval \([\alpha _i, \alpha _{i+1}]\). Clearly we may assume that this net is maximal (i.e. \(\alpha = \alpha _F\)), which means that F is not concave in any neighbourhood of the points \(\alpha _i\). Since a concave function cannot attain its minimum at an interior point of the interval on which it is defined, there exists \(0<\alpha _i<\pi \) such that \(\alpha _i=\varphi _0\). But the endpoints of the intervals of the net \(\alpha =(\alpha _i)_{0}^{k}\) have the property that one of the diangles \(I_j\) is parallel to the straight line joining two successive vertices of the polygon U, i.e. is parallel to a diangle \(J_i\). This means that U and \(V^{\varphi _0}\) are in singular position. \(\square \)

3.3 The Proof of the Generalized Isoperimetric Inequality

Let us begin with the following observation.

Observation 18

Suppose that \(U\in {\mathcal {S}}\ni V\) are two polygons, where \(U=\sum _{i=1}^{n}J_i\) and \(V=\sum _{j=1}^{k}I_j\). Then there exists a pair of polygons \(U'\) and \(V'\) such that:
  1. (i)

The perimeter of the pair \((U,V)\) equals the perimeter of the pair \((U',V')\);

  2. (ii)

The joint number of sides of the pair \((U',V')\), understood as the sum of the numbers of sides of \(U'\) and \(V'\), is strictly less than the joint number of sides of the pair \((U,V)\);

  3. (iii)

The measure \(m([U,V])\) is less than or equal to the measure \(m([U',V'])\).



To prove Observation 18 we set \(U'=U\) and \(V'= V^{\varphi }\), where the angle \(\varphi \) is such that the pair \((U,V^{\varphi })\) is in singular position. More exactly, \(\varphi \) is such that the function F attains its absolute minimum exactly at \(\varphi \). Now we see that condition (i) is fulfilled, since \(O(\varphi )\) is an isometry.

To prove (iii) we observe that
$$\begin{aligned} m([U,V]) = 2\cdot m(U) + 2\cdot m(V) - m(U+V) \end{aligned}$$
and
$$\begin{aligned} m([U',V'])= & {} 2\cdot m(U') + 2\cdot m(V') - m(U'+V')\\= & {} 2m(U) + 2m(V^{\varphi }) - m(U+V^{\varphi }). \end{aligned}$$
Hence
$$\begin{aligned} m([U,V])\le m([U',V'])\Longleftrightarrow m(U+V^{\varphi })\le m(U+V). \end{aligned}$$
Moreover
$$\begin{aligned} m(U+V^{\varphi })= m(U) + m(V^{\varphi }) + 2\cdot F(\varphi ) \end{aligned}$$
and
$$\begin{aligned} m(U+V) = m(U) + m(V) + 2\cdot F(0), \end{aligned}$$
so
$$\begin{aligned} m(U+V^{\varphi })\le m(U+V)\Longleftrightarrow F(\varphi )\le F(0). \end{aligned}$$
The last inequality is true because of the choice of \(\varphi \).

Moreover we know that \(U'\) and \(V'\) have a pair of parallel sides. This means that there exists a polygon \(U''\) with \(2n-2\) sides and a diangle I generated by a unit vector such that \(U'= U''+ d_1\cdot I\), and there exists a polygon \(V''\) with \(2k-2\) sides such that \(V'= V'' +d_2\cdot I\). Without loss of generality we may assume that \(d_2\le d_1\), since in the opposite case the argument is analogous. If \(d=d_1-d_2\), then the pair \((U',V')\) is equivalent to the pair \((U''+d\cdot I, V'')\). But the joint number of sides of this last pair is strictly less than the joint number of sides of the pair \((U,V)\). This ends the proof of (ii) and in consequence the proof of Observation 18. \(\square \)

Observation 19

Let U and V be two polygons as in Observation 18. Then there exists a polygon W such that \(o([U,V])= o(W)\) and \(m([U,V])\le m(W)\).


We can continue the reduction of the joint number of sides of the pair \((U,V)\) described in the proof of Observation 18, which preserves the perimeter and does not decrease the measure, until one of the successively constructed polygons becomes trivial. In this case the last of the constructed pairs of the type \((U',V')\) is equivalent to a pair \((W,\left\{ 0\right\} )\). This ends the proof of Observation 19. \(\square \)

Observation 20

For each pair of polygons U, V from \(\mathcal {W}\) the following inequality holds:
$$\begin{aligned} (o(U)-o(V))^2\ge 4\pi \cdot m([U,V]). \end{aligned}$$


Let us fix a pair of polygons \((U,V)\). Using Observation 19 we choose a polygon W satisfying the properties formulated there. Then we have:
$$\begin{aligned} (o(U)-o(V))^2 = o^2(W) \ge 4\pi \cdot m(W) \ge 4\pi \cdot m([U,V]). \end{aligned}$$
The inequality \(o^2(W) \ge 4\pi \cdot m(W)\) follows from the classical isoperimetric inequality. \(\square \)

Now we are able to finish the proof of the generalized isoperimetric inequality formulated in Theorem 9.


Let us fix a vector \([U,V]\in X_{\mathcal {S}}\). It is known that polygons are dense in \(\mathcal {S}\) with respect to the Hausdorff distance (i.e. \(\mathcal {W}\) is dense in \(\mathcal {S}\)). We choose two sequences of polygons \((U_k)_0^{\infty }\) and \((V_k)_0^{\infty }\) such that \(U_k\longrightarrow U\) and \(V_k\longrightarrow V\) in the sense of the Hausdorff metric. Moreover, the perimeter functional o and the measure m are continuous with respect to the considered convergence [4]. Since, by Observation 20, the generalized isoperimetric inequality is true for polygons (pairs from \(\mathcal {W}\)), we have the following sequence of inequalities:
$$\begin{aligned} o^{2}([U_k,V_k])\ge 4\pi m([U_k,V_k]). \end{aligned}$$
or, more exactly,
$$\begin{aligned} (o(U_k)-o(V_k))^{2}\ge 4\pi (2m(U_k)+2m(V_k)-m(U_k+V_k)). \end{aligned}$$
Passing to the limit and using the independence of o and m of the choice of representatives, we obtain Theorem 9. \(\square \)
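The inequality just proved can be checked numerically for concrete polygons. The sketch below (an illustration with sets of our own choosing, not the paper's construction) computes Minkowski sums of centrally symmetric convex polygons as the convex hull of pairwise vertex sums and verifies \((o(U)-o(V))^2\ge 4\pi (2m(U)+2m(V)-m(U+V))\):

```python
import math
from itertools import product

def hull(pts):
    """Andrew's monotone chain convex hull, counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace formula."""
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % n][1]
                         - poly[(i+1) % n][0]*poly[i][1] for i in range(n)))

def perim(poly):
    n = len(poly)
    return sum(math.dist(poly[i], poly[(i+1) % n]) for i in range(n))

def mink_sum(P, Q):
    """Minkowski sum of convex polygons = hull of pairwise vertex sums."""
    return hull([(p[0]+q[0], p[1]+q[1]) for p, q in product(P, Q)])

U = [(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]  # symmetric hexagon
V = [(1, 0), (0, 1), (-1, 0), (0, -1)]                     # symmetric diamond

lhs = (perim(U) - perim(V)) ** 2
rhs = 4 * math.pi * (2*area(U) + 2*area(V) - area(mink_sum(U, V)))
assert lhs >= rhs > 0
```

For this pair \(m(U+V)=26\), so \(m([U,V])=2\cdot 12+2\cdot 2-26=2>0\), and the inequality holds with room to spare.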

3.4 The Problem of Equality in the Generalized Isoperimetric Inequality

It is well known, that in the classical isoperimetric inequality (1)
$$\begin{aligned} o^2(U)\ge 4\pi m(U), \end{aligned}$$
where \(U\in { \mathcal {S}}\), the equality holds if and only if U is a disc. One may say equivalently that \(o^2(U) = 4\pi m(U)\) if and only if \(U = \lambda B\), where B is the unit disc and \(\lambda \) is a non-negative real. We shall prove an analogous result for the generalized isoperimetric inequality (2). Namely, we have the following:

Theorem 21

The equality in the generalized isoperimetric inequality \(o^2(x) \ge 4\pi m(x)\) (which is valid, as we have proved above for \(x\in X_{\mathcal {S}}\)) holds if and only if x belongs to the one dimensional subspace generated by the unit disc B. In other words
$$\begin{aligned} o^2(x)=4\pi m(x) \end{aligned}$$
if and only if there exists a real (not necessarily positive) \(\lambda \in \mathbb R\) such that \(x=\lambda B\).

We start by recalling a well-known result concerning quadratic forms.

Lemma 22

Suppose that \(\varphi : X\longrightarrow \mathbb R\) is a quadratic form such that \(\varphi (x)\ge 0\) for each x, and let \(\phi :X\times X\longrightarrow \mathbb R\) be a bilinear, symmetric form such that \(\phi (x,x)=\varphi (x)\). Then the following (Schwarz) inequality holds:
$$\begin{aligned} |\phi (x,y)|\le \sqrt{\varphi (x)}\cdot \sqrt{\varphi (y)}. \end{aligned}$$
Let us consider (for \(x\in X_{\mathcal {S}}\)) the so-called deficit term:
$$\begin{aligned} D(x)= o^2(x)-4\pi m(x). \end{aligned}$$
The function D is a quadratic form on \({X}_{\mathcal {S}}\) and by the generalized isoperimetric inequality we have \(D(x)\ge 0\). Let \(\varepsilon (x,y)\) be a bilinear, symmetric form generating D. It is easy to check, that for \(x=[U,V]\) and \(y=[P,Q]\) we have:
$$\begin{aligned} \varepsilon ([U,V];[P,Q])= & {} o([U,V])\cdot o([P,Q]) - 2\pi (m(U+P)\nonumber \\&+\,m(V+Q)- m(U+Q)- m(V+P)) \end{aligned}$$
Then by (41) we have:

Proposition 23

For the quadratic form D and the bilinear form \(\varepsilon \) defined by (42) and (43) the following inequality holds:
$$\begin{aligned} \varepsilon (x,y)\le \sqrt{D(x)}\cdot \sqrt{D(y)}. \end{aligned}$$

Now we shall prove the next lemma. Namely

Lemma 24

Suppose that \(x\in {\mathcal {S}} \ni y\) are such that:
  1. (a)

    \(o(x)=o(y)\),

  2. (b)

    \(D(x-y)=0\).

Then x and y are homothetic, i.e. there exists \(\lambda \in \mathbb R\) such that \(x=\lambda y\).


Suppose that x and y are as above. It follows from (b) that:
$$\begin{aligned} 0=D(x-y)= D(x)-2\varepsilon (x,y)+D(y), \end{aligned}$$
hence
$$\begin{aligned} 2\varepsilon (x,y)= D(x)+D(y), \end{aligned}$$
and since \(D\ge 0\), by Proposition 23 we have:
$$\begin{aligned} 0\le D(x)+D(y)=2\varepsilon (x,y)\le 2\sqrt{D(x)}\cdot \sqrt{D(y)}. \end{aligned}$$
Thus
$$\begin{aligned} \left( \sqrt{D(x)}-\sqrt{D(y)}\right) ^2\le 0, \end{aligned}$$
and in consequence \(D(x)=D(y)\).
Now we use the condition (a), i.e. \(o(x)=o(y)\). We have:
$$\begin{aligned} 0=D(x-y)= o^2(x-y)-4\pi m(x-y) \end{aligned}$$
and since \(o(x-y)=0\), we get \(m(x-y)=0\). This means that
$$\begin{aligned} 2m(x)+2m(y)-m(x+y)=0. \end{aligned}$$
The equality \(D(x)=D(y)\) implies
$$\begin{aligned} o^2(x)-4\pi m(x)=o^2(y)-4\pi m(y). \end{aligned}$$
Thus, since \(o(x)=o(y)\), we get \(m(x)=m(y)\). Now, since \(m(x)\ge 0\) and \(m(y)\ge 0\) (because \(x\in \mathcal {S}\ni y\)), we have:
$$\begin{aligned} 4m(x)=2m(x)+ 2m(y) = m(x+y), \end{aligned}$$
and hence
$$\begin{aligned} 2\sqrt{m(x)}= \sqrt{m(x+y)}. \end{aligned}$$
But x and y are from \(\mathcal {S}\), so we may apply the Brunn–Minkowski inequality, which gives:
$$\begin{aligned} \sqrt{m(x+y)}\ge \sqrt{m(x)} + \sqrt{m(y)} = 2\sqrt{m(x)}. \end{aligned}$$
In consequence, for the considered vectors x and y equality holds in the Brunn–Minkowski inequality. It is known that in such a case x and y are homothetic. \(\square \)

Now we are ready to prove Theorem 21.


Suppose that \(w=[U,V]\) is such that \(D([U,V])=0\). Let \(z=w-rB\), where r is chosen so that \(o(z)=0\); in other words, r is such that \(o([U,V])=2\pi r\). We shall calculate the deficit term of z, namely
$$\begin{aligned} D(z)=D(w)-2r\varepsilon (w,B)+r^2D(B). \end{aligned}$$
Clearly \(D(B)=0\) and by our assumption \(D(w)=0\), hence
$$\begin{aligned} D(z)=-2r\varepsilon (w,B). \end{aligned}$$
We know the exact Formula (43) for \(\varepsilon (u,v)\). Namely
$$\begin{aligned}&\varepsilon ([U,V];[B,\left\{ {\theta }\right\} ])= o([U,V])\cdot o([B,\left\{ {\theta }\right\} ])\\&\qquad -\,2\pi (m(U+B)+m(V)-m(U)-m(V+B))\\&\quad =2\pi o([U,V])-2\pi (m(U)+o(U)+\pi +m(V)-m(U) - m(V) - o(V) -\pi )\\&\quad =2\pi o([U,V])-2\pi o([U,V])=0. \end{aligned}$$
But we can write \(z=[U,V]-r[B,\left\{ {\theta }\right\} ]\), thus
$$\begin{aligned} z=[U,V+rB]. \end{aligned}$$
Hence we have \(D(U-(V+rB))=0\) and \(o(U-(V+rB))=0\), and both \(x=U\) and \(y=V+rB\) are from \(\mathcal {S}\). It then follows from Lemma 24 that for some real \(\lambda \) we have \(V+rB=\lambda U\). But \(o(U)-o(V) = 2\pi r\), and from the linearity of o we have \(o(V)+ 2\pi r= \lambda o(U)\). In consequence \(o(V) + o(U)- o(V) = \lambda o(U)\). This means that \(\lambda = 1\), i.e. \(U=V+rB\), or \([U,V]=r[B,\left\{ {\theta }\right\} ]\), and this ends the proof of Theorem 21. \(\square \)

4 Part III

4.1 A Generalization of the Brunn–Minkowski Inequality

The classical Brunn–Minkowski inequality, used in the previous section, says in particular that for each \(U\in \mathcal {S}\ni V\) the following inequality holds:
$$\begin{aligned} \sqrt{m(U+V)}\ge \sqrt{m(U)}+\sqrt{m(V)}. \end{aligned}$$
In this section we will formulate and prove an inequality (46), which may be considered as the Brunn–Minkowski inequality for the generalized measure \(m^{*}\).

4.2 Some Remarks on Quadratic Forms

We shall start this chapter by recalling some properties of quadratic forms on real vector spaces.

1. Let X be a real vector space and let \(\eta :X\longrightarrow \mathbb R\) be a quadratic form on X, i.e. \(\eta \) is a homogeneous polynomial of the second degree. This means that there exists a bilinear, symmetric form \(\widetilde{N}:X\times X\longrightarrow \mathbb R\) such that \(\eta (x) = \widetilde{N}(x,x)\).

2. In the notation as above, a form \(\eta \) is said to be positively (negatively) defined when it takes only non-negative (non-positive) values and \(\eta (x)=0\) implies \(x=\theta \). We will say also that \(\eta \) is elliptic. Equivalently, “ellipticity” means that the set of values of \(\eta \) is \([0,\infty )\) or \((-\infty ,0]\) and \(\eta \) vanishes only at \(\theta \).

We will also consider indefinite forms, i.e. forms for which \(\eta :X\longrightarrow \mathbb R\) is surjective. In such a case we will say that the form \(\eta \) is hyperbolic. Clearly, a form \(\eta \) is hyperbolic if and only if there exist two vectors \(x\in X\ni y\) such that \(\eta (x)>0\) and \(\eta (y)<0\).

3. In this paper we will consider quadratic forms which are hyperbolic, but of a special type, i.e. satisfying an additional property. Before defining this property, let us observe that when we have a quadratic form \(\eta :X\longrightarrow \mathbb R\) and we consider any subspace \(Y\subset X\), the restriction \(\eta |_{Y}\) is a quadratic form on Y. If \(\eta \) is of elliptic type, then for each Y the form \(\eta |_{Y}\) is elliptic. But in the case when \(\eta \) is hyperbolic, \(\eta |_{Y}\) in general need not be hyperbolic.

4. Let \(u\in X \ni v\) be two linearly independent vectors. Let \(Y(u,v):=\mathrm{Lin}(u,v)\) be the two dimensional subspace spanned by u and v. We will say that a quadratic form \(\eta \) is \((u,v)\)-hyperbolic when \(\eta |_{Y(u,v)}\) is hyperbolic. Let \(\eta \) be a quadratic form on X. We will prove the following lemma:

Lemma 25

Suppose that the following conditions are fulfilled:
  1. 1.

There is a vector \(b\in X\) such that the form \(\eta \) is positively defined on the one dimensional subspace \(\mathbb R\cdot b\), and

  2. 2.

There is a linear functional \(b^{*}\) on X such that \(\eta \) is negatively defined on the subspace \(Y= \ker {b}^{*}\). Then the form \(\eta \) is hyperbolic on each plane \(L(u,v)\) generated by two linearly independent vectors u and v such that \(\eta (u)> 0\) or \(\eta (v)> 0\).



Suppose that u and v are two linearly independent vectors such that, say, \(\eta (u)> 0\). We check that there exists a non-zero vector \(w\in L(u,v)\) such that \({b}^{*}(w)=0\). If \({b}^{*}(v)=0\), we may take \(w=v\); otherwise, considering the straight line \(\mathbb R \ni t \longrightarrow u+t\cdot v\), it is sufficient to take \(w=u+t\cdot v\) with
$$\begin{aligned} t=\frac{-\,{b}^{*}(u)}{{b}^{*}(v)}. \end{aligned}$$
It follows from our assumptions that \(\eta (w)< 0\), and thus \(L(u,v)\) contains two vectors, namely u and w, such that \(\eta (u)>0\) and \(\eta (w)<0\). This is sufficient for the form \(\eta \) to be hyperbolic on \(L(u,v)\). \(\square \)

Observation 26

Let \(u\in X\ni v\) be such, that \(\eta (u)>0\) and \(\eta (v)>0\), and let \(\widetilde{N}\) be a bilinear symmetric form generating \(\eta \) (i.e. \(\widetilde{N}(x,x)= \eta (x)\)). Then the following inequality holds:
$$\begin{aligned} {\widetilde{N}}^{2}(u,v)\ge \eta (u)\cdot \eta (v). \end{aligned}$$


Indeed, we know from Lemma 25 that the quadratic equation \(\eta (u+t\cdot v)=0\) has a solution. This equation can be written in the form:
$$\begin{aligned} \eta (u)+2\cdot t\cdot \widetilde{N}(u,v) + t^{2}\cdot \eta (v) = 0. \end{aligned}$$
Hence we have
$$\begin{aligned}\Delta = (2\widetilde{N}(u,v))^{2} - 4\eta (u)\eta (v) \ge 0\end{aligned}$$
and this ends the proof. \(\square \)
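For the model hyperbolic form \(\eta (x_1,x_2)=x_1^2-x_2^2\) on \(\mathbb R^2\) the reverse Schwarz inequality of Observation 26 can be probed directly; in this model it reduces to \((u_1v_2-u_2v_1)^2\ge 0\). A quick numerical sketch (our own toy example, not the form m of the paper):

```python
import random
random.seed(0)

# Model hyperbolic form on R^2 and its generating bilinear symmetric form.
eta = lambda x: x[0]**2 - x[1]**2
N = lambda u, v: u[0]*v[0] - u[1]*v[1]

for _ in range(1000):
    # |x1| > 1 >= |x2| guarantees eta > 0, as required in Observation 26.
    u = (random.uniform(1.5, 3.0), random.uniform(-1.0, 1.0))
    v = (random.uniform(1.5, 3.0), random.uniform(-1.0, 1.0))
    assert eta(u) > 0 and eta(v) > 0
    # reverse Schwarz inequality: N(u,v)^2 >= eta(u) * eta(v)
    assert N(u, v)**2 >= eta(u) * eta(v) - 1e-9
```

Note the inequality runs in the opposite direction to the elliptic (classical Schwarz) case of Lemma 22.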

4.3 A Corollary for the Generalized Lebesgue Measure

Let, as above, m denote the Lebesgue measure on \(X_{\mathcal {S}}\), which is, as we know, a quadratic form. Let B denote the unit disc. It is easy to check that m satisfies the assumptions of Lemma 25. Indeed, m is positively defined on the one dimensional subspace generated by \(b=B\), and as the functional \({b}^{*}\) we take the perimeter functional \([U,V]\longrightarrow o(U)-o(V)\). If \(o([U,V])=0\), then it follows from the generalized isoperimetric inequality that \(m([U,V])\le 0\), and \(m([U,V])=0\) only when \([U,V]=0\). Hence m is negatively defined on the kernel of the functional o.

Now we will present a more general version of the Brunn–Minkowski inequality. Let [U,V] and [P,Q] be two vectors from \(X_{\mathcal {S}}\) having positive measure (i.e. such that \(m([U,V])>0\) and \(m([P,Q])>0\)). It follows from the considerations made above, applied to \(\eta =m\) and \(\widetilde{N}=\widetilde{M}\), that
$$\begin{aligned} \widetilde{M}^{2}([U,V],[P,Q])\ge m([U,V])\cdot m([P,Q]). \end{aligned}$$
or, using only the measure m on \(\mathcal {S}\) and the Minkowski addition, we may write the inequality (46) in the following form:
$$\begin{aligned}&(m(U+P)+m(V+Q)-m(U+Q)-m(V+P))^{2}\\&\quad \ge (2m(U)+2m(V)-m(U+V))\cdot (2m(P)+2m(Q)-m(P+Q)). \end{aligned}$$
Inequality (46) may be considered as a generalization of the classical Brunn–Minkowski inequality. Indeed, for any \(x\in X_{\mathcal {S}}\ni y\) we have \(2\cdot \widetilde{M}(x,y)=m(x+y)-m(x)-m(y)\). Hence (46) can now be rewritten in the following form:
When \(m(x)>0\) and \(m(y)>0\) then
$$\begin{aligned} \left( \frac{1}{2}(m(x+y)-m(x)-m(y))\right) ^{2}\ge m(x)\cdot m(y) \end{aligned}$$
The classical Brunn–Minkowski inequality, written in the form
$$\begin{aligned} \sqrt{m(x+y)}\ge \sqrt{m(x)}+ \sqrt{m(y)}, \end{aligned}$$
gives after squaring
$$\begin{aligned} m(x+y)\ge m(x)+ m(y) + 2\sqrt{m(x)\cdot m(y)} \end{aligned}$$
and equivalently
$$\begin{aligned} \left( \frac{1}{2}(m(x+y)-m(x)-m(y))\right) ^2\ge m(x)\cdot m(y), \end{aligned}$$
which is identical with (47).
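For axis-parallel rectangles all three measures in (47) are explicit (the Minkowski sum of \([0,a]\times [0,b]\) and \([0,c]\times [0,d]\) is \([0,a+c]\times [0,b+d]\)), so both the classical inequality and the form (47) can be checked directly; (47) then reduces to the AM-GM inequality \(ad+bc\ge 2\sqrt{abcd}\). A numerical sketch with random rectangles:

```python
import math
import random
random.seed(1)

for _ in range(1000):
    a, b, c, d = (random.uniform(0.1, 5.0) for _ in range(4))
    mx, my, mxy = a * b, c * d, (a + c) * (b + d)   # m(x), m(y), m(x+y)
    # classical Brunn-Minkowski inequality
    assert math.sqrt(mxy) + 1e-12 >= math.sqrt(mx) + math.sqrt(my)
    # the equivalent form (47)
    assert (0.5 * (mxy - mx - my))**2 + 1e-9 >= mx * my
```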

It is clear that (47) is true also when at least one of the vectors x, y has non-negative measure. However, (47) may fail when both x and y have negative measure.

It is also easy to check that equality in the generalized Brunn–Minkowski inequality (46) holds if and only if \(x=[U,V]\) and \(y=[P,Q]\) are homothetic. Namely, let \(\mathbb {L}\) be a two dimensional subspace of \(X_{\mathcal {S}}\). Then \(\mathbb {L}\) has non-trivial intersection with the kernel of the perimeter functional o. This implies that for at least one vector \(w\in \mathbb {L}\) we have \(m(w)<0\). If we know that for some vector \(u\in \mathbb {L}\) we have \(m(u)>0\), then m restricted to \(\mathbb {L}\) is a hyperbolic form on \(\mathbb {L}\). This means that there exists a linear isomorphism
$$\begin{aligned} T:{\mathbb {R}}^2\ni (x_1,x_2)\longrightarrow T(x_1,x_2)\in \mathbb {L} \end{aligned}$$
such that
$$\begin{aligned} m(T(x_1,x_2))=x_1^2-x_2^2. \end{aligned}$$
The equality in the generalized Brunn–Minkowski inequality holds if and only if the quadratic trinomial \(f(t)=m(x+t\cdot y)\) has exactly one root. But this is possible only when x and y are linearly dependent.

5 Part IV

Connection to Hilbert Space.

5.1 Definition of an Inner Product

We shall now return to the space \( X_{\mathcal {S}}\). We define on this space a bilinear form given by the following formula:
$$\begin{aligned} <[U,V];[P,Q]> = 2o([U,V])\cdot o([P,Q]) - 4\pi \cdot \widetilde{M}([U,V];[P,Q]) \end{aligned}$$
The form (48) is in fact bilinear, since the perimeter functional o is linear and the bilinearity of \(\widetilde{M}\) was proved in Part I.
We observe now that this form is positively defined on \(X_\mathcal {S}\). Indeed, if
$$\begin{aligned} 0= & {} <[U,V];[U,V]> = 2o^2([U,V])- 4\pi \cdot \widetilde{M}([U,V];[U,V]) \\= & {} o^2([U,V]) + (o^2([U,V]) - 4\pi \cdot m([U,V])) \\= & {} o^2([U,V]) + D([U,V]), \end{aligned}$$
where D is the deficit term defined by Formula (42), then, both terms being non-negative, \(o^2([U,V])=0\) and \(D([U,V])=0\). It follows from Theorem 21 that \([U,V]= r\cdot B\), where B is the unit disc. Since \(0=o^2([U,V])= r^2o^2(B)\), we get \(r=0\) and in consequence \([U,V]=0\).
Hence \((X_{\mathcal {S}}; <;>)\) is a unitary space, which after completion (if necessary) gives a model of a separable Hilbert space. If one wants the unit disc to have norm 1, a renorming coefficient is needed. Namely, such a norm has the form:
$$\begin{aligned} ||[U,V]||^{2} = \frac{1}{4\pi ^{2}}(2o^{2}([U,V])-4\pi \cdot m([U,V])). \end{aligned}$$
The corresponding inner product has the form
$$\begin{aligned} <[U,V];[P,Q]>= & {} \frac{1}{4\pi ^{2}}(2\,o([U,V])\cdot o([P,Q])- 2\pi (m(U+P)\nonumber \\&+\,m(V+Q)-m(U+Q)-m(V+P))). \end{aligned}$$
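As a numerical sanity check of the normalization (an illustrative sketch in which the unit disc B is approximated by a regular polygon), the vector \([B,\{\theta \}]\) indeed has norm 1 under (49):

```python
import math

n = 4096  # regular n-gon approximation of the unit disc B
V = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
     for k in range(n)]
o_B = sum(math.dist(V[i], V[(i + 1) % n]) for i in range(n))   # ~ o(B) = 2*pi
m_B = 0.5 * abs(sum(V[i][0] * V[(i + 1) % n][1]
                    - V[(i + 1) % n][0] * V[i][1]
                    for i in range(n)))                        # ~ m(B) = pi
# norm (49): ||[B,{0}]||^2 = (2 o(B)^2 - 4 pi m(B)) / (4 pi^2)
#          -> (8 pi^2 - 4 pi^2) / (4 pi^2) = 1
norm_sq = (2 * o_B**2 - 4 * math.pi * m_B) / (4 * math.pi**2)
assert abs(norm_sq - 1.0) < 1e-5
```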

5.2 The Constructed Space is an RKHS

The elements of the space \(X_{\mathcal {S}}\) are the equivalence classes of pairs (U,V) of convex and centrally symmetric sets U and V. It appears that the vectors from \(X_{\mathcal {S}}\) may also be considered as periodic and continuous real functions on \(\mathbb R\). More precisely, let \(\mathcal {F}\) denote the space of all periodic (with period \(\pi \)) continuous real functions, equipped with the standard addition and scalar multiplication. Let \(\varphi \in [0,\pi ]\) and let \(I^{\varphi }\) be the diangle whose argument is \(\varphi \). We have considered above the width functionals associated with a diangle, defined in Sect. 2.2. Using these functionals we can prove the following:

Proposition 27

The map
$$\begin{aligned} \omega :X_{\mathcal {S}}\ni [U,V]\longrightarrow \overline{U}(I^{\varphi })-\overline{V}(I^{\varphi }):=[U,V](\varphi )\in \mathcal {F}, \end{aligned}$$
is linear and injective.


The independence of the choice of a representative follows directly from the linearity of the width functionals on \(\mathcal {S}\) and from the definition of the equivalence relation \(\diamond \). The injectivity is a consequence of the Radström Lemma [6], and the linearity of \(\omega \) follows directly from the definitions of addition and scalar multiplication. Continuity and periodicity are also easy to check. \(\square \)

It will be more convenient to consider the vectors from \(X_{\mathcal {S}}\) as continuous functions f on the interval \(\Delta =[0,\pi ]\) such that \(f(0)=f(\pi )\). We shall prove that the space \(X_{\mathcal {S}}\) is a reproducing kernel Hilbert space (RKHS for short). Reproducing kernel Hilbert spaces were discovered at the beginning of the twentieth century by Zaremba. A general theory of RKHS was formulated by Aronszajn in [1]. An elegant introduction to this theory is in Szafraniec's book [7]. The necessary definitions concerning RKHS are to be found in [5].

Let us fix \(\varphi \in [0,\pi ]\) and consider a function
$$\begin{aligned} k_{\varphi }:[0,\pi ]\ni \psi \longrightarrow k_{\varphi }(\psi )=\left[ 2B,\frac{\pi }{2}I^{\varphi }\right] (\psi ), \end{aligned}$$
where B is the unit disc and \(I^{\varphi }\) is the diangle defined by the vector \(\mathbf{{v}}=(\cos \varphi , \sin \varphi )\).

Proposition 28

For each \([U,V]\in X_{\mathcal {S}}\) the following equality holds:
$$\begin{aligned} \overline{U}(\varphi )-\overline{V}(\varphi ) = <[U,V];\left[ 2B,\frac{\pi }{2}I^{\varphi }\right] >. \end{aligned}$$
Let us denote \(d=\frac{\pi }{2}\) and \(I= I^{\varphi }\). Using Formula (50) we obtain:
$$\begin{aligned}&<[U,V];\left[ 2B,\frac{\pi }{2}I^{\varphi }\right]>= <[U,V];[2B,dI]> \\&\quad =\frac{1}{4\pi ^{2}}(2\,o([U,V])\cdot o([2B,dI])\\&\qquad -\, 2\pi (m(U+2B)+m(V+dI)-m(U+dI)-m(V+2B)))\\&\quad =\frac{1}{4\pi ^{2}}(2(o(U)-o(V))\cdot 2\pi - 2\pi (m(U)+4\pi + 2o(U) -m(V)-4\pi - 2o(V)\\&\qquad +\, m(V) + 2\cdot 2d \cdot \overline{V}(I) - m(U) - 2\cdot 2d \cdot \overline{U}(I)))\\&\quad = \frac{1}{4\pi ^{2}}(4\pi (o(U)-o(V))- 2\pi (2(o(U)-o(V))-2\pi (\overline{U}(I)-\overline{V}(I))))\\&\quad =\frac{1}{4\pi ^{2}}(4\pi (o(U)-o(V))-4\pi (o(U)-o(V)) + 4\pi ^{2}(\overline{U}(I)-\overline{V}(I)))\\&\quad =\overline{U}(I)-\overline{V}(I). \end{aligned}$$
In the language of RKHS the functions \(k_{\varphi }\) are called kernel functions, and Proposition 28 asserts precisely that the space \(X_{\mathcal {S}}\) with the inner product given by (50) has the reproducing property. Now we shall calculate the reproducing kernel of this space. As we know, the reproducing kernel is, in our case, a function \(K:\Delta \times \Delta \longrightarrow \mathbb R\) given by the formula:
$$\begin{aligned} K(\varphi ,\psi )=<k_{\varphi },k_{\psi }>. \end{aligned}$$
Let us calculate:
$$\begin{aligned}&<k_{\varphi },k_{\psi }> = <\left[ 2B,\frac{\pi }{2}I^{\varphi }\right] ; \left[ 2B,\frac{\pi }{2}I^{\psi }\right] >\\&\quad = \frac{1}{4\pi ^{2}}\left( 2\cdot 2\pi \cdot 2\pi - 4\pi \cdot \frac{1}{2}\cdot \left( m(2B+2B)+m\left( \frac{\pi }{2}I^{\varphi }+\frac{\pi }{2}I^{\psi }\right) \right. \right. \\&\qquad -\,m\left( 2B+\frac{\pi }{2}I^{\varphi }\right) -\left. \left. m\left( 2B+\frac{\pi }{2}I^{\psi }\right) \right) \right) \\&\quad =\frac{1}{4\pi ^{2}}\left( 8\pi ^{2}-2\pi \left( 16\pi + \frac{\pi ^{2}}{4}\cdot 4\cdot \sin |\varphi -\psi |-4\pi -4\pi -4\pi -4\pi \right) \right) \\&\quad =\frac{1}{4\pi ^{2}}\left( 8\pi ^{2}-2\pi ^{3}\cdot \sin |\varphi -\psi |\right) \\&\quad =2-\frac{\pi }{2}\cdot \sin |\varphi -\psi |. \end{aligned}$$
Hence we have proved the following:

Theorem 29

The space \((X_{\mathcal {S}};<;>)\) is a reproducing kernel Hilbert space on \([0,\pi ]\) and its kernel is given by the formula
$$\begin{aligned} K(\varphi ,\psi )=2-\frac{\pi }{2}\cdot \sin |\varphi -\psi |. \end{aligned}$$
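A reproducing kernel is necessarily positive semidefinite: every quadratic form \(\sum _{i,j}c_ic_jK(t_i,t_j)\) is non-negative. This can be probed numerically for the kernel above (a random spot check, of course not a proof):

```python
import math
import random
random.seed(2)

K = lambda s, t: 2.0 - (math.pi / 2.0) * math.sin(abs(s - t))

# Random points in [0, pi] and random coefficient vectors.
ts = [random.uniform(0.0, math.pi) for _ in range(8)]
for _ in range(500):
    cs = [random.uniform(-1.0, 1.0) for _ in ts]
    q = sum(ci * cj * K(si, sj)
            for ci, si in zip(cs, ts) for cj, sj in zip(cs, ts))
    # the quadratic form built from a reproducing kernel is non-negative
    assert q >= -1e-9
```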
It follows from the reproducing property that each evaluation functional is bounded. In our case this means that for each \(\varphi \in [0,\pi ]\) there exists a constant \(C(\varphi )\) (in general depending on \(\varphi \)) such that for each \([U,V]\in X_{\mathcal {S}}\):
$$\begin{aligned} |E_{\varphi }([U,V])|= |\overline{U}(\varphi )-\overline{V}(\varphi )|\le C(\varphi )\cdot ||[U,V]||. \end{aligned}$$
Since, as we have observed in Proposition 28, the evaluation functionals are given by the formula
$$\begin{aligned} E_{\varphi }([U,V])=\overline{U}(\varphi )-\overline{V}(\varphi ) = <[U,V];\left[ 2B,\frac{\pi }{2}I^{\varphi }\right] >, \end{aligned}$$
then, using the Schwarz inequality, we have
$$\begin{aligned} |E_{\varphi }([U,V])|\le ||[U,V]||\cdot ||\left[ 2B,\frac{\pi }{2}I^{\varphi }\right] ||\le \sqrt{2}||[U,V]||. \end{aligned}$$
This follows from the equality
$$\begin{aligned} \left| \left| \left[ 2B,\frac{\pi }{2}I^{\varphi }\right] \right| \right| ^{2}=2, \end{aligned}$$
which is easy to check. We see that the constant \(C(\varphi )\) does not depend on \(\varphi \), which is clear, since the norm in \(X_{\mathcal {S}}\) is invariant under rotations of the plane. This fact has an important consequence. It is known that the space \(X_{\mathcal {S}}\) can be equipped with another norm, given by the formula:
$$\begin{aligned} ||[U,V]||_{c}=\sup \left\{ |\overline{U}(\varphi )-\overline{V}(\varphi )|:\varphi \in [0,\pi ]\right\} . \end{aligned}$$
This is a norm on \(X_{\mathcal {S}}\) and, as can easily be proved,
$$\begin{aligned} ||[U,V]||_{c}=\rho _{H}(U,V) \end{aligned}$$
where \(\rho _{H}\) is the Hausdorff distance in the space of compact sets. It follows from inequality (52) that for each \([U,V]\in X_{\mathcal {S}}\) we have
$$\begin{aligned} ||[U,V]||_{c}\le \sqrt{2}||[U,V]||. \end{aligned}$$
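The uniform constant \(\sqrt{2}\) can also be read off directly from the kernel: the representer of \(E_{\varphi }\) has squared norm \(K(\varphi ,\varphi )=2\) for every \(\varphi \), so the Schwarz inequality gives \(|f(\varphi )|\le \sqrt{2}\,||f||\). This can be illustrated numerically (a sketch assuming NumPy) by working in the span of finitely many kernel sections, where the RKHS norm of \(f=\sum _i a_i K(\cdot ,x_i)\) is computed from the Gram matrix as \(\sqrt{a^{T}Ga}\):

```python
import numpy as np

def K(phi, psi):
    """The kernel K(phi, psi) = 2 - (pi/2) * sin|phi - psi|."""
    return 2.0 - (np.pi / 2.0) * np.sin(np.abs(phi - psi))

rng = np.random.default_rng(0)
nodes = np.linspace(0.0, np.pi, 20)        # centers of the kernel sections
G = K(nodes[:, None], nodes[None, :])      # Gram matrix of the sections

for _ in range(100):
    a = rng.standard_normal(20)            # f = sum_i a[i] * K(., nodes[i])
    f_norm = np.sqrt(max(a @ G @ a, 0.0))  # RKHS norm of f
    phi = rng.uniform(0.0, np.pi)          # an arbitrary evaluation point
    f_phi = a @ K(nodes, phi)              # E_phi(f) = f(phi)
    # |f(phi)| <= sqrt(K(phi, phi)) * ||f|| = sqrt(2) * ||f||
    assert abs(f_phi) <= np.sqrt(2.0) * f_norm + 1e-9
print("evaluation bound verified")
```

The node set and the number of random trials here are arbitrary choices for illustration; the bound itself holds exactly in the whole space.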
It can be proved that the space \(X_{\mathcal {S}}\), equipped with the norm (53), after completion is isomorphic to a space C(K) of continuous functions on some compact set K. One may also consider another norm on \(X_{\mathcal {S}}\), namely
$$\begin{aligned} ||[U,V]||_{s}=\inf \left\{ (||P||_c + ||Q||_c):(U,V)\diamond (P,Q)\right\} . \end{aligned}$$
This norm is stronger than, and not equivalent to, the Hilbertian norm (49), since one can show that the space \((X_{\mathcal {S}}, ||\cdot ||_s)\) is complete. Hence the norm (49) is not complete.

Remark 30

The reproducing kernel considered in this paper, i.e. the function \(K(t,s)= 2-\frac{\pi }{2}\cdot \sin |t-s|\) defined on the interval \([0,\pi ]\), can be generalized to higher dimensions. Namely, if S denotes the unit sphere in \(\mathbb R^{n}\), then the function
$$\begin{aligned} S\times S \ni (x,y)\longrightarrow 1-\frac{\pi }{2}\cdot \sqrt{1-<x,y>^2}, \end{aligned}$$
where \(<x,y>\) is the inner product in \(\mathbb R^{n}\), is a reproducing kernel. This result will be the subject of a subsequent paper.


References

  1. Aronszajn, N.: Theory of reproducing kernels. Trans. Am. Math. Soc. 68, 337–404 (1950)
  2. Hörmander, L.: Sur la fonction d'appui des ensembles convexes dans un espace localement convexe. Ark. Mat. 3, 181–186 (1954)
  3. Klain, D.: An error estimate for the isoperimetric deficit. Ill. J. Math. 49, 981–992 (2005)
  4. Moszyńska, M.: Geometria zbiorów wypukłych. Wydawnictwa Naukowo-Techniczne, Warszawa (2001) (in Polish)
  5. Paulsen, V.I.: An Introduction to the Theory of Reproducing Kernel Hilbert Spaces. Department of Mathematics, University of Houston, Texas
  6. Radström, H.: An embedding theorem for spaces of convex sets. Proc. Am. Math. Soc. 3(1), 165–169 (1952)
  7. Szafraniec, F.H.: The Reproducing Kernel Property and Its Space: The Basics. Springer, Berlin (2015)
  8. Treibergs, A.: Inequalities that Imply the Isoperimetric Inequality. University of Utah (2002)

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Higher Vocational School, Tarnow, Poland
