Convergent Geometric Estimators with Digital Volume and Surface Integrals
Abstract
This paper presents several methods to estimate geometric quantities on subsets of the digital space \(\mathbb {Z}^d\). We take an interest both in global geometric quantities like volume and area, and in local geometric quantities like normals and curvatures. All presented methods share the property of being multigrid convergent, i.e. the estimated quantities tend to their Euclidean counterparts on finer and finer digitizations of (smooth enough) Euclidean shapes. Furthermore, all methods rely on digital integrals, which approach either volume integrals or surface integrals along the shape boundary. With such tools, we achieve multigrid convergent estimators of volume, moments and area in \(\mathbb {Z}^d\), of normals, curvature and curvature tensor in \(\mathbb {Z}^2\) and \(\mathbb {Z}^3\), and of covariance measure and normals in \(\mathbb {Z}^d\) even with Hausdorff noise.
Keywords
Digital geometry · Volume estimation · Moments estimation · Normal estimation · Curvatures estimation · Area estimation · Multigrid convergence · Digital integration · Integral invariants · Digital moments · Voronoi covariance measure · Stability

1 Introduction
Objectives. We are interested in the geometry of subsets of the digital space \(\mathbb {Z}^d\), where \(\mathbb {Z}\) is the set of integer numbers. More precisely, when seeing these subsets as a sampling of a Euclidean shape, say X, we would like to recover an approximation of the geometry of X with solely the information of its sampling. It is clear that this task cannot be done without further hypotheses on X and on the sampling method. First, at a fixed sampling resolution, there are infinitely many shapes having the same sampling. Second, subsets of \(\mathbb {Z}^d\) have no canonical tangent plane or differential geometry. To address the first issue, we will take a look at specific families of Euclidean shapes, generally by requiring smoothness properties. We will then show that we can achieve multigrid convergence properties for some estimators on such shapes, i.e. when the sampling gets finer and finer, the estimation gets better. The second issue is addressed by using digital integrals, i.e. well-chosen sums.
This paper presents the main ingredients and results of three methods that provide multigrid convergent estimators of the most common geometric quantities: volume, area, tangent, normal, curvatures, etc. Their common denominator is to use digital integrals, i.e. sums that approach integrals defined on the Euclidean shape. The stability of these integrals in turn induces the multigrid convergence of these estimators.
This topic is in fact rather old, since Gauss and Dirichlet already knew that the volume of a convex set can be approximated by counting digital points within (reported in [KR04]). Furthermore, it is related to numerical integration. The purpose of this paper is not to provide an exhaustive list of multigrid convergent digital estimators. We may point to several sources and surveys in the literature that provide many references or comparisons: [KŽ00, CK04, dVL09, CLR12]. This work compiles methods and results developed in several papers [CLL13, CLL14, LCLng, LT15, CLT14, CLMT15]. Note that the topic of digital geometric estimation through integrals is very active at the present time. Among the very recent works, we may quote the varifold approach of [Bue14, BLM15] for normal and mean curvature estimation, the estimation of intrinsic volumes of [EP16] with persistent homology, or the estimation of Minkowski tensors with an extension of the Voronoi covariance measure [HKS15].
Main Definitions and Notations. A digitization process is a family of maps from subsets of \(\mathbb {R}^d\) to subsets of \(\mathbb {Z}^d\), parameterized by a positive real number h. The parameter h defines the gridstep of the digitization, a kind of sampling distance. For instance, the Gauss digitization of \(X \subset \mathbb {R}^d\) is \(\mathtt {G}_{h}(X) := \{\mathbf {z}\in \mathbb {Z}^d \mid h\mathbf {z}\in X\}\). The digitization process \(\mathtt {D}_h\) is local whenever \(\mathbf {z}\in \mathtt {D}_{h}(X)\) depends solely on \(X \cap N(h\mathbf {z})\), where \(N(h\mathbf {z})\) is a neighborhood of radius O(h) around point \(h\mathbf {z}\in \mathbb {R}^d\).
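To fix ideas, here is a minimal Python sketch of the standard Gauss digitization \(\mathtt {G}_{h}(X) = \{\mathbf {z}\in \mathbb {Z}^d \mid h\mathbf {z}\in X\}\) applied to a disk in dimension 2; all names are ours, not the paper's:

```python
def gauss_digitization(inside, h, lo, hi):
    """Gauss digitization G_h(X) = {z in Z^2 : h*z in X}, restricted to
    the integer box [lo, hi]^2; `inside` is the indicator function of X."""
    return [(zx, zy)
            for zx in range(lo, hi + 1)
            for zy in range(lo, hi + 1)
            if inside((h * zx, h * zy))]

# Unit disk X = {x : |x| <= 1} sampled at gridstep h = 0.5: the digital
# points z satisfy zx^2 + zy^2 <= 4, which gives 13 points.
disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
Z = gauss_digitization(disk, h=0.5, lo=-2, hi=2)
```

Halving h roughly quadruples the number of digital points in dimension 2, which is the resolution refinement the multigrid convergence statements below are about.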
First Relations Between Shape and Its Digitization. We will need to compare the geometry of the Euclidean shape X and its topological boundary \(\partial X\) with the “geometry” of their digitizations. So, for \(Z\subset \mathbb {Z}^d\), we define the body of Z at step h as \([Z ]_{h} :=\bigcup _{\mathbf {z}\in Z} Q_{h}({\mathbf {z}})\), where \(Q_{h}({\mathbf {z}})\) is the axis-aligned h-cube centered at \(h\mathbf {z}\). Denoting by \(\mathtt {J}^-_{h}(X)\) and \(\mathtt {J}^+_{h}(X)\) the inner and outer Jordan digitizations of X (the sets of points \(\mathbf {z}\) whose h-cube \(Q_{h}({\mathbf {z}})\) lies inside X, respectively meets X), we call Jordan strip the digitization \(\mathtt {J}^0_{h}(X) :=\mathtt {J}^+_{h}(X) {\setminus } \mathtt {J}^-_{h}(X)\), which is a kind of digitization of \(\partial X\). The following properties clarify the relations between X, \(\partial X\) and their digitizations and are easy to derive [LCLng].
Lemma 1
\(\mathtt {J}^-_{h}(X) \subset \mathtt {G}_{h}(X) \subset \mathtt {J}^+_{h}(X)\) and \([\mathtt {J}^-_{h}(X) ]_{h} \subset X\subset [\mathtt {J}^+_{h}(X) ]_{h}\).
Lemma 2
\([\mathtt {J}^0_{h}(X) ]_{h} = [\mathtt {J}^+_{h}(X) ]_{h} {\setminus } \mathrm {Int}([\mathtt {J}^-_{h}(X) ]_{h})\) and \(\partial X \subset [\mathtt {J}^0_{h}(X) ]_{h}\).
In fact, we can be more precise and relate these sets with the Hausdorff distance. Recall that the \(\epsilon \)-offset of a shape \(X\), denoted by \(X^\epsilon \), is the set of points of \(\mathbb {R}^d\) at distance less than or equal to \(\epsilon \) from \(X\). We can state with some elementary arguments that boundaries of Jordan digitizations are close to the boundary of the shape in the Hausdorff sense:
Lemma 3
([LCLng], Lemma 3). Let \(X\) be a compact domain of \(\mathbb {R}^d\). Then \([\mathtt {J}^0_{h}(X) ]_{h} \subset (\partial X)^{\sqrt{d}h}\), \(\partial [\mathtt {J}^-_{h}(X) ]_{h} \subset (\partial X)^{\sqrt{d}h}\) and \(\partial [\mathtt {J}^+_{h}(X) ]_{h} \subset (\partial X)^{\sqrt{d}h}.\)
The remarkable point here is that the sole requirement on X is compactness! If the shape X has a smoother boundary, we can get tighter bounds for the Gauss digitization. To this end, let the medial axis \(\mathrm {MA}(\partial X)\) of \(\partial X\) be the subset of \(\mathbb {R}^d\) whose points have more than one closest point on \(\partial X\). The reach \(\mathrm {reach}(X)\) of \(X\) is the infimum of the distance between \(\partial X\) and its medial axis. Shapes with positive reach have a \(C^2\)-smooth boundary almost everywhere and have principal curvatures bounded by \(\pm 1 / \mathrm {reach}(X)\). We then have:
Theorem 1
([LT15]). Let \(X\) be a compact domain of \(\mathbb {R}^d\) such that the reach of \(\partial X\) is greater than \(\rho \). Then, for any digitization step \(0< h < 2\rho /\sqrt{d}\), the Hausdorff distance between sets \(\partial X\) and \(\partial [\mathtt {G}_{h}(X) ]_{h}\) is less than \(\sqrt{d}h/2\). In particular, \(\partial [\mathtt {G}_{h}(X) ]_{h} \subset \left( \partial X\right) ^{\frac{\sqrt{d}}{2}h}\).
The projection \(\pi ^X\) of \(\mathbb {R}^d {\setminus } \mathrm {MA}(\partial X)\) onto \(\partial X\) is the map which associates to any point its closest point on \(\partial X\). From the properties of the medial axis, the projection is defined almost everywhere in \(\mathbb {R}^d\). We may thus associate to any point \(\hat{\mathbf {x}} \in \partial [\mathtt {G}_{h}(X) ]_{h}\) the point \(\mathbf {x}:= \pi ^X(\hat{\mathbf {x}}) \in \partial X\), such that the distance between \(\hat{\mathbf {x}}\) and its projection \(\mathbf {x}\) is smaller than \(\sqrt{d}h/2\). We have just constructed a mapping between a shape boundary and its digitization, which will help us define local geometric estimators.
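Theorem 1 can be checked numerically on a shape where \(\pi ^X\) is explicit: for the unit disk, the projection of a point \(p \ne 0\) onto the circle is \(p/\Vert p\Vert \), so the distance from a boundary point of the digitization to \(\partial X\) is \(|\,\Vert p\Vert - 1\,|\). A hedged Python sketch (variable names are ours); the theorem predicts a worst distance of at most \(\sqrt{2}\,h/2\) here:

```python
import math

h = 0.05
n = round(1 / h)
inside = lambda zx, zy: zx * zx + zy * zy <= n * n   # h*z in the unit disk

# Face centers of boundary surfels of [G_h(X)]_h: faces shared by an
# inside cell (centered at h*z) and an outside 4-neighbor.
face_centers = []
for zx in range(-n, n + 1):
    for zy in range(-n, n + 1):
        if not inside(zx, zy):
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if not inside(zx + dx, zy + dy):
                face_centers.append((h * (zx + dx / 2), h * (zy + dy / 2)))

# For the unit disk, pi^X(p) = p/|p|, hence d(p, dX) = | |p| - 1 |.
worst = max(abs(math.hypot(px, py) - 1.0) for px, py in face_centers)
```

With \(h = 0.05\) the bound \(\sqrt{2}h/2 \approx 0.0354\) indeed dominates `worst`, while the crude compactness bound of Lemma 3 would only give \(\sqrt{2}h \approx 0.071\).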
Multigrid Convergence. Let V be any vector space (generally \(\mathbb {R}\) or \(\mathbb {R}^d\)). A geometric quantity is an application that associates a value in V to any subset of \(\mathbb {R}^d\), with the property that it is invariant to some group operations, most often the group of rigid transformations. Notable examples are the volume and the area. A local geometric quantity is an application that associates a value in V to a subset X of \(\mathbb {R}^d\) and a point \(\mathbf {x}\) on \(\partial X\). Common examples are the normal vector, the mean curvature or principal curvatures and directions. A discrete geometric estimator is an application that associates a value in V to a subset of \(\mathbb {Z}^d\) and a gridstep \(h \in \mathbb {R}^+\). A local discrete geometric estimator is an application that associates a value in V to a subset Z of \(\mathbb {Z}^d\), a point in \(\partial [Z ]_{h}\) and a gridstep \(h \in \mathbb {R}^+\).
Definition 1
In both definitions, the multigrid convergence property characterizes estimators that give better and better geometric estimates as the grid sampling gets finer and finer. We have now all the notions to study the multigrid convergence of several discrete geometric estimators.
2 Volume and Moments Estimators
In this section, X is some compact domain of \(\mathbb {R}^d\) and Z is a subset of \(\mathbb {Z}^d\). We take here an interest in estimating volume and moments from digital sets. These results will be used to define digital integral invariant estimators of curvatures in the following section. Throughout the section, let \((p_i)_{i=1\ldots d}\) be the integers defining the moment exponents, with \(0 \le p_i \le 2\), and let \(\sigma :=p_1+\cdots + p_d\), with \(\sigma \le 2\).
It is well known that \(\widehat{\mathrm {Vol}}^{d}\) is multigrid convergent toward \(\mathrm {Vol}^{d}\) for the family of convex shapes and the Gauss digitization, with a convergence speed of O(h), and even faster for smoother shapes [Hux90, KN91, M99, Guo10]. We wish to go further on the multigrid convergence of moments, so we take a special interest in (digital) moments of h-cubes. The following equalities, obtained by simple integration, show that discrepancies between digital and continuous moments appear only at order two, and only when some \(p_i = 2\).
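The counting estimator \(\widehat{\mathrm {Vol}}^{d}(Z,h) = h^d\,\#Z\) is easy to exercise numerically. A small Python check on the Gauss digitization of the unit disk (the O(h) rate is the cited result; the tolerances below are our own, deliberately loose choices):

```python
import math

def vol_estimate(h):
    """h^d * #G_h(X) for the unit disk X = {x : |x| <= 1}, d = 2.
    With 1/h an integer, h*z in X  <=>  zx^2 + zy^2 <= (1/h)^2 exactly."""
    n = round(1 / h)
    count = sum(1
                for zx in range(-n, n + 1)
                for zy in range(-n, n + 1)
                if zx * zx + zy * zy <= n * n)
    return h * h * count

# True volume (area) of the unit disk is pi; the error shrinks with h.
errors = {h: abs(vol_estimate(h) - math.pi) for h in (0.1, 0.01)}
```

Refining the gridstep from 0.1 to 0.01 visibly shrinks the error, consistent with the O(h) convergence speed for convex shapes.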
Lemma 4
Let \(\mathbf {z}\in \mathbb {Z}^d\). Point \(\mathbf {z}\) is the Gauss digitization of the h-cube \(Q_{h}({\mathbf {z}})\), but also its inner and outer Jordan digitization. Moments and digital moments of h-cubes satisfy \(\hat{m}_{h}^{p_1 \cdots p_d}(\{\mathbf {z}\}) = m^{p_1 \cdots p_d}(Q_{h}({\mathbf {z}})) + E(p_1,\ldots ,p_d)\), where \(E=\frac{h^{d+4}}{12}\) when one \(p_i\) equals 2 and otherwise \(E=0\).
Errors in Volume Estimation. The following volume “convergence” theorem is remarkable since it requires only the compactness of X. Its proof requires Lemma 4, a volume relation on symmetric difference of sets, the definition of Jordan strip, and Lemma 3.
Theorem 2
Another proof, written independently, is in [HKS15]. This theorem states a multigrid convergence property whenever \(\partial X\) is \((d-1)\)-rectifiable, but not in the general case: consider for instance the set X of rational numbers in the unit cube. A more useful (but more restricted) convergence theorem relates this error bound to the area of \(\partial X\). It uses Theorem 2 but also the fact that, for sets X with positive reach, the volume of some \(\epsilon \)-offset of \(\partial X\) is upper-bounded by a constant times the area of \(\partial X\) (proof of Lemma 10 in [LT15]).
Theorem 3
Under the same hypotheses as Theorem 2, with the further requirement that the reach of \(\partial X\) is greater than some value \(\rho \), and for \(h < \rho /\sqrt{d}\), the volume estimator \(\widehat{\mathrm {Vol}}^{d}\) is multigrid convergent toward the volume \(\mathrm {Vol}^{d}\) with speed \(2^{d+1} \sqrt{d}\, \mathrm {Area}(\partial X)\, h\).
Volume and Moment Estimation in a Local Neighborhood. For the digital integral invariant method we will require convergence results on sets that are the intersection of two sets with positive reach, more precisely on sets of the form \(X \cap B_R(\mathbf {x})\), where \(B_R(\mathbf {x})\) denotes the ball of center \(\mathbf {x}\in \mathbb {R}^d\) and radius R. The previous theorem cannot be applied as is. We must first bound the volume of offsets of the boundary of \(X \cap B_R(\mathbf {x})\):
Theorem 4
The proof first decomposes the set with the equality \((\partial (A \cap B))^\epsilon = ((\partial A) \cap B)^\epsilon \cup (A \cap (\partial B))^\epsilon \). It then uses differential geometry and the fact that curvatures are bounded by the reach. Notably, the geometry of X does not enter the constants.
We now have all the keys to upper-bound the error in volume estimation, and more generally in moment estimation, within a ball around the boundary of a compact domain X.
Theorem 5
The proof decomposes the errors over each h-cube induced by the digitization. Errors in the interior of the set \(X \cap B_R(\mathbf {x})\) are easily handled with Lemma 4. Errors close to the boundary of \(X \cap B_R(\mathbf {x})\) are bounded with Theorem 4 and some care on moments with negative values.
3 Curvatures with Digital Integral Invariants
Curvature and Mean Curvature Estimation. If \(\mathbf {x}\in \partial X\) and \(\partial X\) is smooth enough, one easily notices that the volume of \(X \cap B_R(\mathbf {x})\) is related to the local differential geometry of \(\partial X\) around \(\mathbf {x}\) for infinitesimal values of R. Several authors [BGCF95, PWY+07, PWHY09] have made this relation explicit:
Lemma 5
Since we have seen that we can approach volumes within a ball (see Theorem 5), it is very natural to define digital curvature estimators from the volume relations Eqs. (9) and (10).
Definition 2
We can bound the error between these estimators and their associated geometric quantities as follows ([CLL13]; precise constants in [LCLng]):
Theorem 6
The first term in each error bound comes from the error made in the volume estimation of \(X \cap B_R(\mathbf {x})\) because point \(\hat{\mathbf {x}}\) is not exactly on \(\mathbf {x}\) but at distance O(h) (Theorem 1). The second term comes from the digitization in the volume estimation (Theorem 5). The third term in the error comes from the Taylor expansion of Eqs. (9) and (10). Since some error terms decrease with R while others increase with R, we balance them to minimize the total error and obtain the convergence of curvature estimators as immediate corollaries.
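Since Eqs. (9) and (10) are not reproduced in this excerpt, we illustrate the idea with the classical 2D integral-invariant relation \(\mathrm{Area}(X \cap B_R(\mathbf {x})) = \frac{\pi R^2}{2} - \frac{\kappa (\mathbf {x})}{3} R^3 + O(R^4)\) (in the spirit of [PWY+07]), which suggests the estimator \(\hat{\kappa } = \frac{3}{R^3}\big (\frac{\pi R^2}{2} - h^2\,\#(\mathtt {G}_{h}(X) \cap B_R(\mathbf {x}))\big )\). A Python sketch with our own parameter choices:

```python
import math

def curvature_estimate(h, R, x, inside):
    """2D integral-invariant curvature estimate at a boundary point x:
    kappa ~ (3 / R^3) * (pi R^2 / 2 - Area(X cap B_R(x))), where the area
    is estimated by h^2 times the number of digital points of G_h(X)
    lying in the ball B_R(x)."""
    zx0, zy0 = round(x[0] / h), round(x[1] / h)
    m = int(R / h) + 1
    count = 0
    for zx in range(zx0 - m, zx0 + m + 1):
        for zy in range(zy0 - m, zy0 + m + 1):
            px, py = h * zx, h * zy
            if (px - x[0]) ** 2 + (py - x[1]) ** 2 <= R * R and inside((px, py)):
                count += 1
    area = h * h * count
    return (3.0 / R ** 3) * (math.pi * R * R / 2.0 - area)

# Disk of radius 10 (true curvature 0.1), estimated at boundary point (10, 0).
disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 100.0
kappa = curvature_estimate(h=0.01, R=2.0, x=(10.0, 0.0), inside=disk)
```

The choice of R trades the Taylor remainder (decreasing with R) against the digitization error (increasing as R shrinks relative to h), exactly the balancing argument described above.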
Definition 3
Unfortunately, there is no hope of turning Theorem 5 into a multigrid convergence theorem for arbitrary moments, because a very small perturbation of the ball center can lead to an arbitrary error on the polynomial \(x^\sigma \). However, due to their formulations, continuous and digital covariance matrices are invariant under translation, and error terms can thus be confined to a neighborhood around zero. Using this fact and error bounds on moments within the symmetric difference of two balls, we get:
Theorem 7
([CLL14]). Let X be a compact domain of \(\mathbb {R}^3\) such that its boundary \(\partial X\) has reach greater than \(\rho \). Then the digital covariance matrix is multigrid convergent toward the covariance matrix for Gauss digitization for any radius \(R < \frac{\rho }{2}\) and gridstep \(h < \frac{R}{\sqrt{6}}\), with speed \(O(R^{4} h)\).
The constant in O is independent of the shape size and geometry. According to Definition 3, it remains to show that the eigenvalues and eigenvectors of the digital covariance matrix converge toward those of the continuous covariance matrix as the error on the matrices tends to zero. Classical results of matrix perturbation theory (especially the Lidskii-Weyl inequality and the Davis-Kahan \(\sin \theta \) theorem) allow us to conclude:
Theorem 8
([CLL14, LCLng]). Let X be a compact domain of \(\mathbb {R}^3\) such that its boundary \(\partial X\) has reach greater than \(\rho \) and is \(C^3\)-continuous. Then, for these shapes and for the Gauss digitization process, the integral principal curvature estimators \(\hat{\kappa }_1^{R}\) and \(\hat{\kappa }_2^{R}\) are multigrid convergent toward \(\kappa _1\) and \(\kappa _2\) for small enough gridsteps h, choosing \(R=k h^{\frac{1}{3}}\) with k an arbitrary positive constant. The convergence speed is in \(O(h^{\frac{1}{3}})\). Furthermore, the integral principal direction estimators \(\hat{\mathbf {w}}_1^{R}\) and \(\hat{\mathbf {w}}_2^{R}\) are also convergent toward \(\mathbf {w}_1\) and \(\mathbf {w}_2\) with speed \(O(h^{\frac{1}{3}})\), provided the principal curvatures are distinct. Lastly, the integral normal estimator \(\hat{\mathbf {n}}^{R}\) is convergent toward the normal \(\mathbf {n}\) with speed \(O(h^{\frac{2}{3}})\).
To our knowledge, these were the first estimators of principal curvatures shown to be multigrid convergent.
4 Digital Voronoi Covariance Measure
Integral curvature estimators are convergent and rather robust to the presence of Hausdorff noise in the input data (see [CLL14]). However, this robustness comes with no guarantee. We present here another approach that reaches this goal, and which can even be made robust to outliers. The first idea is to use a distance function to the input data, which is stable to perturbations [CCSM11]. The second idea is to notice that Voronoi cells of the input data tend to align with the normals of the underlying shape. To make this idea more robust, it suffices to integrate the covariance of the gradient vectors of the distance function within a local neighborhood: this is called the Voronoi covariance measure [MOG11].
Definition 4
A function \(\delta :\mathbb {R}^d \rightarrow \mathbb {R}^+\) is called distance-like if (i) \(\delta \) is proper, i.e. \(\lim _{\Vert x\Vert \rightarrow \infty } \delta (x)= \infty \), and (ii) \(\delta ^2\) is 1-semiconcave, that is \(\delta ^2(\cdot ) - \Vert \cdot \Vert ^2\) is concave.
It is worth noting that the standard distance \(d_K\) to a compact K is distance-like. Clearly this distance is robust to Hausdorff noise, i.e. \(d_H(K,K') < \epsilon \) implies \(\Vert d_K - d_{K'}\Vert _\infty <\epsilon \). Although we will not go into more detail here, the distance to a measure [MOG11, CLMT15] is even resilient to outliers, with an error bounded by the Wasserstein-2 distance between measures. Let \(\mathbf {N}_{\delta } := \frac{1}{2}\nabla \delta ^2\).
Definition 5
Note that \(\mathbf {N}_{\delta }\) is defined almost everywhere in \(\mathbb {R}^d\).
In the following, we take \(\delta \) to be the distance \(d_K\) to a compact K in all results, but one should keep in mind that these results extend to arbitrary distance-like functions. Then \(\mathbf {N}_{d_K}\) corresponds to the vector of the projection onto K, up to sign. The \(d_K\)-VCM then corresponds to the covariance matrix of the Voronoi cells of the points of K, restricted to a maximum distance R, and weighted by the probe function.
Definition 6
Since \(\mathbf {N}_{d_{Z_h}}\) corresponds to the projection onto \(Z_h\), the previous formulation is easily decomposed per Voronoi cell of \(Z_h\) and can be computed exactly by simple summations. The digital VCM is shown to be close to the VCM for digitizations of smooth enough shapes [CLT14]. Errors are related first to the difference between \(\partial X\) and its digitization \(\partial [\mathtt {G}_{h}(X) ]_{h}\) (essentially bounded by Theorem 5.1 of [MOG11]). Secondly, they are linked to the transformation of the integral in \(\mathcal {V}_{\delta }^{R}\) into a sum in \(\hat{\mathcal {V}}_{Z,h}^{R}\) (bounded by the fact that the projection is stable and that the strip \(Z_h^R {\setminus } \mathrm {vox}(Z_h,R)\) is negligible). The proofs resemble those of Sect. 2.
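Definitions 5 and 6 are not reproduced in this excerpt, so the following brute-force Python sketch only follows the textual description of the \(d_K\)-VCM: accumulate \((y - p_K(y))(y - p_K(y))^T\) over sample points y of the R-offset of K whose projection \(p_K(y)\) falls in the support of the probe function \(\chi = \mathbf {1}_{B_r(\mathbf {x}_0)}\); its dominant eigenvector should align with the normal. All names, the grid-based integration and the parameter choices are ours:

```python
import math

def vcm_normal(K, x0, R, r, g):
    """Brute-force d_K-VCM at x0: integrate (y - p(y))(y - p(y))^T over
    grid points y (step g) in the R-offset of K whose nearest point p(y)
    of K lies in B_r(x0); return the dominant eigenvector (2D)."""
    a = b = c = 0.0                      # symmetric accumulator [[a, b], [b, c]]
    n = int((R + r) / g) + 1
    for ix in range(-n, n + 1):
        for iy in range(-n, n + 1):
            y = (x0[0] + g * ix, x0[1] + g * iy)
            p = min(K, key=lambda q: (q[0] - y[0]) ** 2 + (q[1] - y[1]) ** 2)
            vx, vy = y[0] - p[0], y[1] - p[1]
            if vx * vx + vy * vy > R * R:
                continue                  # y outside the R-offset of K
            if (p[0] - x0[0]) ** 2 + (p[1] - x0[1]) ** 2 > r * r:
                continue                  # probe chi = 1_{B_r(x0)} rejects p(y)
            w = g * g                     # measure of one grid cell
            a += w * vx * vx
            b += w * vx * vy
            c += w * vy * vy
    # Dominant eigenvector of [[a, b], [b, c]]: (lam - c, b) for lam = lam_max.
    lam = 0.5 * (a + c) + math.sqrt(0.25 * (a - c) ** 2 + b * b)
    ex, ey = lam - c, b
    norm = math.hypot(ex, ey)
    if norm < 1e-12:
        ex, ey, norm = 1.0, 0.0, 1.0      # isotropic fallback
    return ex / norm, ey / norm

# Points sampled on an arc of the circle of radius 5 around (5, 0); the
# dominant VCM eigenvector there should align with the normal (1, 0).
K = [(5 * math.cos(t / 100), 5 * math.sin(t / 100)) for t in range(-80, 81)]
nx, ny = vcm_normal(K, x0=(5.0, 0.0), R=1.0, r=0.5, g=0.05)
```

An exact per-Voronoi-cell summation, as described above, replaces this grid quadrature in the actual method; the sketch only illustrates why the vectors \(y - p(y)\), which are aligned with the Voronoi cells, concentrate along the shape normal.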
Theorem 9
As one can see, the digital VCM is an approximation of the VCM, but the quality of the approximation is related not only to the gridstep h but also to two parameters, the size r of the support of \(\chi \) and the distance R to input data, which defines the computation window.
Theorem 10
([CLT14]). With the same hypotheses as in Theorem 9, \(\chi \) bounded and Lipschitz, and denoting by \(\hat{\mathbf {n}}_{\chi }^R\) the eigenvector associated with the highest eigenvalue of \(\hat{\mathcal {V}}_{Z,h}^{R}(\chi )\), the estimator \(\hat{\mathbf {n}}_{\chi }^R\) is multigrid convergent toward the normal vector to \(\partial X\), with speed \(O(h^{\frac{1}{8}})\) when both R and the support r of \(\chi \) are chosen in \(\varTheta (h^{\frac{1}{4}})\).
Experiments indicate a much faster convergence speed (close to O(h)) even in presence of noise. The discrepancy comes mainly from the fact that \(\chi \) is any bounded Lipschitz function while Eq. (18) is valid for the characteristic function of \(B_{r}(\mathbf {x})\).
5 Digital Surface Integration
Until now, convergence results were achieved by approximating well-chosen volume integrals around the input data. What can we say if we wish to approximate integrals defined over the shape boundary, given only the shape digitization? We focus here on the Gauss digitization and write \(\partial _{h}X\) for \(\partial [\mathtt {G}_{h}(X) ]_{h}\). A natural answer is to define a mapping between the digitized boundary \(\partial _{h}X\) and the continuous boundary \(\partial X\). Using standard geometric integration results, a surface integral defined over \(\partial X\) can be transformed into a surface integral over \(\partial _{h}X\) by introducing the Jacobian of this mapping. However, we face several difficulties in our case. The first one is that, starting in dimension 3, \(\partial _{h}X\) may not even be a manifold, whatever the smoothness of \(\partial X\) and the gridstep h [SLS07]. Hence, it is not possible to define an injective mapping between the two sets. The second difficulty is that the underlying continuous surface is unknown, so we have to define a natural mapping without further hypotheses on the shape. The best candidate is the projection \(\pi ^{\partial X}\) onto \(\partial X\), which is defined everywhere in the R-offset of \(\partial X\) for R smaller than the reach of \(\partial X\). It is nevertheless easily seen that \(\pi ^{\partial X}\), although surjective, is generally not injective between the digitized boundary and the continuous boundary. In the following, we bound these problematic zones in order to define convergent digital surface integrals. For simplicity, we write \(\pi \) for \(\pi ^{\partial X}\) and \(\pi '\) for its restriction to \(\partial _{h}X\).
Definition 7
– for \(d=3\) and \(h<0.198\rho \), \(\partial _{h}X\) may fail to be a manifold only at places at distance lower than h to parts of \(\partial X\) whose normal makes an angle smaller than \(1.26h/\rho \) with some axis;
– for arbitrary \(d\ge 2\) and \(h<\rho /\sqrt{d}\), let \(\mathbf {y}\in \partial _{h}X\) and \(\mathbf {n}_h(\mathbf {y})\) be its (trivial) normal vector; then the angle between the normal \(\mathbf {n}(\mathbf {x})\) to \(\partial X\) at \(\mathbf {x}=\pi (\mathbf {y})\) and \(\mathbf {n}_h(\mathbf {y})\) cannot be much greater than \(\pi /2\), since \(\mathbf {n}(\mathbf {x}) \cdot \mathbf {n}_h(\mathbf {y}) \ge -\sqrt{3d}h/\rho \);
– let \(\mathrm {Mult}({\partial X})\) be the set of points of \(\partial X\) that are images of several points of \(\partial _{h}X\) by \(\pi '\); then, for at least one of the points \(\mathbf {y}\) in the fiber of \(\mathbf {x}\in \mathrm {Mult}({\partial X})\) under \(\pi '\), the product \(\mathbf {n}(\mathbf {x}) \cdot \mathbf {n}_h(\mathbf {y})\) is not positive;
– the Jacobian of \(\pi '\) is almost everywhere \(\mathbf {n}(\mathbf {x}) \cdot \mathbf {n}_h(\mathbf {y})(1+O(h))\);
– areas are related by \(\mathrm {Area}(\partial _{h}X) \le 2^{d+2} d^{\frac{3}{2}} \mathrm {Area}(\partial X)\);
– hence, when \(h\le R/\sqrt{d}\), the area of the non-injective part of \(\pi '\) decreases with h: \(\mathrm {Area}(\mathrm {Mult}({\partial X})) \le K_2\ \mathrm {Area}(\partial X)\ h\), with \(K_2 \le 2\sqrt{3}\,d^2\,4^{d}/\rho \).
The preceding properties show that the places on \(\partial _{h}X\) that are problematic for integration have decreasing area. Furthermore, if \(E(\hat{\mathbf {n}}, \mathbf {n}) := \sup _{\mathbf {y}\in \partial _{h}X} \Vert \mathbf {n}(\pi (\mathbf {y})) - \hat{\mathbf {n}}(\mathbf {y}) \Vert \), errors in normal estimation induce proportional errors in surface integration. We can then prove the multigrid convergence of the digital surface integral toward the surface integral.
Theorem 11
The constant involved in the notation O(.) depends only on the dimension d and the reach \(\rho \). Note that Theorems 8 and 10 have shown that there exist normal estimators such that \(E(\hat{\mathbf {n}}, \mathbf {n})\) tends to zero as h tends to zero. Experimental evaluation of the digital surface integral shows a better convergence in practice for area estimation, with convergence speed close to \(O(h^2)\) both for the integral normal estimator and the digital VCM normal estimator.
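In dimension 2, surfels are the unit edges shared by an inside and an outside pixel, and the digital surface integral of \(f = 1\) reduces to summing \(h\,(\hat{\mathbf {n}}(s) \cdot \mathbf {n}_h(s))\) over boundary surfels s. Plugging in the exact circle normal as \(\hat{\mathbf {n}}\) (so that \(E(\hat{\mathbf {n}}, \mathbf {n})\) vanishes), this recovers the perimeter. A minimal sketch under those assumptions, with our own naming:

```python
import math

def digital_perimeter(h):
    """Digital surface integral of f = 1 over the Gauss digitization of
    the unit disk: sum over boundary surfels s of h * (n(x_s) . n_h(s)),
    where n_h is the trivial surfel normal (+-e_i) and n is the exact
    outward circle normal at the surfel center."""
    n = round(1 / h)
    inside = lambda zx, zy: zx * zx + zy * zy <= n * n
    total = 0.0
    for zx in range(-n, n + 1):
        for zy in range(-n, n + 1):
            if not inside(zx, zy):
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if inside(zx + dx, zy + dy):
                    continue
                # Surfel center; for a disk centered at 0, the exact
                # outward normal there is the normalized position vector.
                cx, cy = h * (zx + dx / 2), h * (zy + dy / 2)
                norm = math.hypot(cx, cy)
                total += h * (cx / norm * dx + cy / norm * dy)
    return total

perimeter = digital_perimeter(0.01)   # continuous perimeter is 2*pi
```

Note that simply counting surfels and multiplying by h (i.e. dropping the factor \(\hat{\mathbf {n}} \cdot \mathbf {n}_h\)) would converge to 8, not \(2\pi \): the Jacobian factor is what makes the staircase measure converge to the true area.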
References
[BGCF95] Bullard, J.W., Garboczi, E.J., Carter, W.C., Fuller, E.R.: Numerical methods for computing interfacial mean curvature. Comput. Mater. Sci. 4, 103–116 (1995)
[BLM15] Buet, B., Leonardi, G.P., Masnou, S.: Discrete varifolds: a unified framework for discrete approximations of surfaces and mean curvature. In: Aujol, J.-F., Nikolova, M., Papadakis, N. (eds.) SSVM 2015. LNCS, vol. 9087, pp. 513–524. Springer, Cham (2015)
[Bue14] Buet, B.: Approximation de surfaces par des varifolds discrets: représentation, courbure, rectifiabilité. Ph.D. thesis, Université Claude Bernard Lyon 1, France (2014)
[CCSM11] Chazal, F., Cohen-Steiner, D., Mérigot, Q.: Geometric inference for probability measures. Found. Comput. Math. 11(6), 733–751 (2011)
[CK04] Coeurjolly, D., Klette, R.: A comparative evaluation of length estimators of digital curves. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 252–258 (2004)
[CLL13] Coeurjolly, D., Lachaud, J.-O., Levallois, J.: Integral based curvature estimators in digital geometry. In: Gonzalez-Diaz, R., Jimenez, M.J., Medrano, B. (eds.) DGCI 2013. LNCS, vol. 7749, pp. 215–227. Springer, Heidelberg (2013)
[CLL14] Coeurjolly, D., Lachaud, J.-O., Levallois, J.: Multigrid convergent principal curvature estimators in digital geometry. Comput. Vis. Image Underst. 129, 27–41 (2014)
[CLMT15] Cuel, L., Lachaud, J.-O., Mérigot, Q., Thibert, B.: Robust geometry estimation using the generalized Voronoi covariance measure. SIAM J. Imaging Sci. 8(2), 1293–1314 (2015)
[CLR12] Coeurjolly, D., Lachaud, J.-O., Roussillon, T.: Multigrid convergence of discrete geometric estimators. In: Brimkov, V.E., Barneva, R.P. (eds.) Digital Geometry Algorithms: Theoretical Foundations and Applications to Computational Imaging. LNCVB, vol. 2, pp. 395–424. Springer, Dordrecht (2012)
[CLT14] Cuel, L., Lachaud, J.-O., Thibert, B.: Voronoi-based geometry estimator for 3D digital surfaces. In: Barcucci, E., Frosini, A., Rinaldi, S. (eds.) DGCI 2014. LNCS, vol. 8668, pp. 134–149. Springer, Heidelberg (2014)
[dVL09] de Vieilleville, F., Lachaud, J.-O.: Comparison and improvement of tangent estimators on digital curves. Pattern Recogn. 42(8), 1693–1707 (2009)
[EP16] Edelsbrunner, H., Pausinger, F.: Approximation and convergence of the intrinsic volume. Adv. Math. 287, 674–703 (2016)
[Guo10] Guo, J.: On lattice points in large convex bodies. arXiv e-prints (2010)
[HKS15] Hug, D., Kiderlen, M., Svane, A.M.: Voronoi-based estimation of Minkowski tensors from finite point samples (2015)
[Hux90] Huxley, M.N.: Exponential sums and lattice points. Proc. Lond. Math. Soc. 60, 471–502 (1990)
[KN91] Krätzel, E., Nowak, W.G.: Lattice points in large convex bodies. Monatshefte für Mathematik 112, 61–72 (1991)
[KR04] Klette, R., Rosenfeld, A.: Digital Geometry: Geometric Methods for Digital Picture Analysis. Series in Computer Graphics and Geometric Modeling. Morgan Kaufmann, San Francisco (2004)
[KŽ00] Klette, R., Žunić, J.: Multigrid convergence of calculated features in image analysis. J. Math. Imaging Vis. 13, 173–191 (2000)
[LCLng] Lachaud, J.-O., Coeurjolly, D., Levallois, J.: Robust and convergent curvature and normal estimators with digital integral invariants. In: Modern Approaches to Discrete Curvature. Lecture Notes in Mathematics. Springer International Publishing (2016, forthcoming)
[LT15] Lachaud, J.-O., Thibert, B.: Properties of Gauss digitized sets and digital surface integration. J. Math. Imaging Vis. 54(2), 162–180 (2016)
[M99] Müller, W.: Lattice points in large convex bodies. Monatshefte für Mathematik 128, 315–330 (1999)
[MOG11] Mérigot, Q., Ovsjanikov, M., Guibas, L.: Voronoi-based curvature and feature estimation from point clouds. IEEE Trans. Visual. Comput. Graph. 17(6), 743–756 (2011)
[PWHY09] Pottmann, H., Wallner, J., Huang, Q., Yang, Y.: Integral invariants for robust geometry processing. Comput. Aided Geom. Des. 26(1), 37–60 (2009)
[PWY+07] Pottmann, H., Wallner, J., Yang, Y., Lai, Y., Hu, S.: Principal curvatures from the integral invariant viewpoint. Comput. Aided Geom. Des. 24(8–9), 428–442 (2007)
[SLS07] Stelldinger, P., Latecki, L.J., Siqueira, M.: Topological equivalence between a 3D object and the reconstruction of its digital image. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 126–140 (2007)