A Novel Template-Based Approach to the Segmentation of the Hippocampal Region

  • M. Aiello
  • P. Calvini
  • A. Chincarini
  • M. Esposito
  • G. Gemme
  • F. Isgrò
  • R. Prevete
  • M. Santoro
  • S. Squarcia
Part of the Computational Methods in Applied Sciences book series (COMPUTMETHODS, volume 19)


The work described in this document is part of a larger effort aiming at a complete pipeline for the extraction of clinical parameters from MR images of the brain, for the diagnosis of neuro-degenerative diseases. A key step in this pipeline is the identification of a box containing the hippocampus and the surrounding medial temporal lobe regions from T1-weighted magnetic resonance images, with no interactive input from the user. To this end we introduced into the existing pipeline a module for the segmentation of brain tissues based on a constrained Gaussian mixture model (CGMM), and a novel method for generating templates of the hippocampus. The templates are then combined in order to obtain a single template mask. This template mask is used, together with a mask of the grey matter of the brain, for determining the hippocampus. The results have been visually evaluated by a small set of experts, and have been judged as satisfactory. A complete and exhaustive evaluation of the whole system is being planned.


Keywords: Magnetic resonance · Image analysis · Hippocampus segmentation

1 Introduction

The hippocampus is a structure of the Medial Temporal Lobe (MTL) that plays an essential role in the learning and memory capabilities of the brain. It is involved in Alzheimer’s disease (AD) and other neurodegenerative diseases (see [7, 12, 17, 22]). For this reason the hippocampus is targeted in the analysis of neuro-images. In particular, the analysis of the shape of the hippocampus is a well-accepted indicator in the diagnosis of Alzheimer’s disease, and the analysis of its changes over time can be useful for monitoring the evolution of the disease. The problem of finding relations between the hippocampus and clinical variables has been studied for some time already [1, 2, 5, 11, 20].

A step preliminary to the analysis of the shape of the hippocampus is its segmentation in MR images, which in recent years has been tackled by many researchers following different approaches. Among the various works, two classes of methods are most relevant to the realisation of an automated system: semi-automated and fully automated methods.

An interesting example of a semi-automated method is described in [15]: a pipeline that combines low-level image processing techniques, such as thresholding and hole-filling, with the technique of geometric deformable models [13, 16]. This process is regarded as semi-automated since the starting step is the labelling of some points on the hippocampal contour by the user; this is necessary to constrain the segmentation process and avoid the inclusion of other grey matter structures, such as the amygdala, in the hippocampal region. This kind of intervention by an expert is also present in [18].

A fully automated approach to hippocampus segmentation, based on a priori anatomical knowledge of the hippocampus, is described in [6]. The a priori knowledge is modelled by statistical information on the shape [14, 18] and by the deformation of a single atlas [8].

The approaches just mentioned are too sensitive to the initialisation step. In the first case (i.e. the semi-automated methods), the landmarks labelled by the user as deformation constraints cannot exhaustively describe the morphological variability of the hippocampal shape, and their placement is sensitive to the specific MRI modality.

In the case of the fully automated method, the use of a single atlas as the source of a priori knowledge does not take into account the anatomical variability between the various degrees of atrophy of the hippocampus.

The work presented in this document is part of a pipeline, see [3, 4], the goal of which is the extraction of clinical variables from the hippocampus automatically segmented in MR images. What is presented here is a method for the segmentation of the brain tissues based on a constrained Gaussian mixture model, and a method for the generation of templates of the hippocampus that is novel with respect to what was done in the pipeline presented in previous papers [3, 4].

In the remainder of this document we first review the system implemented (Sect. 2); Sect. 3 is dedicated to the brain tissue segmentation module based on a constrained Gaussian mixture model; we then give details of the novel procedure for generating the hippocampus templates (Sect. 4), and finally we show some results; the last section is left to some final remarks.

2 The Pipeline

In this section we briefly review how the pipeline implemented works. Further details can be found in [3, 4].

The pipeline consists of three main modules. The first one extracts from the MR image two boxes containing the left and the right hippocampus plus a portion of the adjacent tissues and cavities. The second module, still under development, performs the automatic segmentation of the hippocampus, using a set of template masks manually segmented by expert radiologists. The last one, which is still at the design stage, is dedicated to the computation of clinical variables related to the atrophic state of the hippocampus.

The pipeline accepts MR images and extracts two hippocampal boxes (HBs) containing, respectively, the left and the right hippocampus, plus a portion of the adjacent tissues and cavities. This is achieved by a rigid registration between the input MR image and a set of previously determined template boxes. These template boxes result from a rather long and computationally intensive extraction procedure, described hereafter.

The template extraction basically relies on the fact that the grey-level contrast displayed by the complex of the hippocampal formation, contiguous ventricles and adjacent structures is so characteristic as to be unique over the whole brain. No other structure in the brain mimics the same grey-level distribution. Therefore, a procedure can be devised which, on the basis of some suitably chosen examples, is able to identify the hippocampal region unambiguously.

2.1 Images Dataset

The data-set of structural scans consists of ≈ 100 T1–TFE volumetric MR images, made available by the National Institute of Cancer (IST), Genoa, Italy (1.5T Gyroscan system, Philips Medical Systems), from a population of elderly people whose age, sex, neuropsychological evaluation and clinical evaluation (Normal, Mild Cognitive Impairment or MCI, and AD) are known (ages between 56 and 86 years, 35% males, 65% females, clinical conditions ranging from good health to dementia of the AD type). The diagnosis of MCI or AD was made according to the Petersen and NINCDS–ADRDA criteria respectively, and the images were acquired with isotropic voxels of 1 mm side. Neuroanatomical considerations, verified by the inspection of some images of the data-set, led us to fix the size of a Hippocampal Box (HB) as a 30 × 70 × 30 mm³ parallelepiped-shaped box (sizes in the right-to-left, posterior-to-anterior and inferior-to-superior directions respectively). The hippocampal formation and a fraction of the adjacent structures are easily contained in a HB of that size, provided the original MR image is given the correct orientation by rotating it mainly in the sagittal plane. The extraction of the 200 HBs (100 right and 100 left) from the set of 100 available MR images was performed by applying a procedure which requires minimal interactive intervention. As an example, let us briefly illustrate the process for the extraction of the right HBs, the procedure for the extraction of the left ones being the same. Firstly, the images are spatially normalised to the stereotactic space (ICBM152, see Fig. 1) via a 12-parameter affine transformation [9], so that the hippocampi all share a similar position and orientation.

Fig. 1 (a) Axial, (b) sagittal, and (c) coronal views of an image after alignment with the ICBM152 template. On the three slices an outline of the right hippocampal box is also shown

2.2 Extraction of the Hippocampal Boxes

The extraction procedure is based on the registration of the first, manually defined HB (denoted in the following as the fixed image) on the 100 images (the moving images) via a 6-parameter rigid transform. The registration procedure is based on the definition of a distance between two HBs. The distance provides a measure of how well the fixed image is matched by the transformed moving image. This measure forms the quantitative criterion to be optimised over the search space defined by the parameters of the transform. In our procedure we adopted a definition of distance given in terms of the normalised correlation coefficient \(\mathcal{C}\). Given the HBs A and B, each consisting of N voxels (N = 63,000 in our case), the corresponding \(\mathcal{C}\) is given by
$${\mathcal{C}}_{A,B} = \frac{{\sum \nolimits }_{\alpha =1}^{N}\left ({A}_{\alpha } -\overline{A}\right )\left ({B}_{\alpha } -\overline{B}\right )} {\sqrt{{\sum \nolimits }_{\alpha =1}^{N}{\left ({A}_{\alpha } -\overline{A}\right )}^{2}}\sqrt{{\sum \nolimits }_{\alpha =1}^{N}{\left ({B}_{\alpha } -\overline{B}\right )}^{2}}}\,$$
where \({A}_{\alpha }\) and \({B}_{\alpha }\) are the voxel intensities of HBs A and B, and the average intensity \(\overline{I}\) is given by
$$\overline{I} = \frac{1} {N}{\sum \nolimits }_{\alpha =1}^{N}{I}_{ \alpha }\,$$
for I = A, B.
From this quantification of similarity, one derives the following definition of distance between A and B:
$${d}_{A,B} = 1 -{\mathcal{C}}_{A,B}$$
which will be used in the following. This distance is insensitive to multiplicative factors between the two images and produces a cost function with sharp peaks and well defined minima. On the other hand, it has a relatively small capture radius. This last feature, however, is not a severe limitation in our case because all the images are aligned in the same stereotactic space and the hippocampal formations span a relatively small parameter space.
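As an illustration, the distance \(d_{A,B} = 1 - \mathcal{C}_{A,B}\) can be computed as follows. This is a minimal pure-Python sketch operating on flattened voxel lists; the function name is ours, not part of the pipeline code:

```python
import math

def correlation_distance(a, b):
    """d(A, B) = 1 - C(A, B), where C is the normalised correlation
    coefficient between the voxel intensities of two equally sized boxes."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da)) * math.sqrt(sum(y * y for y in db))
    return 1.0 - num / den
```

Note that scaling one box by a positive factor leaves the distance unchanged, which is exactly the insensitivity to multiplicative factors mentioned above.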

The success of the registration of each moving image to the fixed image is quantified by the minimum reached in the distance values. With a moderate computational effort, one could extract all the 99 remaining right HBs by using the first manually defined HB alone, but the quality of the results would not be homogeneously good. In fact the fixed image is successful in extracting the HBs which are not too dissimilar from it. However, due to the ample morphological variability contained in the population of MR images, some HBs are unsatisfactorily extracted or not found at all. Therefore, a more exhaustive approach is required.

The population of the remaining 99 MR images is registered to the fixed image. For each image j (2 ≤ j ≤ 100) this operation produces the value of the score \(d_{1,j}\), stored in the first row of an upper triangular matrix d. We remark that no actual HB extraction is performed at this stage. On the basis of the presently available score list (the first row of matrix d), the second HB is extracted from MR\({}_{({j}^{{_\ast}})}\), where j* is the index of the minimum (non-zero) value of \(d_{1,j}\). Once the second HB is available, the remaining 98 moving images are registered on this new fixed image, and a new set of scores is obtained and stored in the second row of the matrix. The third HB is then extracted from MR\({}_{({k}^{{_\ast}})}\), where k* is the index of the minimum (non-zero) value over \(d_{1,j}\) and \(d_{2,j}\), not taking into account the scores of the already extracted HBs. The procedure for the progressive extraction of all HBs follows this scheme, and the extension to an increasing number of HB examples is obvious.

The illustrated procedure is able to detect hippocampi at any atrophy stage and to extract the corresponding HBs. Except for the extraction of the first HB, the whole process runs automatically, without requiring any manual intervention, and no appreciable drift affecting hippocampus orientation or positioning in the HB is noticeable during the extraction process. Visual assessment by an expert reader of the whole set of 100 exhaustively extracted HBs shows that the level of spatial registration of similar anatomical structures is very high. Such stability is not surprising if one considers the way the whole procedure works. At the beginning, the early extractions exhaust the set of HBs which are very similar to the manually defined HB. Then the procedure starts extracting HBs which are progressively different from the first ones, but diversity creeps into the growing HB database very slowly, thanks to the large size of the population of available MR images. Thus the orientation and position of the essential geometrical features of the hippocampal formation are preserved during the whole process of HB extraction.
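In outline, the greedy order in which HBs are extracted can be sketched as follows. This is a pure-Python toy where the registration scores are precomputed in a matrix d; in the real pipeline each d[i][j] is the outcome of a full rigid registration, and the function name is ours:

```python
def extraction_order(d):
    """Greedy HB extraction order from a score matrix d, where d[i][j] is the
    registration distance between extracted HB i and image j.
    Image 0 plays the role of the first, manually defined HB."""
    n = len(d)
    extracted = [0]
    remaining = set(range(1, n))
    while remaining:
        # next HB: the image whose best (minimum) score against any
        # already-extracted HB is smallest
        j_best = min(remaining, key=lambda j: min(d[i][j] for i in extracted))
        extracted.append(j_best)
        remaining.remove(j_best)
    return extracted
```

Because each newly extracted HB joins the pool of fixed images, diversity enters the database gradually, mirroring the stability argument above.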

2.3 Selection of Templates

As far as computational costs are concerned, this procedure is rather demanding, and it is unreasonable to run it over again to extract the two HBs of any new MR image. Instead of proposing scenarios of n HBs hunting for the (n+1)-th HB, we show that the same hunt can be successfully performed by a decidedly smaller number of properly chosen HBs, named in the following HB Templates (HBTs). The HBTs are selected among all HBs as representative cases of the wide morphological variability of the MTL in a large population of elderly people. This idea is consistent with the fact that in the research field on atrophy progression affecting the medial temporal lobe usually only five scores are considered on the basis of visual MRI inspection: absent, minimal, mild, moderate and severe.

The basic idea of the HBT selection process is to create groups of HBs, or clusters. To classify the n HBs into homogeneous clusters we used hierarchical clustering; the centroid of each cluster is then used as its representative (template). To determine the optimal number of templates we increased the number of clusters k from k = 3 to k = 20. We then evaluated the performance of the different sets of HBTs in extracting all the n = 100 right HBs. The test consisted in extracting all n HBs from all MR images given the set of k HBTs. Each MR image was registered to all the k HBTs and actual HB extraction was performed on the basis of the best score obtained (among the k available scores). The test was repeated for k = 3, …, 20. The procedure generated eighteen sets of n HBs (one for each k) to be compared to the original set extracted with the exhaustive procedure. To quantify the capability of the HBT sets in extracting the n HBs, we calculated the average distance between the newly extracted elements and the original ones. As the average distance decreases when the number of templates grows, we chose the minimum number of templates whose extraction performance (average distance) is below a given threshold.
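The selection step can be sketched as follows: a pure-Python average-linkage agglomerative clustering on a precomputed pairwise distance matrix, with the cluster medoid standing in for the centroid (an assumption of this sketch, since raw HBs cannot be averaged without further registration; the function name is ours):

```python
def select_templates(d, k):
    """Agglomerative (average-linkage) clustering of n HBs given their pairwise
    distance matrix d, stopped at k clusters; returns one medoid per cluster
    to serve as the HB templates."""
    n = len(d)
    clusters = [[i] for i in range(n)]

    def linkage(a, b):
        # average pairwise distance between two clusters
        return sum(d[i][j] for i in a for j in b) / (len(a) * len(b))

    while len(clusters) > k:
        # merge the two closest clusters
        a, b = min(((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
                   key=lambda xy: linkage(clusters[xy[0]], clusters[xy[1]]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    # medoid: the member with the smallest summed distance to its cluster
    return [min(c, key=lambda i: sum(d[i][j] for j in c)) for c in clusters]
```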

With these templates we can find the right and left hippocampal formations in any new MR image, using statistical indicators to assess the precision of the volume extraction. MR images ranging from normality to extreme atrophy can be successfully processed. From the analysis of the hippocampal formations we plan to obtain a set of clinical parameters useful for monitoring the progress of the disease.

A different analysis, performed directly on the hippocampal boxes, can allow the diagnosis of the disease. The boxes are analysed both with linear and with non-linear methods, such as voxel-based morphometry and neural network classifiers. The computed features are chosen to maximise the area under the ROC curve between the Normal and AD cohorts. The same features are then used to classify MCI patients into likely AD converters and non-converters. The predictions of the procedure are subsequently verified against clinical follow-up data, and the sensitivity/specificity for the early detection of AD is computed.

3 Constrained Gaussian Mixture Model Segmentation of the Brain

The constrained Gaussian mixture model (CGMM henceforth) is a procedure for the segmentation of the tissues of the brain based on a Gaussian mixture modelling of the voxel intensity distribution, constrained by the spatial distribution of the tissues. In particular, the voxel distribution can be modelled as
$$f\left ({v}_{t}\vert \Theta \right ) ={ \sum \nolimits }_{i=1}^{n}{\alpha }_{ i}{f}_{i}\left ({v}_{t}\vert {\mu }_{i},{\Sigma }_{i}\right )$$
where \({v}_{t}\) is the space-intensity feature vector associated to the t-th voxel, n is the number of Gaussian components of the mixture, and \({\mu }_{i}\), \({\Sigma }_{i}\) and \({\alpha }_{i}\) are, respectively, the mean, covariance matrix and mixture coefficient of the i-th Gaussian \({f}_{i}\). Further technical details can be found in [10].
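As an illustration, the mixture density above can be evaluated as in the following pure-Python sketch, which assumes diagonal covariance matrices for simplicity (the helper `mixture_pdf` is ours, not part of the CGMM code of [10]):

```python
import math

def mixture_pdf(v, alphas, mus, sigmas):
    """Evaluate f(v | Theta) = sum_i alpha_i N(v; mu_i, Sigma_i) for a
    space-intensity feature vector v, with diagonal covariances given as
    per-component lists of variances."""
    total = 0.0
    for alpha, mu, sig in zip(alphas, mus, sigmas):
        # exponent and normalisation of a diagonal multivariate normal
        expo = sum((x - m) ** 2 / (2.0 * s) for x, m, s in zip(v, mu, sig))
        norm = math.prod(2.0 * math.pi * s for s in sig) ** 0.5
        total += alpha * math.exp(-expo) / norm
    return total
```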
To adapt the CGMM framework to our case, where we need to segment grey matter in the hippocampal box, the following parameters must be set:
  • Number of tissues into which we want to segment the MRI box; in our case we are interested in cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM).

  • Number of Gaussians used for modelling the distribution of the intensity feature.

  • Number of clusters associated to each Gaussian.

  • Number of Gaussians which model each tissue.

In order to find the best values for these parameters we adopted a strategy based on the analysis of the box histograms. The four histograms shown in Fig. 2 display a similar grey-level distribution for the four HBs. This suggests that it is suitable to use a number of Gaussians equal to the number of clusters. A discussion of the experimental choice of the best values is postponed to Sect. 5.

Fig. 2 (a–d): Grey level histograms of four different Hippocampal Boxes

4 Hippocampal Mask Template Generation

Since there is no statistical atlas of the hippocampus, a set of template masks, i.e. HBs where the hippocampus has been manually segmented, can be used for the segmentation. What we present here is a method for combining the set of template masks in order to obtain a single template mask. The template mask thus found is then used, together with a mask of the grey matter, for determining the hippocampus in the input hippocampal box. The grey-matter mask is obtained by a three-class classifier (white matter, grey matter, cerebrospinal fluid). Our pipeline includes different classifiers for performing this task.

More precisely, our problem is to create a single hippocampal mask template from a set M = {M_1, M_2, …, M_n} of n manually segmented raw hippocampal boxes (RHBs) [3], and to use this derived mask for the segmentation of the hippocampus from the current hippocampal box H_0.

The first step is to warp all the M_i onto H_0 (using Thirion's demons registration algorithm [19]). This produces a set of n morphed RHBs M′ = {M′_1, M′_2, …, M′_n} and a set of n vector fields which measure the magnitude of the deformation. At this point we want to generate a template representative of the set M′. To achieve this we adopted the STAPLE algorithm.


4.1 STAPLE

STAPLE (Simultaneous Truth and Performance Level Estimation) [21] is an algorithm for the simultaneous estimation of the ground truth and of the performance level of a set of classifiers. A probabilistic estimate of the ground truth is produced by an optimal combination of all classifiers, each one weighted by its performance level. The algorithm is formulated as an instance of Expectation Maximisation (EM) where:

  • The segmentation of any classifier for all voxels is an observable.

  • The “true” segmentation is a hidden binary variable for all voxels.

  • The performance level is represented by the sensitivity and specificity parameters:
    • sensitivity p: true positive rate.

    • specificity q: true negative rate.

The Algorithm

Let us consider a volume of N voxels and a set of R binary segmentations of this volume, and let us assume that:
  • p = (p_1, p_2, …, p_R)^T is a column vector of R elements, where each element is the sensitivity of the corresponding classifier.

  • q = (q_1, q_2, …, q_R)^T is a column vector of R elements, where each element is the specificity of the corresponding classifier.

  • D is an N × R matrix of the classifier decisions for each of the N voxels of the R segmentations.

  • T is a vector of N elements which represents the hidden true segmentation.

Now, we can consider:
  • (D, T): the complete data.

  • f(D, T ∣ p, q): the probability function of the complete data.

We want to determine the pairs (p, q) maximising the logarithm of the likelihood of the complete data:
$$(\hat{p},\hat{q}) {=\arg \max }_{p,q}\ln f(D,T\mid p,q)$$
Now let:
  • θ_j = (p_j, q_j)^T  ∀ j ∈ 1…R: the performance parameters of classifier j.

  • θ = [θ_1 θ_2 … θ_R]: the complete set of performance parameters.

  • f(D, T ∣ θ): the probability function of the complete data.

The complete-data log-likelihood is
$$\ln {L}_{c}\{\theta \} =\ln f(D,T\mid \theta )$$
Since T is not observable, the EM algorithm maximises instead the incomplete-data log-likelihood:
$$\ln L\{\theta \} =\ln f(D\mid \theta )$$

This is solved by iterating the following EM steps:

E-step: estimation of \(Q(\theta \mid {\theta }^{(k)}) ={ \sum \nolimits }_{T}\ln f(D,T\mid \theta )\,f(T\mid D,{\theta }^{(k)})\).

A compact expression for this step is:
$$\begin{array}{rcl}{ W}_{i}^{(k)} \equiv f\left ({T}_{ i} = 1\mid {D}_{i},{p}^{(k-1)},{q}^{(k-1)}\right ) = \frac{{a}_{i}^{(k-1)}} {{a}_{i}^{(k-1)} + {b}_{i}^{(k-1)}}& &\end{array}$$
where \({W}_{i}^{(k)}\) indicates the probability that voxel i is labelled 1 in the true segmentation; \({a}_{i}^{(k-1)}\) and \({b}_{i}^{(k-1)}\) are defined as follows:
$$\begin{array}{rcl}{ a}_{i}^{(k-1)}& \equiv & f({T}_{ i} = 1){\prod \nolimits }_{j}f\left ({D}_{ij}\mid {T}_{i} = 1,{p}_{j}^{(k-1)},{q}_{ j}^{(k-1)}\right ) \\ & =& f({T}_{i} = 1){\prod \nolimits }_{j:{D}_{ij}=1}{p}_{j}^{(k-1)}{ \prod \nolimits }_{j:{D}_{ij}=0}\left (1 - {p}_{j}^{(k-1)}\right ) \\ \end{array}$$
$$\begin{array}{rcl}{ b}_{i}^{(k-1)}& \equiv & f({T}_{ i} = 0){\prod \nolimits }_{j}f\left ({D}_{ij}\mid {T}_{i} = 0,{p}_{j}^{(k-1)},{q}_{ j}^{(k-1)}\right ) \\ & =& f({T}_{i} = 0){\prod \nolimits }_{j:{D}_{ij}=0}{q}_{j}^{(k-1)}{ \prod \nolimits }_{j:{D}_{ij}=1}\left (1 - {q}_{j}^{(k-1)}\right ) \\ \end{array}$$
where j: D_{ij} = 1 is the set of indexes for which the decision of classifier j for voxel i is 1; f(T_i = 1) and f(T_i = 0) are the a priori probabilities of T_i = 1 and T_i = 0 respectively; \(f({D}_{ij}\mid {T}_{i} = 1,{p}_{j}^{(k-1)},{q}_{j}^{(k-1)})\) represents the conditional probability of the j-th labelling for voxel i, given the parameters and the true segmentation label equal to 1.
M-step: maximisation of \(Q(\theta \mid {\theta }^{(k)})\) over the space of the parameters θ, i.e. finding \({\theta }^{(k+1)}\) such that:
$$Q({\theta }^{(k+1)}\mid {\theta }^{(k)}) \geq Q(\theta \mid {\theta }^{(k)})\quad \forall \theta $$
Using the \({W}_{i}^{(k-1)}\) estimated in the E-step, it is possible to find the optimal parameters with the following formulae:
$$\begin{array}{rcl}{ p}_{j}^{(k)}& =& \frac{{\sum \nolimits }_{i:{D}_{ij}=1}{W}_{i}^{(k-1)}} {{\sum \nolimits }_{i}{W}_{i}^{(k-1)}} \\ {q}_{j}^{(k)}& =& \frac{{\sum \nolimits }_{i:{D}_{ij}=0}\left (1 - {W}_{i}^{(k-1)}\right )} {{\sum \nolimits }_{i}\left (1 - {W}_{i}^{(k-1)}\right )} \end{array}$$
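The two EM steps above can be sketched in pure Python as follows. This is a minimal illustration of the update formulae, not the reference implementation of [21]; the function name, the fixed iteration count and the passing of the prior γ as an argument are our choices:

```python
def staple(D, p, q, gamma, n_iter=30):
    """EM iterations of STAPLE for binary segmentations.
    D: N x R matrix of 0/1 decisions (row = voxel, column = classifier).
    p, q: initial sensitivities and specificities per classifier.
    gamma: global prior f(T_i = 1). Returns (W, p, q)."""
    N, R = len(D), len(D[0])
    p, q = list(p), list(q)
    W = [0.0] * N
    for _ in range(n_iter):
        # E-step: posterior probability W_i that the true label is 1
        for i in range(N):
            a, b = gamma, 1.0 - gamma
            for j in range(R):
                if D[i][j] == 1:
                    a *= p[j]
                    b *= 1.0 - q[j]
                else:
                    a *= 1.0 - p[j]
                    b *= q[j]
            W[i] = a / (a + b)
        # M-step: re-estimate sensitivity and specificity of each classifier
        sw = sum(W)
        swc = sum(1.0 - w for w in W)
        for j in range(R):
            p[j] = sum(W[i] for i in range(N) if D[i][j] == 1) / sw
            q[j] = sum(1.0 - W[i] for i in range(N) if D[i][j] == 0) / swc
    return W, p, q
```

With unanimous classifiers the posterior W quickly saturates towards 0 or 1, and p, q approach 1, as expected from the update formulae.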

4.2 Our Strategy

Initialisation Strategy

A blind use of the above procedure for determining a single template mask can lead to errors. This is because each of the original template masks M_i is representative of a class of hippocampi, and therefore using all of them with equal weight on each input H_0 is not correct. We solve this problem by taking into account the degree of deformation of each M_i onto H_0 to weight the contribution of each template mask.

The registration step produces a deformation matrix which can be viewed as composed of deformation vectors (see Fig. 3).

Fig. 3 Deformation matrix. The vector magnitude increases from dark to light

From the deformation matrix of an HB template we can measure its similarity to the hippocampal box H_0. For this purpose, we use the average modulus of the vector field; these values are then translated into sensitivity and specificity values \(({p}_{i}^{0},{q}_{i}^{0})\) according to the following rules:
  • The HB template with the lowest average modulus (minHB) is assigned the values \({p}_{\mathit{best}}^{0} = {q}_{\mathit{best}}^{0} = 0.99\).

  • The HB template with the highest average modulus (maxHB) is assigned the values \({p}_{\mathit{worst}}^{0} = {q}_{\mathit{worst}}^{0} = 0.01\).

  • The remaining HB templates are assigned values according to the following formula, for all i ≠ best, i ≠ worst, i = 1, …, number of classifiers:
    $${p}_{i}^{0} = {q}_{ i}^{0} = 0.99 - \frac{{\mathit{HB}}_{i} -\mathit{minHB}} {\mathit{maxHB} -\mathit{minHB}}.$$
In Table 1 we show the pairs (p, q) calculated for eight HB templates. In our case, differently from the strategy described in [21], we do not need to initialise the reference W_0 with the average volume of the RHBs, so we set W_0 to a zero matrix. Table 2 shows a summary of the template generation algorithm.
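The initialisation rule above can be sketched as follows (a hypothetical helper of ours, not the pipeline's actual code; note that the worst template is special-cased, since the linear formula alone would yield a negative value for it):

```python
def init_pq(mean_moduli):
    """Assign initial (p, q) to each HB template from the mean modulus of its
    deformation field: the best-matching template (smallest modulus) gets 0.99,
    the worst gets 0.01, the rest follow the linear rule
    p_i = q_i = 0.99 - (HB_i - minHB) / (maxHB - minHB)."""
    lo, hi = min(mean_moduli), max(mean_moduli)
    pq = []
    for m in mean_moduli:
        if m == lo:
            pq.append(0.99)
        elif m == hi:
            pq.append(0.01)
        else:
            pq.append(0.99 - (m - lo) / (hi - lo))
    return pq
```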

Table 1 p and q calculated according to our strategy: mean norm of the deformation field and the resulting (p, q) values for HB Templates 1–8

Table 2 Summary of the algorithm

Convergence Check

The use of the EM algorithm guarantees convergence to a local optimum: since STAPLE estimates both the ground truth and the performance level of the classifiers, convergence can be controlled simply by monitoring these variables. A simple measure of convergence is the variation of the sum over the voxels of the probability of the true segmentation,
$${S}_{k} ={ \sum \nolimits }_{i=1}^{N}{W}_{ i}$$
computed at each iteration k. The iterations can be stopped when the difference \({S}_{k} - {S}_{k-1}\) is small enough.
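A convergence test based on S_k can be written as the following small helper (the function name and the tolerance value are our choices):

```python
def converged(W_prev, W, tol=1e-4):
    """Stop when the total posterior mass S_k = sum_i W_i changes
    by less than tol between two consecutive iterations."""
    return abs(sum(W) - sum(W_prev)) < tol
```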

Model Parameters

The selection of different a priori probabilities f(T i = 1), that can vary spatially or globally, can move the local maxima to which the algorithm converges. A probability f(T i = 1) that changes spatially is a good choice for those structures for which a probabilistic atlas is available.

Our interest is oriented towards those structures for which no probabilistic atlas is available: in this case we can use a single global a priori probability \(\gamma = f({T}_{i} = 1)\ \forall i\). Such a probability encodes all the information available, before knowing the results of the segmentations, about the probability of the structure we want to segment. In practice such information is not easily available, and it is therefore more convenient to estimate γ from the segmentations, as
$$\gamma = \frac{1} {RN}{\sum \nolimits }_{j=1}^{R}{ \sum \nolimits }_{i=1}^{N}{D}_{ ij}$$
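The estimate of γ is simply the fraction of positive decisions over all classifiers and voxels, and can be computed directly from the decision matrix D (hypothetical helper of ours):

```python
def estimate_gamma(D):
    """Global prior gamma = (1 / RN) * sum_j sum_i D_ij, i.e. the fraction
    of positive decisions in the N x R decision matrix D."""
    N, R = len(D), len(D[0])
    return sum(sum(row) for row in D) / (R * N)
```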

5 Experimental Assessment

In this section we first discuss how the number of Gaussians for the CGMM segmentation module was experimentally determined, and then we show a result of the hippocampal mask template generation module.

The hippocampus is composed of GM, whose mean intensity lies between the mean value of the CSF and that of the WM. To find a good number of Gaussians for modelling the histograms, we first note that the right number is larger than three. Fig. 4 shows the histograms obtained as a Gaussian mixture fit of the histogram in Fig. 4a, using an increasing number of Gaussians. Here we focus our attention on the histogram of one particular HB, but we found that the same arguments hold for any other HB.

Fig. 4 Histogram modelling with the Gaussian mixture model. (a) Original histogram; (b) result using three Gaussians; (c) result using four Gaussians; (d) result using five Gaussians; (e) result using six Gaussians; (f) result using seven Gaussians

First of all, we note that using three (Fig. 4b) or four (Fig. 4c) Gaussians leads to a bad match. A much better result is obtained with five Gaussians (Fig. 4d). In particular, moving from left to right along the x axis, we can assign the first Gaussian to the CSF and the second and third to the GM. It is more difficult to decide whether the fourth Gaussian should be assigned to the GM or to the WM. If it is assigned to the GM we risk over-sampling the GM; on the other hand, if it is assigned to the WM the GM might be underestimated, leading to errors in the hippocampus segmentation, as the hippocampus is part of the grey matter. The extraction process uses the GM segmentation to refine the result of the STAPLE module, which often expands into areas that have not been classified as grey matter. Therefore, it is much better to assign more Gaussians to the grey matter.

Fig. 4f shows the outcome of modelling the histogram with seven Gaussians. The result is better than with five Gaussians; however, it is unclear whether the second Gaussian from the left should be assigned to the CSF or to the GM: either way we end up with an under- or over-sampling of the GM with respect to the CSF.

The result with six Gaussians is shown in Fig. 4e: in this case there is no uncertainty in assigning the Gaussians on the left of the histogram, unlike the case of Fig. 4f. Therefore the best choice is to model the histogram of the hippocampal box using six Gaussians. The result of the CGMM segmentation of a HB using the parameters reported in Table 3 is shown in Fig. 5.

Table 3 Parameters of the CGMM segmentation used for our experiments (number of clusters and number of Gaussians)

Fig. 5 Left: sagittal view of a HB. Right: segmentation obtained by the CGMM framework with the parameters in Table 3

We used the hippocampus template determined with the method described above to refine a segmentation of the grey matter obtained with the constrained Gaussian mixture method described earlier in this chapter.

Since there is no ground truth, the results have been visually evaluated by a small set of experts, and have been judged as satisfactory. However, we are planning a complete and exhaustive evaluation of the whole pipeline. Results are shown in Figs. 6–8.

Fig. 6 (a) Box HB; (b–i) template boxes M_i warped onto the box HB


3D rendering of the hippocampal box template in Fig. 7. The mesh indicates the STAPLE result, which covers the grey matter segmentation of the hippocampal box


Hippocampal mask template obtained by combining the eight templates M_i in Fig. 6

6 Conclusions

In this work we presented a novel method for the segmentation of the hippocampus in MR images, based on the use of template masks. Our method is implemented in a pipeline that aims to perform an automated analysis of the hippocampus, starting from its segmentation, in order to carry out a morphological analysis that can be very useful for the early diagnosis of neurodegenerative diseases such as Alzheimer’s disease. The main idea behind the pipeline is to exploit a side effect of the STAPLE algorithm to produce a single, meaningful template, which can refine a rough segmentation produced by a segmentation algorithm based on the CGMM framework. The next steps of our work are to obtain a ground truth of segmented hippocampi in order to validate the whole pipeline and, consequently, to implement a strategy for producing parameters arising from the morphological analysis of the hippocampus, such as clinical scores.



Acknowledgements This work was partially funded by INFN within the MAGIC-5 research project.


References

  1. Atiya, M., Hyman, B., Albert, M., Killiany, R.: Structural magnetic resonance imaging in established and prodromal Alzheimer disease: a review. Alz. Dis. Assoc. Dis. 17(3), 177–195 (2003)
  2. Braak, H., Braak, E.: Neuropathological staging of Alzheimer-related changes. Acta Neuropathol. 82, 239–259 (1991)
  3. Calvini, P., Chincarini, A., Donadio, S., Gemme, G., Squarcia, S., Nobili, F., Rodriguez, G., Bellotti, R., Catanzariti, E., Cerello, P., Mitri, I.D., Fantacci, M.: Automatic localization of the hippocampal region in MR images to assess early diagnosis of Alzheimer's disease in MCI patients. In: Nuclear Science Symposium Conference Record, pp. 4348–4354 (2008)
  4. Calvini, P., Chincarini, A., Gemme, G., Penco, M., Squarcia, S., Nobili, F., Rodriguez, G., Bellotti, R., Catanzariti, E., Cerello, P., Mitri, I.D., Fantacci, M.: Automatic analysis of medial temporal lobe atrophy from structural MRIs for the early assessment of Alzheimer disease. Med. Phys. 36(8), 3737–3747 (2009)
  5. Chetelat, G., Baron, J.: Early diagnosis of Alzheimer disease: contribution of structural neuroimaging. Neuroimage 18(2), 525–541 (2003)
  6. Chupin, M., Chételat, G., Lemieux, L., Dubois, B., Garnero, L., Benali, H., Eustache, F., Lehéricy, S., Desgranges, B., Colliot, O.: Fully automatic hippocampus segmentation discriminates between early Alzheimer's disease and normal aging. In: Proceedings of the 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 97–100 (2008)
  7. Dubois, B., Albert, M.: Amnestic MCI or prodromal Alzheimer disease? Lancet Neurol. 3(4), 246–248 (2004)
  8. Duchesne, S., Pruessner, J.C., Collins, D.L.: Appearance-based segmentation of medial temporal lobe structures. Neuroimage 17(2), 515–531 (2002)
  9. Evans, A., Kambler, M., Collins, D., MacDonald, D.: An MRI-based probabilistic atlas of neuroanatomy. In: Shorvon, S., Fish, D., Andermann, F., Bydder, G., Stefan, H. (eds.) Magnetic Resonance Scanning and Epilepsy. NATO ASI Series A, Life Sciences, vol. 264, pp. 263–274. Plenum Press, New York (1994)
  10. Greenspan, H., Ruf, A., Goldberger, J.: Constrained Gaussian mixture model framework for automatic segmentation of MR brain images. IEEE Trans. Med. Imag. 25(9), 1233–1245 (2006)
  11. Joshi, S., Pizer, S., Fletcher, P., Yushkevich, P., Thall, A., Marron, J.: Multiscale deformable model segmentation and statistical shape analysis using medial description. IEEE Trans. Med. Imag. 21(5), 538–550 (2002)
  12. Kantarci, K., Jack, C.: Neuroimaging in Alzheimer disease: an evidence based review. Neuroimag. Clin. N. Am. 13(2), 197–209 (2003)
  13. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4), 321–331 (1988)
  14. Kelemen, A., Székely, G., Gerig, G.: Elastic model-based segmentation of 3D neurological data sets. IEEE Trans. Med. Imag. 18(10), 828–839 (1999)
  15. Lee, W., Gow, K., Zhao, Y., Staff, R.: Hybrid segmentation of the hippocampus in MR images. In: Proceedings of the 13th European Signal Processing Conference, Antalya, Turkey (2005)
  16. Miller, J., Lorensen, W., O'Bara, R., Wozny, M.: Geometrically deformed models: a method for extracting closed geometric models from volume data. SIGGRAPH Comput. Graph. 25, 217–226 (1991)
  17. Petersen, R., Smith, G., Waring, S., Tangalos, E., Kokmen, E.: Mild cognitive impairment: clinical characterization and outcome. Arch. Neurol. 56(3), 303–308 (1999)
  18. Shen, D., Moffat, S., Resnick, S.M., Davatzikos, C.: Measuring size and shape of the hippocampus in MR images using a deformable shape model. NeuroImage 15, 422–434 (2002)
  19. Thirion, J.: Image matching as a diffusion process: an analogy with Maxwell's demons. Med. Image Anal. 2(3), 243–260 (1998)
  20. Thompson, P., Hayashi, K., deZubicaray, G., Janke, A., Rose, S., Semple, J., Hong, M., Herman, D., Gravano, D., Doddrell, D., Toga, A.: Mapping hippocampal and ventricular change in Alzheimer's disease. Neuroimage 22(4), 1754–1766 (2004)
  21. Warfield, S., Zou, K., Wells, W.: Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation. IEEE Trans. Med. Imag. 23, 903–921 (2004)
  22. Winblad, B., Palmer, K., Kivipelto, M., Jelic, V., Fratiglioni, L., Wahlund, L., Nordberg, A., Backman, L., Albert, M., Almkvist, O., Arai, H., Basun, H., Blennow, K., deLeon, M., DeCarli, C., Erkinjuntti, T., Giacobini, E., Graff, C., Hardy, J., Jack, C., Jorm, A., Ritchie, K., van Duijn, C., Visser, P., Petersen, R.: Mild cognitive impairment—beyond controversies, towards a consensus: report of the international working group on mild cognitive impairment. J. Intern. Med. 256(3), 240–246 (2004)

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  • M. Aiello (1), Email author
  • P. Calvini (2)
  • A. Chincarini (3)
  • M. Esposito (1)
  • G. Gemme (3)
  • F. Isgrò (1)
  • R. Prevete (1)
  • M. Santoro (4)
  • S. Squarcia (2)
  1. Dipartimento di Scienze Fisiche, Università di Napoli Federico II, and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Italy
  2. Dipartimento di Fisica, Università di Genova, and Istituto Nazionale di Fisica Nucleare, Sezione di Genova, Italy
  3. Istituto Nazionale di Fisica Nucleare, Sezione di Genova, Italy
  4. Dipartimento di Informatica e Scienze dell'Informazione, Università di Genova, Genova, Italy
