
Brain Tumor Segmentation with Optimized Random Forest

  • László Lefkovits
  • Szidónia Lefkovits
  • László Szilágyi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10154)

Abstract

In this paper we propose and tune a discriminative model based on Random Forest (RF) for brain tumor segmentation in multimodal MR images. The goal of tuning is to establish the optimal parameter values and the most significant constraints of the discriminative model. While the RF classifier is being built, the algorithm evaluates the importance of variables, the proximities between data instances and the generalized error. These three properties of RF are employed to optimize the segmentation framework: first, the RF is tuned for variable importance evaluation, and afterwards it is used to optimize the segmentation framework itself. The framework was tested on unseen images from the BRATS database. The results obtained are similar to the best ones presented in previous BRATS Challenges.

Keywords

Random forest · Feature selection · Variable importance · Statistical pattern recognition · MRI segmentation

1 Introduction

MR imaging is used increasingly in diagnosis as MR equipment spreads world-wide. In many cases a correct diagnosis can help avoid complicated surgery, or allow the medical staff to prepare better for the intervention. Automatic image segmentation is one of the techniques supporting this process: in order to facilitate a faster diagnosis, a robust and reliable automatic segmentation system is needed. In this paper we propose a discriminative model for brain tumor segmentation in multimodal MRI. The main goal of the model is the evaluation and selection of low-level image features and the optimization of the Random Forest (RF) classifier for the segmentation task. This is achieved in two phases: first, the RF is optimized for variable importance evaluation; second, the RF structure is optimized in order to improve segmentation performance. Many discriminative segmentation models were proposed and tested in the previous four BRATS Challenges (2012–2015) [4]. The selection of the employed features was based on the intuition and experience of the authors, and the exact definition and usage of the features in these segmentation systems is rarely disclosed. Usually, the systems work with large feature sets that have hardly any theoretical or practical analysis behind their usefulness.

In the following we present the best-performing systems based on discriminative models used in multimodal MR tumor segmentation.

Zikic et al. [16] and their research team from Microsoft created a discriminative model that extracts attributes from the image intensities as well as from a generative model. In their approach, 2000 context-aware attributes are defined. As a classification ensemble, they use 40 decision trees, each having a depth of 20. Geremia et al. [7] built a discriminative model that associates a vector of 412 features to each point. The classification algorithm is an ensemble of decision trees trained on a set of images containing 20 High Grade (HG) and 10 Low Grade (LG) cases. Goetz et al. [8] built a model that uses 208 attributes, 52 for each of the four image types. The classifier is an ensemble of Extra-Randomized Trees (ERT). Reza and Iftekharuddin [14] created a discriminative model which only processes planar images, i.e. axial sections of the 3D MRI. This model uses no a priori information about the anatomical structure of the brain. The system works only with the intensity information of the pixels in multimodal images, extracting special attributes based on textons, textures and the fractal dimension. The classification algorithm is again the RF, and the final decision is made by weighted voting. Remarkable performance is obtained owing to the texture information.

A more reliable model can be built by selecting features according to their importance for the classification task. An adequate feature set comes with the following advantages:
  • increases the predictive accuracy of the classifier;

  • reduces the cost of data collection;

  • enhances learning efficiency;

  • reduces the computational complexity of the resulting model.

The rest of this paper is structured as follows: in Sect. 2 we describe the components of our model (Sect. 2.1 Database, Sect. 2.2 Preprocessing, Sect. 2.3 Feature Extraction, Sect. 2.4 Random Forest, Sect. 2.5 Feature Selection, Sect. 2.6 Post-processing). The model presentation is followed by the fine-tuning and optimization of the Random Forest parameters used for segmentation, in Sect. 3. Finally, our experimental results are described and compared to other systems from the BRATS Challenge.

2 The Proposed Discriminative Model

The discriminative model proposed is similar to previously used models, but in this article we emphasize some aspects which make an important contribution to the performance reached. The performance of a segmentation model built on a discriminative function is mainly determined by three important factors: the quality of the annotated image database, the classification algorithm applied and the feature set used.

Our model differs from the standard discriminative model by the feature selection step. In this step, the feature selection algorithm consists of evaluating the variable importance for the defined segmentation task. At the same time it allows us to test new low-level features that may improve the segmentation performance or prove more important than the existing features.


2.1 Database

The most important image database for brain tumor segmentation was created during the BRATS Challenges (2012–2015), thanks to Menze and Jakab [13]. This database is expanded with every challenge, year after year, and has become a standard in the field. The BRATS 2015 dataset contains 220 HG and 54 LG brain images with gliomas and ensures sufficient diversity, which is a requirement for a well-performing database. All cases were acquired with similar protocols and contain four types of images: T1, T1c (with gadolinium contrast agent), T2 and FLAIR. All images were skull-stripped, resampled to 1 mm resolution in each direction and registered to the corresponding T1c image. The annotations were made by experts using an accurate protocol [10] and contain four different classes: edema, enhanced tumor, non-enhanced tumor and necrotic core.

The four classes defined by expert annotation are very hard to reproduce by automatic segmentation. A more realistic evaluation of the segmentation results can be made by considering only three classes, which are considered more representative in clinical practice: Whole Tumor - WT (including all four tumor structures), Tumor Core - TC (including all tumor structures except for edema) and Active Tumor - AT (only the enhancing core). Of course, there are some differences between the annotations made by the same/different experts. These variations are specified in [13] and can be considered the upper limit of the reachable performance. In this work we performed the optimization considering only two classes: WT and TC (TC includes AT).
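
A minimal sketch of this class merging and of the Dice evaluation, assuming the usual BRATS label convention (1 = necrotic core, 2 = edema, 3 = non-enhancing tumor, 4 = enhancing tumor); function names are illustrative:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary volumes."""
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom > 0 else 1.0

def merged_regions(labels: np.ndarray):
    """Merge the four annotated structures into WT, TC and AT masks."""
    wt = labels > 0                   # whole tumor: every tumor structure
    tc = np.isin(labels, (1, 3, 4))   # tumor core: everything except edema
    at = labels == 4                  # active tumor: enhancing core only
    return wt, tc, at

# e.g. dice(merged_regions(prediction)[0], merged_regions(annotation)[0]) gives DICE-WT
```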

The principle of statistical pattern recognition is the assignment of a set of features to a well-delimited region or to every voxel. In this way the database used for statistical processing contains a large number of instances and increases in size with each newly added feature. The database also grows drastically with the number of cases: the training database, containing 274 cases, reaches a storage size of 21 GB in uncompressed format. The resulting database increases linearly with each feature added; thus, if one decides to use a set of about 1000 features, a total of 30 TB of information has to be processed. The more data can be processed and included in the training phase, the better the performance of the classifier obtained. In our work we tried to handle this huge, unmanageable dataset in two ways: (1) by reducing the irrelevant and redundant features; (2) by eliminating similar cases.
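
The instance-side reduction can be sketched as follows, using the subsampling ratios quoted in Sect. 2.5 (function and variable names are illustrative):

```python
import numpy as np

def subsample_instances(features, labels, healthy_keep=0.1, global_keep=0.05,
                        seed=0):
    """Balance the classes (keep ~1 of 10 healthy voxels), then subsample ~20:1."""
    rng = np.random.default_rng(seed)
    healthy = np.flatnonzero(labels == 0)
    tumor = np.flatnonzero(labels > 0)
    kept = rng.choice(healthy, int(len(healthy) * healthy_keep), replace=False)
    tdb_idx = np.concatenate([tumor, kept])               # balanced TDB indices
    stdb_idx = rng.choice(tdb_idx, int(len(tdb_idx) * global_keep), replace=False)
    return features[stdb_idx], labels[stdb_idx]           # sampled training database
```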

2.2 Preprocessing

In this work we have dealt with three important artifacts by applying inhomogeneity correction, noise filtering, and intensity standardization.

The correction of MRI inhomogeneity can be done using only the intensity information and some a priori knowledge of the anatomical tissue structure. In our previous work [11] we evaluated three inhomogeneity reduction methods. The best-performing and most widely accepted algorithm is N4 filtering [15]. For inhomogeneity reduction in MR images, we have applied the N4 filter implemented in the ITK package [2].
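
A possible realization of this step with the SimpleITK wrapper of ITK, rather than the ITK pipeline actually used; file names and iteration counts are placeholders:

```python
import SimpleITK as sitk

image = sitk.Cast(sitk.ReadImage("t1c.nii.gz"), sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)          # rough head mask
corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrector.SetMaximumNumberOfIterations([50] * 4)     # 50 iterations on 4 resolution levels
corrected = corrector.Execute(image, mask)
sitk.WriteImage(corrected, "t1c_n4.nii.gz")
```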

The most difficult task is to evaluate the noise types and their levels in real images and to find the most suitable denoising method. Since we could not find a generally accepted method that fulfils the requirements of discriminative segmentation, we decided to use the anisotropic diffusion filtering proposed in [6]. Its implementation can be found in the ITK [2] software package.
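
A similar sketch of the denoising step with the gradient anisotropic diffusion filter of ITK/SimpleITK; the parameter values below are illustrative only:

```python
import SimpleITK as sitk

image = sitk.Cast(sitk.ReadImage("t1c_n4.nii.gz"), sitk.sitkFloat32)
diffusion = sitk.GradientAnisotropicDiffusionImageFilter()
diffusion.SetTimeStep(0.03)             # below the 3D stability limit of 0.0625
diffusion.SetConductanceParameter(2.0)  # edge-preservation strength
diffusion.SetNumberOfIterations(5)
denoised = diffusion.Execute(image)
sitk.WriteImage(denoised, "t1c_denoised.nii.gz")
```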

It is desirable to have the same intensity value for a given tissue in distinct images, regardless of the acquisition equipment and moment. The solution to this problem is to transform the histogram in order to match it to a predetermined shape. We extracted the quartile points of the histogram obtained and performed a linear transformation in such a way that the first and third quartiles take predefined values.
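
A minimal sketch of this quartile-based linear standardization; the target values are arbitrary placeholders:

```python
import numpy as np

def standardize_quartiles(volume, brain_mask, q1_target=100.0, q3_target=300.0):
    """Linearly map intensities so that Q1 and Q3 of the brain voxels reach
    predefined target values."""
    q1, q3 = np.percentile(volume[brain_mask], [25, 75])
    scale = (q3_target - q1_target) / (q3 - q1)
    return (volume - q1) * scale + q1_target
```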

In preprocessing, we corrected these three artifacts in the following order: bias field correction (ITK - N4 filtering), then noise filtering (ITK - anisotropic diffusion filtering), and finally, the proposed intensity standardization.

2.3 Feature Extraction

Image processing offers many procedures for extracting characteristics from images. In the field of tumor segmentation there are many studies that try to find characteristics highly correlated with the appearance of brain tumors in MR images. Despite these research efforts, no proper feature set has been found yet. That is the reason for using a large feature set, even though many of the features have little correlation with the goal of classification. In our approach we started by defining a large feature set, which is later reduced in order to eliminate the irrelevant or noisy features. We defined many low-level characteristics that describe the intensities in the neighborhood (surrounding volume with a radius of 2–9 pixels) of the studied voxel. We used the following features: first order operators (mean, standard deviation, max, min, median, gradient); higher order operators (Laplacian, difference of Gaussians, entropy, curvatures, kurtosis, skewness); texture features (Gabor filter); spatial context features (symmetry, projections, neighborhoods). By extracting all of these features for every voxel in all modalities, we transform the image segmentation task into a statistical pattern recognition problem. In order to deal with such a large number of features, it is necessary to reduce the number of attributes used, and the appropriate selection of attributes has to be done according to the goal of classification. First, we extracted 240 image features for each modality, obtaining a feature vector with 960 elements. All these features are defined in the Weka Segmentation plugin from the Fiji package [1].
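
The sketch below only illustrates the principle for a few of the listed neighborhood features, computed with scipy.ndimage rather than with the Weka Segmentation plugin actually used; radii and names are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi

def local_features(volume: np.ndarray, radius: int = 2) -> np.ndarray:
    """Return an (n_voxels, n_features) matrix of simple neighborhood descriptors."""
    size = 2 * radius + 1
    mean = ndi.uniform_filter(volume, size)
    mean_sq = ndi.uniform_filter(volume * volume, size)
    feats = [
        mean,                                                   # local mean
        np.sqrt(np.maximum(mean_sq - mean * mean, 0.0)),        # local standard deviation
        ndi.maximum_filter(volume, size),                       # local max
        ndi.minimum_filter(volume, size),                       # local min
        ndi.gaussian_gradient_magnitude(volume, sigma=radius),  # gradient magnitude
        ndi.gaussian_laplace(volume, sigma=radius),             # Laplacian
        ndi.gaussian_filter(volume, radius)
        - ndi.gaussian_filter(volume, 2 * radius),              # difference of Gaussians
    ]
    return np.stack([f.ravel() for f in feats], axis=1)
```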

2.4 Random Forest

RF is a powerful algorithm for segmentation purposes [5]. The RF has five important characteristics that make it applicable to segmentation tasks: it manages large databases easily; handles thousands of variables; estimates the importance of the variables used in classification; is able to balance the error in unbalanced datasets; and produces an internal unbiased estimate of the generalized error.

The RF classifier is an ensemble of binary trees built on two random processes: the randomly built bootstrap set and the random feature selection in each node [5]. The construction of each tree relies on two sets: the bootstrap set, containing the instances used for building the tree, and the OOB (out-of-bag) set, containing the test instances not included in the bootstrap set. The bootstrap set is obtained by randomly sampling the training set with replacement. Each tree is trained on its own bootstrap set and evaluated on its OOB set. The maximization of the information gain is the splitting criterion applied in every node. In order to evaluate the information gain, the RF uses only a small number of variables (\(m_{tries}\)) out of all existing variables (M). These \(m_{tries}\) variables are chosen randomly and the splitting criterion is maximized only over them.
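
For illustration, such a forest can be configured as follows with scikit-learn instead of the R package [3] actually used; \(K_{trees}\), \(m_{tries}\) and the tree size map to n_estimators, max_features and max_leaf_nodes, the data are random placeholders, and the parameter values shown are the ones finally adopted in Sect. 4:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(10_000, 120)          # placeholder feature matrix (M = 120)
y = np.random.randint(0, 2, 10_000)      # placeholder labels (tumor / non-tumor)

rf = RandomForestClassifier(
    n_estimators=100,       # K_trees
    max_features=9,         # m_tries variables tried at each split
    max_leaf_nodes=2048,    # upper bound on the tree size (T_nodes)
    criterion="entropy",    # information-gain splitting
    bootstrap=True,         # each tree is grown on its own bootstrap sample
    oob_score=True,         # every tree is evaluated on its out-of-bag instances
    n_jobs=-1,
).fit(X, y)
print("OOB error:", 1.0 - rf.oob_score_)
```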

While the classifier is being built, the RF algorithm evaluates the so-called OOB error. This error is the mean value of the classification error of each tree on its own OOB set. The OOB error is an unbiased estimator of the generalized error (GE) of the classification obtained. Breiman [5] proved the following upper bound on GE:
$$\begin{aligned} GE \le \rho \left( \frac{1}{s^2}-1\right) \end{aligned}$$
(1)
where \(\rho \) is the mean value of the correlation between trees and s stands for the strength of the ensemble. The minimum of GE can be reached by decreasing the correlation between trees and by increasing the classification strength of the ensemble. These two conflicting trends determine the goal of the RF parameter optimization. Determining the appropriate values of these parameters is the objective of the experimental optimization described in Sect. 3.

2.5 Feature Selection

The main part of the model is the evaluation and selection of low-level image features for the segmentation task. In the field of image segmentation, discriminative classifiers are based on several local image features. A more reliable model can be built by using a framework that selects the features according to their importance for classification. In the field of statistical pattern recognition, the selection of such features is a challenging task. In order to create a well-working discriminative model, we have to select the features relevant for our application and eliminate the irrelevant ones. For this purpose we used the variable importance evaluation provided by RF. In the construction of RF classifiers there are two ways to evaluate variable importance: Gini importance and permuted importance [5]. In our algorithm we used only the permuted variable importance, also known as the mean decrease in accuracy.

Because the variable importance values depend on the forest structure, the values obtained are ambiguous: the importance values differ in each round, although the order of importance differs only slightly. In order to increase the relevance of the permuted importance, we have to evaluate it for each variable several times and determine an average importance value. The ranking of these average values is what matters.
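
A sketch of this repeated importance evaluation with scikit-learn; note that its permutation_importance is computed on a supplied dataset rather than on the per-tree OOB samples used by the R package, so this is only an approximation of the procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X = np.random.rand(5_000, 40)            # placeholder data
y = np.random.randint(0, 2, 5_000)

rf = RandomForestClassifier(n_estimators=100, oob_score=True, n_jobs=-1).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]   # most important variables first
```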

One main objective of variable selection is to find a small number of variables appropriate for a good prediction. For this task we distinguish the following steps:
  1. Tune RF for variable importance evaluation;
  2. Evaluate the variable importance order;
  3. Eliminate the least important variables;
  4. Tune RF for classification;
  5. Evaluate the classification performance;
  6. Accept or reject the variable reduction.
In step 3 we have to deal with many instances, each consisting of a large number of features. For this purpose we created our own feature selection algorithm, presented in detail in [12]. The main idea of the algorithm is to evaluate the variable importance several times on a randomly chosen part of the training set. It eliminates the least important 20%–50% of the variables in each run, ensuring that the average OOB error does not exceed the desired limit. In our experiment, after each reduction step, the RF classifiers were trained and evaluated in order to determine the segmentation performance. The further elimination of variables depends on the decrease of the segmentation performance. The proportion of the reduction is empirical; it depends on the number of attributes used and the performance obtained: in the first steps we are able to exclude a large number of attributes, while in the last steps only a few, in correlation with the reachable performance. A sketch of this elimination loop is given below.
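
A compressed sketch of the elimination loop, again with scikit-learn as a stand-in for the R implementation; thresholds and names are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def select_features(X, y, oob_limit=0.06, drop_frac=0.3, n_rounds=5, seed=0):
    """Iteratively drop the least important variables while the OOB error stays low."""
    rng = np.random.default_rng(seed)
    keep = np.arange(X.shape[1])
    while True:
        importance = np.zeros(len(keep))
        for _ in range(n_rounds):                       # average over random subsets
            idx = rng.choice(len(X), size=len(X) // 2, replace=False)
            rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
            rf.fit(X[np.ix_(idx, keep)], y[idx])
            importance += permutation_importance(
                rf, X[np.ix_(idx, keep)], y[idx], n_repeats=3).importances_mean
        order = np.argsort(importance)                  # weakest variables first
        candidate = keep[order[int(len(keep) * drop_frac):]]
        rf = RandomForestClassifier(n_estimators=100, oob_score=True, n_jobs=-1)
        rf.fit(X[:, candidate], y)
        if 1.0 - rf.oob_score_ > oob_limit or len(candidate) == len(keep):
            return keep                                 # reduction rejected, stop
        keep = candidate                                # reduction accepted, continue
```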

We applied the proposed feature selection algorithm in order to select an important feature set for our brain tumor segmentation task. The feature vectors of a single 3D brain image require 10 GB of memory. In order to provide a good enough training set, we had to use the information from at least 50 brain volumes. Thus, the whole training database (\(TDB=I\times F\)) is about 500 GB in size, which is practically unmanageable. There are two ways to reduce this size: reducing the number of instances (I) and reducing the number of features (F). The number of instances can be reduced by randomly subsampling the database. In each image belonging to one brain there are about 1.5 million voxels, of which less than 10% are tumor voxels. Thus, in the first step, we drastically reduced the number of instances belonging to the healthy-brain class in order to balance the dataset: a random subsampling reduced the number of healthy voxels in the training set by a 10:1 ratio, shrinking the TDB to 100 GB. The STDB (sampled training database) was then obtained by randomly subsampling the TDB at a 20:1 ratio, resulting in 5 GB. This size is still too large to be managed by our system (Intel(R) Core(TM) i7-2600K CPU @ 3.40 GHz, 16 GB RAM). The training time of the RF algorithm for a dataset of about 200 MB is 30 min, and it increases steeply with the size of the training database. We used the RF implementation from the R package provided by CRAN [3]. In order to manage such a large amount of memory, we also had to reduce the number of features. Our algorithm was created to manage this big database and to select a set of adequate features for the given segmentation task. We applied our algorithm several times, evaluating the overall OOB error (Table 1). In order to determine the optimal number of attributes (M, the number of all attributes used), we tested the performance of the classifier obtained on UTI (unseen test images, 20 brain image sets). In our experiments we analyzed the behaviour of the OOB error and the Dice index over significant parameter intervals.
Table 1. The effect of parameter M on the classification performance

  M (attributes)   960     480     240     120     80      60      45      30      20
  OOB error        0.0501  0.0508  0.0510  0.0522  0.0532  0.0551  0.0580  0.0635  0.0725

  M (attributes)   240     120     80      60      45      30
  DICE-WT          0.868   0.866   0.848   0.843   0.838   0.828
  DICE-TC          0.865   0.865   0.868   0.860   0.849   0.806

The Dice coefficients obtained in segmentation are presented in Table 1. We started the evaluation of the Dice coefficient only at \(M=240\), because training and testing with more than 240 attributes was too time-consuming. We also observed that the OOB error does not change significantly for \(M\in [240,960]\). Because there is a relation between the OOB error and the Dice coefficient, we consider that the Dice coefficient, similarly, does not change in this interval. The significant decrease of the Dice coefficient occurs at \(M\in [80,120]\) attributes and differs for the analyzed classes: \(M=120\) for WT and \(M=80\) for TC (Fig. 1). We obtained the same interval for both the TDB and UTI sets. According to the results obtained, we chose \(M=120\) attributes for the final classifier.
Fig. 1. OOB error and Dice coefficient against the parameter M

2.6 Post-processing

In post-processing we took into consideration that the tumor is one single connected volume and that the spatial relation between the different tumor classes is \(AT\subset TC \subset WT\). In this way, we were able to eliminate many small volumes that had been interpreted as false detections.
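
A sketch of this post-processing step with scipy: keep only the largest connected WT component and force the nesting \(AT\subset TC\subset WT\) (function names are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi

def postprocess(wt: np.ndarray, tc: np.ndarray, at: np.ndarray):
    """Remove small false detections and enforce the tumor-class hierarchy."""
    components, n = ndi.label(wt)
    if n > 1:
        sizes = ndi.sum(wt, components, index=range(1, n + 1))
        wt = components == (1 + int(np.argmax(sizes)))   # keep the largest volume only
    tc = np.logical_and(tc, wt)                          # TC may not leave WT
    at = np.logical_and(at, tc)                          # AT may not leave TC
    return wt, tc, at
```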

3 RF Optimization

The main task is to find the correlation between the OOB error and the Dice coefficient of the segmentation. To achieve this, we had to optimize the RF parameters in order to obtain an efficient classifier for the segmentation task. There are three parameters that have to be tuned in the RF optimization: the number of trees \(K_{trees}\), the number of features \(m_{tries}\) and the number of nodes \(T_{nodes}\). The minimum of the generalized error (1) can be reached by decreasing the correlation between trees and by increasing the strength of the ensemble.
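
Such a parameter study can be sketched as follows, with scikit-learn as a stand-in for the R implementation; the data are random placeholders, and in our experiments the Dice index was additionally measured on the unseen test images:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(20_000, 120)           # placeholder STDB sample with M = 120 features
y = np.random.randint(0, 2, 20_000)

grids = {
    "n_estimators": [20, 50, 100, 200, 300, 400, 500],     # K_trees
    "max_features": [5, 9, 15, 25, 43, 59],                # m_tries
    "max_leaf_nodes": [256, 512, 1024, 2048, 4096, None],  # T_nodes (None = unpruned)
}
for param, values in grids.items():
    for value in values:
        settings = {"n_estimators": 100, "oob_score": True, "n_jobs": -1, param: value}
        rf = RandomForestClassifier(**settings).fit(X, y)
        print(param, value, "OOB error:", round(1.0 - rf.oob_score_, 4))
```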

A binary decision tree is one of the strongest classifiers described in the literature, but a single tree tends to overfit. This overfitting can be decreased by increasing the number of trees \(K_{trees}\) used in the forest: as \(K_{trees}\) rises, GE decreases and overfitting is avoided. The GE has a lower limit; thus, increasing \(K_{trees}\) further, beyond the stabilization point, leaves GE unchanged and only lengthens the processing time without any gain in error. We analyzed the OOB error and the Dice index against the variation of \(K_{trees}\). The OOB error decreases with the increase of \(K_{trees}\) and reaches its minimum at 300 trees for \(M=80\) and 400 trees for \(M=120\) (Table 2 and Fig. 2).
Table 2. The effect of the number of trees \(K_{trees}\) on the classification performance

  \(K_{trees}\)          20      30      40      50      100     200     300     400     500
  OOB error (M = 120)    0.0651  0.0605  0.0581  0.0568  0.0544  0.0534  0.0533  0.0527  0.0528
  OOB error (M = 80)     0.0655  0.0607  0.0586  0.0575  0.0549  0.0537  0.0531  0.0531  0.0531

  \(K_{trees}\)          50      100     200     300
  DICE-WT                0.864   0.866   0.867   0.867
  DICE-TC                0.894   0.898   0.898   0.899

Fig. 2. OOB error and Dice coefficient against the parameter \(K_{trees}\)

The Dice coefficient reaches its maximum earlier, at about 200 trees for WT and 100 trees for TC; thus a total of 100 trees is adequate for the ensemble, and our final classifier is built with 100 trees.

The second analyzed parameter, \(m_{tries}\) (the number of randomly chosen variables used for splitting in the nodes), determines the correlation between trees. Increasing \(m_{tries}\) makes the trees more and more correlated, and the opposite is also true: by decreasing \(m_{tries}\), the correlation between trees decreases. When \(m_{tries}=M\) (M being the total number of variables), the RF classifier turns into bagging. When \(m_{tries}=1\), the trees are highly uncorrelated, but at the same time they lose their classification strength. It is recommended [9] to perform a coarse evaluation in the interval \(\left( \sqrt{M/2},2\sqrt{M}\right) \) in order to choose an adequate value for \(m_{tries}\). In Table 3 we can observe that the OOB error reaches its minimum at about \(2\sqrt{M}\), and increasing the value further is useless (\(m_{tries}=25\) for \(M=120\), \(m_{tries}=15\) for \(M=80\)). More interesting results can be obtained by analyzing the Dice coefficient against \(m_{tries}\): in the analyzed domain the Dice coefficient reaches its maximum at \(m_{tries}=15\) for WT and \(m_{tries}=11\) for TC (Table 3 and Fig. 3).
Table 3. The effect of \(m_{tries}\) on the classification performance

  \(m_{tries}\)              5     7     9     11    15    19    25    31    43    59
  OOB error [%] (M = 120)    5.67  5.54  5.44  5.39  5.34  5.27  5.25  5.22  5.22  5.22
  OOB error [%] (M = 80)     5.65  5.54  5.49  5.44  5.33  5.33  5.33  5.33  5.33  5.33

  \(m_{tries}\)              9      11     15     19     25
  DICE-WT                    0.850  0.850  0.852  0.849  0.846
  DICE-TC                    0.869  0.872  0.864  0.871  0.867

Fig. 3. OOB error and Dice coefficient against the parameter \(m_{tries}\)

The third parameter is the maximum number of nodes of each tree, \(T_{nodes}\). Theoretically, there is no limitation on the tree size (\(T_{nodes}\)), and there is no generally accepted pruning condition.

In these circumstances each tree grows until every terminal node becomes a pure leaf, where no more splits are possible. This can produce very large trees which occupy unnecessary memory space and lengthen the processing time. Reducing the tree size \(T_{nodes}\) induces new diversity in the ensemble, decreases the correlation between trees and produces a smaller and more efficient classifier. The OOB error against \(T_{nodes}\) is given in Table 4 and Fig. 4.

We can see that the OOB error decreases steadily with the tree size and reaches its minimum for unpruned trees. More important from the point of view of segmentation is the evolution of the Dice coefficient, which does not increase significantly for the last values. The optimal value can be set at 2048 nodes, which is approximately 1/4 of the size of an unpruned tree.
Table 4. The effect of the tree size \(T_{nodes}\) on the classification performance

  \(T_{nodes}\)          64      128     256     512     1024    2048    4096    MAX
  OOB error (M = 120)    0.1246  0.1104  0.0976  0.0871  0.0768  0.0686  0.0600  0.0522
  OOB error (M = 80)     0.1253  0.1124  0.0986  0.0875  0.0773  0.0695  0.0612  0.0532

  \(T_{nodes}\)          512     1024    2048    4096    MAX
  DICE-WT                0.817   0.841   0.864   0.866   0.867
  DICE-TC                0.798   0.823   0.874   0.877   0.879

Fig. 4. OOB error and Dice coefficient against the parameter \(T_{nodes}\)

4 Results

The final classifier was trained on the STDB, which consists of \(500000\, (<5\%)\) randomly sampled instances out of the total of 10 million instances of the TDB, built from 50 sets of HG brain images (chosen from the BRATS 2015 training set). Our optimized classifier is composed of \(K_{trees}=100\) trees, each having a size of \(T_{nodes}=2048\) nodes. The splitting criterion is evaluated with \(m_{tries}=9\) randomly chosen features from the whole set of \(M=120\) features. The classification results obtained on the BRATS 2015 test set are given in Table 5, and some segmentation examples are shown in Fig. 5. The results obtained are comparable with the previously reported results described in [13].
Table 5. Compared Dice indexes

  Classes   Our classifier   BRATS 2012 [13]   BRATS 2013 [13]
  WT        75–91 [%]        63–78 [%]         71–87 [%]
  TC        71–82 [%]        24–37 [%]         66–78 [%]

Fig. 5. Segmentation results (contour line: expert annotation; gray levels: segmented tumor tissues)

5 Conclusion

We are working on further optimization in order to improve the segmentation performance. Analyzing the results obtained on the unseen test images, we can observe that the segmentation is highly accurate. In order to increase the classification performance, we have to build an optimized training set containing all appearances of tumor forms (274 images) from BRATS 2015.

This can be done by adding new instances to the training set while also measuring their proximity to the existing set. Furthermore, the segmentation framework obtained will be extended with new low-level features. The importance of the new features could then be evaluated and compared to the current feature set.

In this manner, we are able to create a better feature set for tumor segmentation. In addition, further post-processing may lead to additional improvement in segmentation performance.

References

  1. Fiji Is Just ImageJ. http://fiji.sc/Fiji
  2. ITK - Insight Segmentation and Registration Toolkit. https://itk.org/
  3. The R project for statistical computing. https://www.r-project.org/
  4. The SICAS Medical Image Repository. https://www.smir.ch/BRATS/Start2015
  5. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
  6. Diaz, I., Boulanger, P., Greiner, R., Murtha, A.: A critical review of the effects of de-noising algorithms on MRI brain tumor segmentation. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, pp. 3934–3937. IEEE (2011)
  7. Geremia, E., Menze, B.H., Ayache, N.: Spatial decision forests for glioma segmentation in multi-channel MR images. In: MICCAI-BRATS Challenge on Multimodal Brain Tumor Segmentation (2012)
  8. Goetz, M., Weber, C., Bloecher, J., Stieltjes, B., Meinzer, H.P., Maier-Hein, M.: Extremely randomized trees based brain tumor segmentation. In: MICCAI-BRATS Challenge on Multimodal Brain Tumor Segmentation (2014)
  9. Goldstein, B.A., Polley, E.C., Briggs, F.: Random forests for genetic association studies. Stat. Appl. Genet. Mol. Biol. 10(1), 1–34 (2011)
  10. Jakab, A.: Segmenting Brain Tumors with the Slicer 3D Software. Manual for providing expert segmentations for the BRATS Tumor Segmentation Challenge. University of Debrecen/ETH Zürich
  11. Lefkovits, L., Lefkovits, S., Vaida, M.F.: An atlas based performance evaluation of inhomogeneity correcting effects. In: The 5th International Conference MACRO, Tîrgu-Mureş, pp. 79–90, March 2015
  12. Lefkovits, L., Lefkovits, S., Vaida, M.F.: Brain tumor segmentation based on random forest. Memoirs of the Scientific Sections of the Romanian Academy, pp. 83–93 (2016)
  13. Menze, B.H., Jakab, A., Bauer, S., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34(10), 1993–2024 (2015)
  14. Reza, S., Iftekharuddin, K.M.: Multi-class abnormal brain tissue segmentation using texture features. In: MICCAI-BRATS Challenge on Multimodal Brain Tumor Segmentation (2013)
  15. Tustison, N., Avants, B., Cook, P., Zheng, Y., Egan, A., Yushkevich, P., Gee, J.: N4ITK: improved N3 bias correction. IEEE Trans. Med. Imaging 29(6), 1310–1320 (2010)
  16. Zikic, D., Glocker, B., Konukoglu, E., Shotton, J., Criminisi, A., Ye, D., Demiralp, C., Thomas, O., Das, T., Jena, R., et al.: Context-sensitive classification forests for segmentation of brain tumor tissues. In: MICCAI-BRATS (2012)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • László Lefkovits (1)
  • Szidónia Lefkovits (2)
  • László Szilágyi (1, 3)

  1. Department of Electrical Engineering, Sapientia University Tîrgu-Mureş, Corunca, Romania
  2. Department of Computer Science, “Petru Maior” University Tîrgu-Mureş, Târgu Mureş, Romania
  3. Department of Control Engineering and Information Technology, Budapest University of Technology and Economics, Budapest, Hungary
