Background

Characterizing and quantifying the collective migration phenotypes of a monolayer of cells in vitro is an important step in understanding physiological processes such as development, wound repair and cancer cell motility. The prevalent approach is to acquire still or time-lapse images using bright field microscopy, followed by manual or automated extraction of quantitative measures of cellular morphology or dynamics (e.g., [1-3]).

The vast numbers of microscopic images acquired in high-throughput studies preclude manual annotation, making automated computational tools indispensable. Indeed, several tools tackling these tasks have recently been reported; some exploit local motion estimation to quantify dynamic intercellular phenomena [4, 5], whereas others are designed to quantify only the global motion of complete colonies or confluent monolayers [6-15]. The computational step common to all approaches is segmentation of an image into cellular and non-cellular regions, the accuracy of which is crucial for all further analysis. This is inherently a foreground-background segmentation task: no explicit single-cell segmentation is performed; rather, each pixel is assigned a binary label marking it as part of either a cellular or a non-cellular region.

The high variability in imaging conditions and cell appearance requires robust algorithms that can handle this diversity automatically, accurately and, preferably, without parameter tuning. Systematically selecting the most appropriate segmentation tool from the available options is difficult [16, 17]: proposed methods are usually evaluated on in-house benchmarks that are not freely available to the public, and these evaluations typically compare accuracy against human annotations but rarely against alternative computational methods, so extant methods are not subjected to a thorough comparative assessment [18].

We therefore propose a uniform framework to benchmark algorithms for multi-cellular segmentation in bright field microscopy images.

Construction and content

A set of 171 manually segmented images of 5 different cell lines at diverse confluence levels, acquired in several laboratories under different imaging conditions, was partitioned into 8 datasets as follows (example images are presented in Figure 1; a detailed description of the cells and imaging conditions can be found on the benchmark website):

  • TScratch: 24 bright field images of confluent cells available at the TScratch site, http://www.cse-lab.ethz.ch/index.php?&option=com_content&view=article&id=363 [6];

  • Melanoma: 20 bright field images of confluent populations of brain metastatic melanoma cells acquired during a wound healing experiment [19];

  • Init: 28 differential interference contrast (DIC) images of confluent DA3 cells, derived from the mouse mammary adenocarcinoma line D1-DMBA-3, acquired during wound healing experiments;

  • SN15: 54 DIC images of confluent DA3 cells acquired during a multi-well wound healing experiment;

  • Scatter: 6 DIC images of Madin-Darby Canine Kidney (MDCK) epithelial cells acquired during a multi-well scatter experiment (unpublished data);

  • Microfluidics: 13 DIC images of MDCK cells grown in a microfluidic plate acquired during a scatter assay experiment with a Hepatocyte growth factor/scatter factor gradient (unpublished data);

  • HEK293: 12 DIC images of confluent HEK293T cells acquired during a multi-well wound healing experiment (unpublished data);

  • MDCK: 14 DIC images of confluent MDCK cells acquired in a multi-well wound healing experiment (unpublished data).

Figure 1. Examples of images from the presented benchmark and their corresponding manual segmentations.

Four of the eight datasets (“TScratch”, “Melanoma”, “Init”, “SN15”) were previously reported in our study comparing TScratch to MultiCellSeg [15]; the other four are published here for the first time. “TScratch” is the only dataset that was already publicly available.

Two freely available tools (TScratch, [6]; MultiCellSeg, [15]) and one implementation of a designated algorithm [10] were evaluated on these datasets; a brief description of these algorithms is found in the Additional file 1: Supporting Text. Each of the algorithms was evaluated using the same set of parameters on all datasets to assess robustness. All raw images, manual segmentations, algorithm segmentation results, performance measurements and an evaluation Matlab script are freely available at the Broad Bioimage Benchmark Collection (BBBC, http://www.broadinstitute.org/bbbc/) [20].

The algorithms were ranked by the mean F-measure (i.e., the harmonic mean of precision and recall) of the pixel labeling across all images in each dataset, $\frac{1}{n}\sum_{i=1}^{n}\mathrm{fmeasure}(\mathrm{Image}_i)$, where $n$ is the number of images in the given dataset. The F-measure is the evaluation measure used for foreground-background datasets in the BBBC. Evaluation results are reported in Table 1, which also presents the median F-measure as a more robust (less outlier-sensitive) summary. Additional file 2: Table S1 displays the average precision and recall measures; Additional file 3: Figure S1 plots the performance on each of the images in each dataset.
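The following minimal Python sketch illustrates how these per-image and per-dataset scores are defined; it is not the benchmark's bbbcCalcMeasures.m script, and the function names (fmeasure, summarize) are our own illustrative choices.

```python
import numpy as np

def fmeasure(pred, truth):
    """Pixel-wise F-measure between a predicted binary mask and a
    ground-truth binary mask (True/1 = cellular region)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # cellular pixels correctly labeled
    fp = np.logical_and(pred, ~truth).sum()  # non-cellular pixels labeled cellular
    fn = np.logical_and(~pred, truth).sum()  # cellular pixels missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def summarize(pred_masks, truth_masks):
    """Mean and median F-measure over the n images of one dataset."""
    scores = [fmeasure(p, t) for p, t in zip(pred_masks, truth_masks)]
    return np.mean(scores), np.median(scores)
```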

Table 1 Evaluation of the three designated tools on the eight available datasets

To assess the baseline variance that should be expected when scoring an algorithm's results, a second expert annotated an arbitrary subset of the images (64 images drawn from all datasets except the “Scatter” dataset). The second annotation was evaluated against the primary ground-truth annotation using the mean and median F-measures. The results are reported in Additional file 4: Table S2 and in the Additional file 1: Supporting Text. The annotators were instructed to pay special attention to small isolated cells and to voids between groups of cells; this is important for some applications and can later be excluded via post-processing if not relevant. Most inconsistencies found were in defining the exact border contours of the cells, as described in the Additional file 1: Supporting Text and in Additional file 5: Figure S2.

Twenty arbitrarily chosen images were selected as a training set for algorithms that apply supervised learning [15] or for adjusting parameter values. Using different arbitrarily selected training images did not significantly change the algorithms' performance. A comprehensive assessment of [10] is presented in Additional file 6: Table S3 and discussed in the Additional file 1: Supporting Text; Table 1 contains the optimal results achieved with a single set of parameters for all datasets.

Utility

The benchmark includes two directories:

  •  train: 20 images (images directory) and the corresponding manual annotations (manual directory). These images can be used as a training set for algorithms that apply supervised learning or for adjusting parameter values.

  •  datasets: 8 different datasets, each consisting of images (images directory), ground truth manual annotations (manual directory), result masks of the 3 algorithms we compare (tscratch, multiCellSeg, topman directories), a measures file (measures.mat) containing the evaluation summary, and a second annotation for a partial set of the images.

    ∘ A documented Matlab script (bbbcCalcMeasures.m) that compares the different algorithms (and can easily be adjusted to compare new algorithms).

The benchmark structure is described in the README file in the main directory.
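As an illustration of how a new algorithm's result masks could be scored against this directory layout, here is a minimal Python sketch; it assumes the images and manual subdirectory names described above, assumes matching file names between result and manual masks, and reuses the fmeasure helper sketched earlier. The benchmark's documented bbbcCalcMeasures.m remains the reference evaluation script.

```python
import os
from glob import glob
import numpy as np
from skimage.io import imread  # any image-reading library would do

def evaluate_dataset(dataset_dir, result_dir):
    """Score one dataset: compare an algorithm's binary result masks in
    result_dir against the manual annotations in <dataset_dir>/manual,
    returning the mean and median pixel-wise F-measure."""
    scores = []
    for manual_path in sorted(glob(os.path.join(dataset_dir, "manual", "*"))):
        name = os.path.basename(manual_path)
        truth = imread(manual_path) > 0                    # ground-truth mask
        pred = imread(os.path.join(result_dir, name)) > 0  # algorithm's mask
        scores.append(fmeasure(pred, truth))               # helper from the sketch above
    return np.mean(scores), np.median(scores)
```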

Discussion

The wound healing assay (also known as the scratch assay), the traditional method for studying collective cell motility and migration in the life sciences [21, 22], is performed by inducing a sudden injury, i.e., removing a sheet of cells from a confluent monolayer [19]. The assay can be performed in multi-well plates with up to 384 wells [23], providing a large amount of data for high-quality quantitative analysis. The scratch is imaged and measured periodically during the healing process, and the rate of change in wound area is recorded and compared across cell lines, environmental conditions or chemical treatments. Quantifying wound healing assays is a natural application of multi-cellular segmentation algorithms, and the availability of a benchmark for evaluating algorithms on a variety of cell lines and imaging conditions will enable educated algorithm selection. The general segmentation of cell clusters in bright field images has additional applications (e.g., quantifying scatter assays [15]), which emphasizes the importance of also evaluating segmentation of non-confluent cell images. The provided dataset is diverse in terms of cell lines, image acquisition parameters and cellular confluence levels, and was collected in several laboratories; it can thus address the need for public access to image repositories [24] as well as the general concern regarding poor algorithmic comparisons [17, 18, 25, 26].
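As a brief illustration of the kind of downstream quantification such segmentations enable, the Python sketch below computes the wound (non-cellular) area fraction per frame and a simple linear estimate of the closure rate; the function names and the linear-fit choice are ours, not part of the benchmark or of the evaluated tools.

```python
import numpy as np

def wound_area_fraction(mask):
    """Fraction of the field of view that is non-cellular (the wound),
    given a binary segmentation mask where True/1 marks cellular pixels."""
    mask = np.asarray(mask, dtype=bool)
    return 1.0 - mask.mean()

def closure_rate(masks, times):
    """Estimate the wound-closure rate (area fraction per unit time) as the
    negated slope of a linear fit to wound area over the time-lapse frames."""
    areas = [wound_area_fraction(m) for m in masks]
    slope, _intercept = np.polyfit(times, areas, 1)
    return -slope  # positive when the wound is closing
```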

Conclusions

A variety of software tools and imaging apparatuses exist to enable high-throughput multi-cellular segmentation in bright field images. This is the first, and currently the only, freely available public annotated dataset for such evaluations. We encourage scientists to evaluate new algorithms on it and to contribute additional annotated datasets to this benchmark.

Availability and requirements

All raw images, manual segmentations and evaluations are freely available at the Broad Bioimage Benchmark Collection (BBBC), http://www.broadinstitute.org/bbbc/BBBC019/.