Effective automated pipeline for 3D reconstruction of synapses based on deep learning
Abstract
Background
The locations and shapes of synapses are important in reconstructing connectomes and analyzing synaptic plasticity. However, current synapse detection and segmentation methods are still not adequate for accurately acquiring the synaptic connectivity, and they cannot effectively alleviate the burden of synapse validation.
Results
We propose a fully automated method that relies on deep learning to realize the 3D reconstruction of synapses in electron microscopy (EM) images. The proposed method consists of three main parts: (1) training and employing the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (2) using the z-continuity of synapses to reduce false positives, and (3) combining the Dijkstra algorithm with the GrabCut algorithm to obtain the segmentation of synaptic clefts. The experimental results, validated against manual tracking in both anisotropic and isotropic EM volumes, demonstrate the effectiveness of our algorithm, and the average precision of our detection (92.8% in anisotropy, 93.5% in isotropy) and segmentation (88.6% in anisotropy, 93.0% in isotropy) suggests that our method achieves state-of-the-art results.
Conclusions
Our fully automated approach contributes to the development of neuroscience, providing neurologists with a rapid approach for obtaining rich synaptic statistics.
Keywords
Electron microscope, Synapse detection, Deep learning, Synapse segmentation, 3D reconstruction of synapses
Abbreviations
ATUM-SEM: Automated tape-collecting ultramicrotome scanning electron microscopy
AP: Average precision
CAS: Chinese Academy of Sciences
DNNs: Deep neural networks
EM: Electron microscopy
EPFL: École Polytechnique Fédérale de Lausanne
FCN: Fully convolutional network
FIB-SEM: Focused ion beam scanning electron microscopy
GMM: Gaussian mixture model
MLS: Moving least squares
MSER: Maximally stable extremal region
R-CNN: Region-based convolutional neural network
RPN: Region proposal network
SBEM: Serial block-face scanning electron microscopy
SIFT: Scale-invariant feature transform
ssTEM: Serial section transmission electron microscopy
SVM: Support vector machine
TMI: Transactions on Medical Imaging
Background
A synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron, and it plays an important role in the nervous system. If we consider the brain network to be a map of connections, then neurons and synapses can be considered as the dots and lines, respectively, and the synapse can thus be regarded as one of the key factors for researching connectomes [1, 2, 3]. In addition, synaptic plasticity is associated with learning and memory: sensory experience, motor learning and aging have been found to induce alterations in presynaptic axon boutons and postsynaptic dendritic spines [4, 5, 6]. Consequently, understanding the mechanism of synaptic plasticity will be conducive to the prevention and treatment of brain diseases. To study the correlation between synaptic growth and plasticity and to reconstruct neuronal connections, it is necessary to obtain the number, location and structure of synapses in neurons.
According to the classification of synaptic nerve impulses, there are two types of synapses: chemical synapses and electrical synapses. In this study, we focus on the chemical synapse, which consists of a presynaptic (axonal) membrane, a postsynaptic (dendritic) membrane and a 30–60 nm synaptic cleft. Optical microscopy cannot provide sufficient resolution to reveal these fine structures. Fortunately, it is now possible to examine the synapse structure more closely due to the rapid development of electron microscopy (EM). In particular, focused ion beam scanning electron microscopy (FIB-SEM) [7] can provide nearly 5-nm imaging resolution, which is conducive to obtaining the very fine details of ultrastructural objects; however, this technique is either limited to a small section size (0.1 mm × 0.1 mm) or produces blurred images.
By contrast, automated tape-collecting ultramicrotome scanning electron microscopy (ATUM-SEM) [8] offers anisotropic voxels with a lower imaging resolution in the z direction (2 nm × 2 nm × 50 nm), but it is capable of working with large-area sections (2.5 mm × 6 mm). Moreover, ATUM-SEM does not damage any sections; thus, the preserved sections can be imaged and analyzed many times. Considering volume and resolution, this paper employs ATUM-SEM and FIB-SEM image stacks to verify the validity and feasibility of our algorithms.
Note that EM images with higher resolution inevitably produce more data for the same volume; thus, synapse validation requires a vast amount of laborious and repetitive manual work. Consequently, an automated synapse reconstruction pipeline is essential for analyzing large volumes of brain tissue [9]. Prior works on synapse detection and segmentation investigated a range of approaches. Mishchenko et al. [10] developed a synaptic cleft recognition algorithm to detect postsynaptic densities in serial block-face scanning electron microscopy (SBEM) [11] image stacks. However, this method was effective for synapse detection only if the prior neuron segmentation was satisfactory. Navlakha et al. [12] presented an original experimental technique for selectively staining synapses, and then utilized a semi-supervised method to train classifiers such as support vector machine (SVM), AdaBoost and random forest to identify synapses. Similarly, Jagadeesh et al. [13] presented a new method for synapse detection and localization. This method first characterized synaptic junctions as ribbons, vesicles and clefts, and then used maximally stable extremal regions (MSER) to design a detector to locate synapses. However, all these works [10, 12, 13] ignored the contextual information of synapses.
For the above reasons, Kreshuk et al. [14] presented a contextual approach for automated synapse detection and segmentation in FIB-SEM image stacks. This approach adopted 35 appearance features, such as the magnitude of the Gaussian gradient, the Laplacian of Gaussian, the Hessian matrix and the structure tensor, and then employed a random forest classifier to produce synapse probability maps. Nevertheless, this approach neglected the asymmetric information produced by the presynaptic and postsynaptic regions, which led to some inaccurate results. Becker et al. [15] utilized contextual information and different Gaussian kernel functions to calculate synaptic characteristics, and then employed these features to train an AdaBoost classifier to obtain synaptic clefts in FIB-SEM image stacks. Similarly, Kreshuk et al. [16] proposed an automated approach for synapse segmentation in serial section transmission electron microscopy (ssTEM) [17] image stacks. The main idea was to classify synapses from 3D features and then segment them using the Ising model and an object-level feature classifier. Ref. [16] did not require prior segmentation and achieved a good error rate. Sun et al. [18] focused on synapse reconstruction in anisotropic image stacks acquired through ATUM-SEM; they detected synapses with cascade AdaBoost and then utilized continuity to delete false positives. Subsequently, variational region growing [19] was adopted to segment synaptic clefts. However, the detection accuracies of Refs. [16] and [18] were not satisfactory, and the segmentation results lacked smoothness.
Deep neural networks (DNNs) have recently been widely applied to medical imaging detection and segmentation problems [20, 21, 22, 23] due to their extraordinary performance. Thus, the application of DNNs to synapse detection in EM data holds great promise. Roncal et al. [24] proposed a deep learning classifier (VESICLE-CNN) to segment synapses directly from EM data without any prior knowledge of the synapse. Staffler et al. [25] presented SynEM, which focused on classifying borders between neuronal processes as synaptic or non-synaptic and relied on prior neuron segmentation. Dorkenwald et al. [26] developed the SyConn framework, which used deep learning networks and random forest classifiers to obtain the connectivity of synapses.
Method
Additional file 1: Video S1. 3D reconstruction result in Fig. 2. (MOV 946 kb)
The proposed image registration method for serial sections of biological tissue is divided into three parts: searching for correspondences between adjacent sections, calculating displacements for the identified correspondences, and warping the image tiles based on the new positions of these correspondences. For the correspondence search, we adopted the SIFT flow algorithm [31] to search for correspondences between adjacent sections by extracting equally distributed grid points on the well-aligned adjacent sections. For the displacement calculation, the positions of the identified correspondences were adjusted throughout all sections by minimizing a target energy function consisting of a data term, a small-displacement term, and a smoothness term. The data term keeps pairs of correspondences at the same positions in the xy plane after displacement. The small-displacement term constrains the correspondence displacements to minimize image deformation. The smoothness term constrains the displacements of neighboring correspondences. For the image warping, we used the moving least squares (MLS) method [32] to warp each section with the obtained positions. The deformation results produced by MLS are globally smooth and retain the shape of biological specimens; a similar observation can be found in Ref. [33]. This image registration method not only reflects the discontinuity around wrinkle areas but also retains smoothness in other regions, which provides a stable foundation for follow-up work.
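The three-term energy described above can be sketched as follows. This is a minimal illustration of the structure of such an objective, not the paper's implementation; the weights `alpha` and `beta` and the neighborhood definition are assumptions.

```python
import numpy as np

def registration_energy(P, P0, pairs, neighbors, alpha=1.0, beta=1.0):
    """Toy energy for correspondence displacement (illustrative weights).

    P         : (N, 2) current xy positions of grid correspondences
    P0        : (N, 2) original positions (displacement is P - P0)
    pairs     : (i, j) index pairs matched across adjacent sections
    neighbors : (i, j) index pairs that are neighbors within one section
    """
    # Data term: matched correspondences should share the same xy position.
    data = sum(np.sum((P[i] - P[j]) ** 2) for i, j in pairs)
    # Small-displacement term: discourage large deformation overall.
    small = alpha * np.sum((P - P0) ** 2)
    # Smoothness term: neighboring points should move similarly.
    smooth = beta * sum(np.sum(((P[i] - P0[i]) - (P[j] - P0[j])) ** 2)
                        for i, j in neighbors)
    return data + small + smooth
```

Minimizing this objective over all section positions jointly, followed by MLS warping, yields the globally smooth alignment described above.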
Synapse detection with deep learning
The deep learning network was implemented using the Caffe [35] deep learning library (the process of training the Faster R-CNN is shown in Appendix 2). During training, Faster R-CNN was optimized by the stochastic gradient descent (SGD) algorithm with the following optimization hyperparameters: weight decay = 0.0005, momentum = 0.9, gamma = 0.1, and learning rate = 0.0001 for numerical stability. The mini-batch size and number of anchor locations were set to 128 and 2400, respectively. In addition to ZF [36] and VGG16 [37], we also applied ResNet50 [38] as the shared FCN to train Faster R-CNN. It took nearly 20–28 hours to train the network for 80,000 iterations on a GeForce Titan X GPU.
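The hyperparameters above correspond to the standard SGD-with-momentum update, in which weight decay adds an L2 penalty to the gradient and gamma multiplies the learning rate at scheduled iterations. A minimal sketch of one parameter update (not Caffe's internal code) is:

```python
def sgd_step(w, grad, velocity, lr=0.0001, momentum=0.9, weight_decay=0.0005):
    """One SGD-with-momentum step on a scalar parameter.

    weight_decay adds the L2-regularization gradient; lr would be
    multiplied by gamma (0.1) at scheduled iterations during training.
    """
    g = grad + weight_decay * w              # gradient plus L2 penalty
    velocity = momentum * velocity - lr * g  # momentum accumulation
    return w + velocity, velocity
```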
Given the detection results of the small images, it is easy to gather all detections and obtain the final detection results for an original image. However, synapses are randomly distributed in EM images, and one synapse may appear in two adjacent small images. In this case, this method might lead to duplicate detections, which reduce the detection precision, as illustrated in Fig. 4b. Therefore, an effective detection-box fusion method is proposed to solve this challenge. Through observation and analysis, we find that the distribution of synapses is sparse. Suppose that there are \(\mathcal {N}_{i}\) synapse detection boxes in the ith section, \(\mathcal {S}_{i,j}\) represents the jth synapse detection box in the ith section, and \(\left (c_{i,j}^{1},c_{i,j}^{2}\right)\) and \(\left (c_{i,j}^{3},c_{i,j}^{4}\right)\) are the upper-left and lower-right coordinates of \(\mathcal {S}_{i,j}\), respectively. If two synapse detection boxes are close enough or even overlap, they might be duplicate detections. A direct evaluation criterion for duplicate detections is therefore the distance between synapses in the same section. The main procedure for the ith section is illustrated in Algorithm 1. In lines 11 and 12 of Algorithm 1, \(\left (c_{i,j}^{^{\prime }1},c_{i,j}^{{^{\prime }2}}\right)\) and \(\left (c_{i,j}^{^{\prime }3},c_{i,j}^{^{\prime }4}\right)\) are the upper-left and lower-right coordinates of the updated \(\mathcal {{S^{\prime }}}_{i,j}\), respectively.
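The core idea of the fusion step, merging detection boxes whose centers lie within a distance threshold, can be sketched as follows. This is an illustrative simplification; the paper's Algorithm 1 may differ in details such as the merge rule.

```python
def fuse_boxes(boxes, dist_thresh=100.0):
    """Merge detection boxes whose centers are within dist_thresh.

    boxes: list of (x1, y1, x2, y2) with (x1, y1) the upper-left and
    (x2, y2) the lower-right corner. A merged box takes the union
    (min corner / max corner) of the pair, removing duplicates.
    """
    merged = []
    for box in boxes:
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        for k, m in enumerate(merged):
            mx, my = (m[0] + m[2]) / 2, (m[1] + m[3]) / 2
            if ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5 < dist_thresh:
                merged[k] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(tuple(box))
    return merged
```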
Screening method with z-continuity
In contrast, false positives only appear in one or two layers. Therefore, we utilized z-continuity to eliminate false positives. Specifically, if a synapse detection box appears L times or more in the same area of 2L−1 continuous layers, it can be considered a real synapse; otherwise, it is regarded as a false positive. The principle is described in Algorithm 2.
In line 11 of Algorithm 2, [·] denotes the indicator function. When T satisfies T≥L, we can confirm with high probability that the object detected by Faster R-CNN is a synapse; thus, this detection result \(\mathcal {S}_{i,j}\), with an index in the 3D view \(\mathcal {S^{\prime \prime }}_{n}(n= 1,2,\dots)\), will remain. Otherwise, it will be removed.
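The screening rule above can be sketched as a simple window count over the layers in which a box was detected. This illustration omits Algorithm 2's xy-distance matching and only shows the z-continuity criterion:

```python
def is_real_synapse(layer_hits, L=3):
    """Keep a detection if it appears at least L times within any window
    of 2L-1 consecutive layers.

    layer_hits: sorted layer indices where the box was detected in
    (roughly) the same xy area. Sketch of the screening rule only.
    """
    window = 2 * L - 1
    for start in layer_hits:
        # Count detections falling inside [start, start + window - 1].
        count = sum(1 for z in layer_hits if start <= z < start + window)
        if count >= L:
            return True
    return False
```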
Synapse segmentation using GrabCut
First, we converted the original detection images into binary images using an adaptive threshold. On this basis, erosion and dilation operations were employed to eliminate noise and obtain synaptic clefts. After morphological processing, synaptic clefts can be approximately located. Since the shapes of most synaptic clefts are similar to quadratic curves, suitable curves are fitted to the structure of the synaptic clefts to obtain more refined results. We randomly selected m points p_{i}=(x_{i},y_{i}), 1≤i≤m, from the image after morphological processing, where m is empirically defined as one third of the number of white points in the corresponding image. Subsequently, we employed them to fit the quadratic curve y=ax^{2}+bx+c. Consequently, a series of synaptic clefts are represented as quadratic curves. Finally, we selected the starting point p_{1} and the ending point p_{n} from the two ends of each fitted curve, and then calculated the shortest path [28] between p_{1} and p_{n}.
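The curve-fitting step can be sketched with a least-squares quadratic fit; the endpoint selection here (taking the extreme x values of the sampled pixels) is an illustrative assumption about how the two ends of the fitted curve are chosen.

```python
import numpy as np

def fit_cleft_curve(points):
    """Fit y = a*x^2 + b*x + c to candidate cleft pixels by least squares
    and return the fitted endpoints used to seed the shortest-path step.

    points: (m, 2) array of (x, y) pixel coordinates sampled from the
    morphologically processed mask.
    """
    x, y = points[:, 0], points[:, 1]
    a, b, c = np.polyfit(x, y, 2)     # least-squares quadratic fit
    x1, x2 = x.min(), x.max()         # two ends of the fitted curve
    p_start = (x1, a * x1 ** 2 + b * x1 + c)
    p_end = (x2, a * x2 ** 2 + b * x2 + c)
    return (a, b, c), p_start, p_end
```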
Note that the obtained shortest path is only a curve rather than a segmentation result, and sometimes the dilated results of the fitted curve and shortest path cannot effectively fit the various synaptic clefts, as shown in Fig. 6; therefore, an effective segmentation algorithm has to be introduced. Motivated by previous research [29], we propose to use the GrabCut algorithm for fine segmentation.
In this work, p(·) denotes the Gaussian probability distribution, and π(·) represents the mixture weight.
GraphCut [41] performs a one-time minimization, whereas GrabCut performs an iterative minimization in which each iteration refines the GMM parameters for image segmentation. We initialize the trimap T={T_{S},T_{B},T_{U}} by selecting the rectangular box. The pixels outside the box belong to the background T_{B}, whereas the pixels inside the box indicate “they might be synapses” and belong to T_{U}, and T_{S} denotes synapse pixels. To obtain a better result, users can draw masking areas inside the box with a synapse brush and a background brush, where the pixels in different masking areas are regarded as different classes. The detailed procedures for synapse segmentation using GrabCut are described in Algorithm 3.
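The trimap initialization described above can be sketched as follows. In practice one would pass such a labeling to a GrabCut implementation (e.g., OpenCV's `cv2.grabCut`) for the iterative GMM refinement; the label codes here are an illustrative convention, not the paper's exact values.

```python
import numpy as np

# Illustrative label codes for the trimap T = {T_S, T_B, T_U}.
T_B, T_S, T_U = 0, 1, 2   # background, synapse, unknown

def init_trimap(shape, rect):
    """Initialize a GrabCut trimap from a detection rectangle.

    shape: (H, W) of the image; rect: (x1, y1, x2, y2) detection box.
    Pixels outside the box are definite background (T_B); pixels inside
    are unknown (T_U) until the iterative GMM refinement labels them.
    Brush strokes would overwrite regions with T_S or T_B.
    """
    trimap = np.full(shape, T_B, dtype=np.uint8)
    x1, y1, x2, y2 = rect
    trimap[y1:y2, x1:x2] = T_U
    return trimap
```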
Results and Discussion
Algorithm parameters and values
Parameter  Symbol  Value
Fusion distance threshold  𝜗_{1}  100
  𝜗_{2}  50
z-continuity layers  L_{1}  3
  L_{2}  20
z-continuity distance threshold  υ_{1}  200
  υ_{2}  100
GrabCut iterations  T_{G}  10
Gaussian mixture components  K  5
We first present the datasets and evaluation methods. Subsequently, we adopt precision-recall curves to evaluate the performance of our detection and segmentation methods. Then, average precision (AP), F1 score and Jaccard index are employed for further quantitative analyses. Finally, we present and analyze the reconstruction results of synapses.
Datasets and evaluation method
In this work, the specimens and ATUM-SEM sections of mouse cortex were provided by the Institute of Neuroscience, Chinese Academy of Sciences (CAS). The physical sections were imaged using an SEM (Zeiss Supra55) with a voxel size of 2 nm × 2 nm × 50 nm and a dwell time of 2 μs by the Institute of Automation, CAS. The dataset of rat hippocampus^{1} was acquired by Graham Knott and Marco Cantoni at École Polytechnique Fédérale de Lausanne (EPFL). It is made publicly available to accelerate neuroscience research, and the resolution of each voxel is approximately 5 nm × 5 nm × 5 nm.
Illustration of two datasets
Dataset  EM  Voxel size (nm^{3})  Train size  Test size 

(A) Cortex  ATUM-SEM  2 × 2 × 50  8624 × 8416 × 30  7616 × 8576 × 178
(B) Hippocampus  FIB-SEM  5 × 5 × 5  1024 × 768 × 165  1024 × 768 × 165
Precision and recall. In this work, precision is the probability that detected synapses are correct, and recall is the probability that true synapses are successfully detected:
$$ precision = \frac{true\;positives}{true\;positives + false\;positives}, \tag{4} $$
$$ recall = \frac{true\;positives}{true\;positives + false\;negatives}. \tag{5} $$
Average precision. AP denotes the area under the precision-recall curve, where P represents precision and R indicates recall:
$$ AP=\int_{0}^{1}P\left(R\right)dR. \tag{6} $$
F1 score. Since precision and recall are often contradictory, the F1 score is the harmonic mean of precision and recall, which shows the comprehensive performance of a method:
$$ F1\;score = \frac{2\times P \times R}{P + R}. \tag{7} $$

Jaccard index. This metric is also known as the VOC score [42]; it calculates the pixel-wise overlap between the ground truth (Y) and the segmentation result (X):
$$ Jaccard\left (X,Y \right)=\frac{\left | X\cap Y \right |}{\left | X\cup Y \right |}. $$
Motivated by Ref. [43], we define a detection or segmentation result as a true positive only if the overlap between the region of the result and the corresponding ground truth reaches at least 70%.
Furthermore, we use the dilated segmentation result (X^{1}) and dilated ground truth (Y^{1}) to calculate the precision and recall and to obtain the AP at 1-pixel overlap. For 3-pixel and 5-pixel overlap, the ground truth and segmentation result are dilated by three and five pixels, respectively.
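The metrics defined above (Eqs. 4, 5 and 7, plus the Jaccard index) can be computed directly from detection counts and pixel sets; a minimal sketch, with pixels represented as coordinate tuples for illustration:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from detection counts (Eqs. 4, 5, 7)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def jaccard_index(X, Y):
    """Pixel-wise Jaccard index between segmentation X and ground truth Y,
    each given as an iterable of pixel coordinates."""
    X, Y = set(X), set(Y)
    return len(X & Y) / len(X | Y)
```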
Detection accuracy
In this subsection, we evaluate the detection performance of our approach and compare it with Refs. [15, 18, 23] on different datasets in terms of precision-recall curves, AP and F1 score.
Detection results of Faster R-CNN based on different models
Dataset  EM  Size  Model  AP  Rate 

(A) Cortex  ATUM-SEM  1000×1000  ZF  82.0%  9 fps
  VGG16  83.2%  3 fps
  ResNet50  84.1%  4 fps
(B) Hippocampus  FIB-SEM  500×500  ZF  86.8%  36 fps
  VGG16  87.4%  12 fps
  ResNet50  90.9%  16 fps
Detection performance for different thresholds L on the ATUM-SEM dataset
Threshold  L=2  L=3  L=4  L=5  L=6  L=7

AP  77.8  86.8  83.3  75.2  70.1  66.2 
For the ATUM-SEM dataset, performance decreases significantly when fusion and z-continuity are ignored. For the FIB-SEM dataset, the difference between synapses and other subcellular structures is significant due to its small area and simple scene. Thus, our approach achieves a higher average detection precision there, and the improvement from the screening method is modest.
Quantitative detection performance for two EM datasets
Metrics  Sun [18]  U-Net [23]  FRCNN  FRCNN + fusion  FRCNN + continuity  FRCNN + fusion + continuity
ATUM-SEM dataset
AP  67.6%  74.9%  84.1%  89.7%  86.8%  92.8%
F1  64.8%  74.8%  79.3%  83.6%  83.9%  89.2%
FIB-SEM dataset
Metrics  Becker [15]  U-Net [23]  FRCNN  FRCNN + fusion  FRCNN + continuity  FRCNN + fusion + continuity
AP  90.6%  82.4%  90.9%  91.9%  92.4%  93.5%
F1  88.4%  82.0%  88.5%  90.6%  89.2%  91.4%
Segmentation accuracy
Segmentation performance for the ATUM-SEM dataset
Size of pixel overlap  Metrics  Sun [18]  U-Net [23]  FRCNN + Growing [19]  FRCNN + GrabCut  FRCNN + fusion + continuity + GrabCut
0-pixel  AP  32.7%  55.4%  49.0%  57.6%  65.6%
  F1  36.3%  58.1%  53.4%  60.6%  65.2%
  Jaccard  30.6%  49.5%  43.2%  52.6%  58.1%
3-pixel  AP  56.3%  67.4%  67.8%  78.8%  86.1%
  F1  60.6%  66.9%  69.2%  75.8%  81.5%
  Jaccard  50.1%  59.0%  59.8%  69.0%  74.7%
5-pixel  AP  59.4%  70.6%  76.4%  80.3%  88.6%
  F1  62.7%  70.1%  74.4%  77.5%  84.1%
  Jaccard  53.7%  60.2%  63.5%  72.2%  76.9%
Segmentation performance for the FIB-SEM dataset
Size of pixel overlap  Metrics  Becker [15]  U-Net [23]  FRCNN + Growing [19]  FRCNN + GrabCut  FRCNN + fusion + continuity + GrabCut
0-pixel  AP  79.3%  70.3%  66.8%  79.4%  82.8%
  F1  78.2%  68.9%  69.4%  77.5%  82.9%
  Jaccard  61.5%  54.1%  53.5%  65.2%  70.9%
3-pixel  AP  88.0%  78.8%  74.0%  88.1%  91.6%
  F1  84.4%  77.2%  74.6%  85.4%  89.4%
  Jaccard  74.6%  60.9%  60.2%  75.7%  79.6%
5-pixel  AP  90.2%  80.6%  79.5%  90.8%  93.0%
  F1  86.6%  79.3%  78.9%  87.8%  91.1%
  Jaccard  80.3%  67.5%  67.7%  80.6%  83.7%
3D visualization
Additional file 2: Video S2. 3D reconstruction of synapses on the ATUMSEM dataset. (MOV 13,499 kb)
Additional file 3: Video S3. 3D reconstruction of synapses on the FIBSEM dataset. (MOV 2275 kb)
Computational efficiency
Discussion
As mentioned in this section, our approach outperforms the approaches of Refs. [15, 18, 19, 23] on several standard metrics. However, note that the results of Ref. [15] in this paper are lower than those reported in the original TMI paper. There are two possible reasons for this discrepancy. The first is that the authors of Ref. [15] provide no ground truth, so we annotated it ourselves. The second is that our performance measurements differ from those in Ref. [15].
The synapse is a flat 3D structure, and the screening method with z-continuity has indeed reduced false positives in our work, which demonstrates the importance of 3D information in the synapse detection problem. In addition, inspired by promising results [45, 46], it can be speculated that a 3D network could effectively preserve and extract 3D spatial information from volumetric data. Therefore, we believe that extending the 2D R-CNN to a 3D one could help improve the detection accuracy, and we plan to design a 3D Faster R-CNN network to detect synapses in EM datasets.
Conclusion
In this paper, we propose an effective approach to reconstruct synapses with deep learning. Our strategy is to utilize Faster R-CNN to detect the regions of synapses and then employ z-continuity to reduce false positives. Subsequently, the shortest-path and GrabCut algorithms are employed to obtain the synaptic clefts. Finally, we utilize our approach for the 3D reconstruction of synapses in isotropic and anisotropic datasets. The experimental results demonstrate that our algorithm enhances the precision of detection and guarantees the accuracy of segmentation, which will promote efficiency in synapse validation and benefit connectomics and synaptic plasticity analysis. Furthermore, we apply our approach to neuroscience experiments. Our automated approach helps neurologists quickly identify the number of synapses and multi-synapses in different experimental specimens, and further analyses reveal a correlation between spine formation and the responses of fear-conditioned animals [4].
Despite the promising segmentation results of our approach, the segmentation process is somewhat tedious, and the efficiency and accuracy of traditional segmentation algorithms leave room for improvement. For this task, future work will focus on detection and segmentation using end-to-end 3D deep neural networks, which will enhance both the speed and accuracy of the synapse reconstruction algorithm.
Appendix 1: The architecture of shared FCN layers and RPN
Faster R-CNN mainly consists of two modules: the first is the RPN, which generates region proposals; the second is Fast R-CNN [34], which classifies the region proposals into different categories. In practice, the architectures of the RPN and Fast R-CNN are fixed, and ZF [36] and VGG16 [37] are applied as the shared FCN to train the whole network. Here we introduce the shared FCN and RPN modules in detail.
The architecture of the shared FCN layers (e.g., ZF) and the RPN is shown in Fig. 14. The structure above the dotted line is the ZF network, and the structure below the dotted line is the RPN. Assume the size of the input image is 224 × 224 × 3. The parameters of the Conv1 layer are as follows: the kernel size is 7 × 7, the number of feature-map channels is 96, and the padding and stride sizes are 5 and 2, respectively. In ZF, each convolution layer is followed by batch normalization and a rectified linear unit (ReLU), and the size of the output feature maps is 14 × 14 × 256. The RPN first applies a 3 × 3 convolutional kernel (sliding window) to the input feature maps. At the center of the sliding window (illustrated in Fig. 15), there are k (k=9) anchor boxes with different scales (128, 256, 512) and aspect ratios (1:1, 1:2, 2:1), which are used to handle multi-scale targets in detection. Next, the cls layer and reg layer are employed for classification and bounding-box regression, respectively. The cls layer generates two elements to determine the probability of each candidate, while the reg layer produces four coordinate elements (x,y,w,h) to identify the location of each candidate. Finally, according to the probabilities of the candidates, the RPN selects the top 300 region proposals as the input of Fast R-CNN for further classification.
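The k = 9 anchors at one sliding-window center can be generated as follows, using the standard Faster R-CNN convention that each anchor has area scale² and the stated aspect ratio; this is a sketch, not the paper's code.

```python
def generate_anchors(center, scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Generate the k = 9 RPN anchor boxes at one sliding-window center.

    Each anchor has area scale**2 and aspect ratio w:h equal to `ratio`
    (1:1, 1:2, 2:1); boxes are returned as (x1, y1, x2, y2).
    """
    cx, cy = center
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * (r ** 0.5)   # width and height chosen so w * h == s * s
            h = s / (r ** 0.5)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```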
Appendix 2: The process of training the Faster RCNN
Since the RPN and Fast R-CNN share the same feature extraction network, we utilized the alternating training method to train the network with shared features. The training process is as follows:
Step 1: Initialize the RPN parameters using an ImageNet pre-trained model, and then fine-tune the RPN.
Step 2: Utilize the region proposals generated by the step-1 RPN to train the Fast R-CNN detection network. In this process, an ImageNet pre-trained model is also used to initialize the Fast R-CNN parameters.
Step 3: Keep the shared FCN layers fixed. Subsequently, use the step-2 Fast R-CNN network and the training data to re-initialize the RPN, fine-tuning only the layers unique to the RPN.
Step 4: Employ the step-3 RPN and the training data to fine-tune Fast R-CNN with the shared FCN layers fixed.
The above four steps enable the RPN and Fast R-CNN to share the convolutional layers and form a unified Faster R-CNN network.
Footnotes
Acknowledgements
We would like to thank the editors and reviewers for their valuable comments and suggestions that helped improve the original version of this paper, thank Dr. Yu Kong, Yang Yang and Danqian Liu (Institute of Neuroscience, CAS) for sample preparation and sectioning. We would also like to thank Mr. Lixin Wei and colleagues (Institute of Automation, CAS) for use of the Zeiss Supra55 Scanning Electron Microscope and technical support.
Funding
Data collection received support from Scientific Instrument Developing Project of Chinese Academy of Sciences (No. YZ201671). Analysis and interpretation received support from National Science Foundation of China (No. 61201050, No. 11771130). Design and writing of the manuscript received support from Special Program of Beijing Municipal Science and Technology Commission (No. Z161100000216146).
Availability of data and materials
The data and source code in this paper is available upon request.
Authors’ contributions
CX conceived and designed the method, developed the algorithm and wrote the paper. WL, HD implemented the screening algorithm, conducted the experiments and wrote the paper. XC developed the image registration method. HH and QX conceived the method and gave the research direction and feedback on experiments and results. YY provided mouse cortex dataset and was instrumental in analysing the results. All authors contributed to, read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1.Briggman KL, Helmstaedter M, Denk W. Wiring specificity in the directionselectivity circuit of the retina. Nature. 2011; 471(7337):183–8.CrossRefPubMedGoogle Scholar
 2.Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, Denk W. Corrigendum: Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature. 2013; 500(7461):168–74.CrossRefPubMedGoogle Scholar
 3.Kasthuri N, Narayanan KJ, Hayworth DR, et al. Saturated reconstruction of a volume of neocortex. Cell. 2015; 162(3):648–61.CrossRefPubMedGoogle Scholar
 4.Yang Y, Liu DQ, Huang W, Deng J, Sun Y, Zuo Y, Poo MM. Selective synaptic remodeling of amygdalocortical connections associated with fear memory. Nat Neurosci. 2016; 19(10):1348–55.CrossRefPubMedGoogle Scholar
 5.Hofer SB, MrsicFlogel TD, Bonhoeffer T, Hübener M. Experience leaves a lasting structural trace in cortical circuits. Nature. 2009; 457(7227):313–7.CrossRefPubMedGoogle Scholar
 6.Zuo Y, Yang G, Kwon E, Gan WB. Longterm sensory deprivation prevents dendritic spine loss in primary somatosensory cortex. Nature. 2005; 436(7048):261–5.CrossRefPubMedGoogle Scholar
 7.Knott G, Marchman H, Wall D, Lich B. Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. J Neurosci. 2008; 28(12):2959–64.CrossRefPubMedGoogle Scholar
 8.Hayworth KJ, Kasthuri N, Schalek R. Automating the collection of ultrathin serial sections for large volume TEM reconstructions. Microsc Microanal. 2006; 12(S02):86–7.CrossRefGoogle Scholar
 9.Lee PC, Chuang CC, Chiang AS, Ching YT. Highthroughput computer method for 3d neuronal structure reconstruction from the image stack of the Drosophila brain and its applications. PLoS Comput Biol. 2012; 8(9):e1002658.CrossRefPubMedPubMedCentralGoogle Scholar
 10.Mishchenko Y, Hu T, Spacek J, Mendenhall J, Harris KM Chklovskii ADB. Ultrastructural analysis of hippocampal neuropil from the connectomics perspective. Neuron. 2010; 67(6):1009–20.CrossRefPubMedPubMedCentralGoogle Scholar
 11.Denk W, Horstmann H. Serial blockface scanning electron microscopy to reconstruct threedimensional tissue nanostructure. PLoS Biol. 2004; 2(11):1900–9.CrossRefGoogle Scholar
 12.Navlakha S, Suhan J, Barth AL, Barjoseph Z. A highthroughput framework to detect synapses in electron microscopy images. J Bioinform. 2013; 29(13):9–17.CrossRefGoogle Scholar
 13.Jagadeesh V, Anderson J, Jones B, Marc R, Fisher S, Manjunath BS. Synapse classification and localization in Electron Micrographs. Pattern Recognit Lett. 2014; 43(1):17–24.CrossRefGoogle Scholar
 14.Kreshuk A, Straehle CN, Sommer C, Koethe U, Knott G, Hamprecht FA. Automated segmentation of synapses in 3D EM data. In: The IEEE International Symposium on Biomedical Imaging. Chicago: IEEE: 2011. p. 220–3.Google Scholar
 15.Becker C, Ali K, Knott G, Fua P. Learning context cues for synapse segmentation. IEEE Trans Med Imaging. 2013; 32(10):1864–77.CrossRefPubMedGoogle Scholar
 16.Kreshuk A, Koethe U, Pax E, Bock DD, Hamprecht FA. Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks. PLoS ONE. 2014; 9(2):e87351.CrossRefPubMedPubMedCentralGoogle Scholar
 17.Harris KM, Perry E, Bourne J, Feinberg M, Ostroff L, Hurlburt J. Uniform serial sectioning for transmission electron microscopy. J Neurosci. 2006; 26(47):12101–3.CrossRefPubMedGoogle Scholar
 18.Sun M, Zhang D, Guo H, Deng H, Li W, Xie Q. 3Dreconstruction of synapses based on EM images. In: International Conference on Materials Applications and Engineering. Harbin: IEEE: 2016. p. 1959–1964.Google Scholar
 19.Roberts M, Jeong WK, VazquezReina A, Unger M, Bischof H, Lichtman J, Pfister H. Neural process reconstruction from sparse user scribbles. In: Medical Image Computing and Computer Assisted Interventions Conference. Toronto: Springer: 2011. p. 621–628.Google Scholar
20. Ciresan DC, Giusti A, Gambardella LM, Schmidhuber J. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. In: Conference and Workshop on Neural Information Processing Systems. Lake Tahoe: NIPS Foundation: 2012. p. 2852–2860.
21. Beier T, Pape C, Rahaman N, et al. Multicut brings automated neurite segmentation closer to human performance. Nat Methods. 2017; 14(2):101–2.
22. Rao Q, Xiao C, Han H, Chen X, Shen L, Xie Q. Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images. In: SPIE Medical Imaging. Orlando: SPIE: 2017. p. 1013329.
23. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich: Springer: 2015. p. 234–241.
24. Roncal WG, Pekala M, Kaynig-Fittkau V, et al. VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale. In: British Machine Vision Conference. Swansea: BMVA Press: 2015. p. 81.1–81.13.
25. Staffler B, Berning M, Boergens KM, Gour A, van der Smagt P, Helmstaedter M. SynEM, automated synapse detection for connectomics. Elife. 2017; 6:e26414.
26. Dorkenwald S, Schubert PJ, Killinger MF, et al. Automated synaptic connectivity inference for volume electron microscopy. Nat Methods. 2017; 14(4):435–42.
27. Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017; 39(6):1137–49.
28. Dial R, Glover F, Karney D. Shortest path forest with topological ordering: An algorithm description in SDL. Transp Res Part B Methodol. 1980; 14(4):343–7.
29. Rother C, Kolmogorov V, Blake A. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Trans Graph. 2004; 23(3):309–14.
30. Schmid B, Schindelin J, Cardona A, Longair M, Heisenberg M. A high-level 3D visualization API for Java and ImageJ. BMC Bioinformatics. 2010; 11(1):274–80.
31. Liu C, Yuen J, Torralba A, Sivic J, Freeman WT. SIFT Flow: Dense Correspondence across Different Scenes. In: European Conference on Computer Vision. Marseille: Springer: 2008. p. 28–42.
32. Schaefer S, McPhail T, Warren J. Image deformation using moving least squares. In: ACM Transactions on Graphics (TOG). Boston: ACM: 2006. p. 533–40.
33. Li X, Ji G, Chen X, Ding W, Sun L, Xu W, Han H, Sun F. Large scale three-dimensional reconstruction of an entire Caenorhabditis elegans larva using AutoCUTS-SEM. J Struct Biol. 2017; 200:87–96.
34. Girshick R. Fast R-CNN. In: IEEE International Conference on Computer Vision. Santiago: IEEE: 2015. p. 1440–1448.
35. Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093. 2014.
36. Zeiler MD, Fergus R. Visualizing and Understanding Convolutional Networks. In: European Conference on Computer Vision. Zurich: Springer: 2014. p. 818–833.
37. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In: International Conference on Learning Representations. Banff: Springer: 2014. p. 1–14.
38. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. In: IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE: 2016. p. 770–8.
39. Kandel ER, Schwartz JH, Jessell TM. Principles of Neural Science, 4th edn. New York: McGraw-Hill; 2000.
40. Stauffer C, Grimson WEL. Adaptive Background Mixture Models for Real-Time Tracking. In: IEEE Conference on Computer Vision and Pattern Recognition. Fort Collins: IEEE: 1999. p. 246–52.
41. Boykov Y, Jolly M. Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images. In: IEEE International Conference on Computer Vision. Vancouver: IEEE: 2001. p. 105–12.
42. Lucchi A, Smith K, Achanta R, Knott G, Fua P. Supervoxel-Based Segmentation of Mitochondria in EM Image Stacks with Learned Shape Features. IEEE Trans Med Imaging. 2012; 31(2):474–86.
43. Li W, Deng H, Rao Q, Xie Q, Chen X, Han H. An automated pipeline for mitochondrial segmentation on ATUM-SEM stacks. J Bioinform Comput Biol. 2017; 15(3):1750015.
44. Slossberg R, Wetzler A, Kimmel R. Deep Stereo Matching with Dense CRF Priors. arXiv preprint arXiv:1612.01725. 2016.
45. Çiçek Ö, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens: Springer: 2016. p. 424–32.
46. Dou Q, Chen H, Jin Y, et al. 3D deeply supervised network for automatic liver segmentation from CT volumes. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens: Springer: 2016. p. 149–157.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.