Accurate and High Throughput Cell Segmentation Method for Mouse Brain Nuclei Using Cascaded Convolutional Neural Network
Recent innovations in tissue clearing and light-sheet microscopy allow rapid acquisition of three-dimensional, micron-resolution images of fluorescently labeled brain samples. These data allow the observation of every cell in the brain, necessitating an accurate and high-throughput cell segmentation method in order to perform basic operations like counting the number of cells within a region; however, this poses large computational challenges given the noise in the data and the sheer number of features to identify. Inspired by the success of deep learning techniques in medical imaging, we propose a supervised learning approach using a convolutional neural network (CNN) to learn the non-linear relationship between local image appearance (within an image patch) and manual segmentations (cell or background at the center of the underlying patch). To improve segmentation accuracy, we further integrate high-level contextual features with low-level image appearance features. Specifically, we extract contextual features from the probability map of cells (the output of the current CNN) and train the next CNN on both patch-wise image appearance and contextual features, extending previous methods into a cascaded approach. Using (a) high-level contextual features extracted from the cell probability map and (b) the spatial information of cell-to-cell locations, our cascaded CNN progressively improves segmentation accuracy. We have evaluated the segmentation results on mouse brain images and compared them with conventional image processing approaches. Our cascaded CNN method achieves more accurate and robust segmentation results, indicating the promising potential of the proposed cell segmentation method for use in large tissue-cleared images.
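The cascaded scheme described above can be sketched as follows. This is a minimal, illustrative sketch only, not the authors' implementation: the patch-wise CNN is replaced by a single logistic unit (`TinyClassifier`, a hypothetical stand-in), and all function names and parameters are assumptions for illustration. Each stage trains a classifier on appearance patches, augmented from the second stage onward with patches from the previous stage's probability map (the contextual features).

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patch_features(image, context, r):
    """Flatten the (2r+1)x(2r+1) appearance patch around every interior
    pixel; if a context (probability) map is given, append its patch too."""
    h, w = image.shape
    feats, coords = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            f = image[i - r:i + r + 1, j - r:j + r + 1].ravel()
            if context is not None:
                f = np.concatenate(
                    [f, context[i - r:i + r + 1, j - r:j + r + 1].ravel()])
            feats.append(f)
            coords.append((i, j))
    return np.array(feats), coords

class TinyClassifier:
    """Hypothetical stand-in for the patch-wise CNN: one logistic unit
    trained by gradient descent (a real implementation would use a conv net)."""
    def fit(self, X, y, lr=0.5, steps=500):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))
            g = p - y                      # gradient of the logistic loss
            self.w -= lr * X.T @ g / len(y)
            self.b -= lr * g.mean()
        return self

    def predict_proba(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

def cascade_segment(image, labels, r=1, stages=2):
    """Auto-context cascade: each stage trains on appearance patches plus
    the previous stage's probability map and emits a refined map."""
    h, w = image.shape
    prob = None
    for _ in range(stages):
        X, coords = extract_patch_features(image, prob, r)
        y = np.array([labels[i, j] for i, j in coords])
        p = TinyClassifier().fit(X, y).predict_proba(X)
        prob = np.zeros((h, w))
        for (i, j), v in zip(coords, p):
            prob[i, j] = v
    return prob

# Toy example: a bright blob plays the role of a cell nucleus.
img = np.zeros((12, 12))
img[4:8, 4:8] = 1.0
img += 0.05 * rng.standard_normal(img.shape)
lab = np.zeros((12, 12))
lab[4:8, 4:8] = 1.0
prob = cascade_segment(img, lab)
```

The key design point is that `prob` from one stage re-enters the next stage as an extra feature channel, so later classifiers can exploit the spatial layout of nearby cell responses rather than appearance alone.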
Keywords: Convolutional neural network · Contextual feature · Cascade learning · Cell segmentation · Mouse microscopy image
The research is supported by the National Science Foundation (NSF 1649916). The first author is supported by the China Scholarship Council for a one-year visit to the University of North Carolina at Chapel Hill.