Stem cell motion-tracking by using deep neural networks with multi-output
Automated stem cell motility analysis aims at reliable processing and evaluation of cell behaviors such as translocation, mitosis, and death. Cell tracking plays an important role in this research. In practice, tracking stem cells is difficult because they move frequently, deform rapidly, and occupy only a few pixels in microscopy images. Previous tracking approaches designed to address this problem have been unable to generalize to the rapid morphological deformation of cells in a complex living environment, especially in real-time tracking tasks. Herein, a deep learning framework with a convolutional structure and multi-output layers is proposed to overcome these stem cell tracking problems. The convolutional structure learns robust cell features by transferring deep features learned on massive visual data. With multi-output layers, the framework tracks a cell's motion and simultaneously detects its mitosis as an auxiliary task, which improves the generalization ability of the model and facilitates practical applications in stem cell research. The proposed framework, tracking and detection neural networks, also contains a particle filter-based motion model, a specialized cell sampling strategy, and a corresponding model update strategy. Applied to a microscopy image dataset of human stem cells, it demonstrates increased tracking performance and robustness compared with other frequently used methods. Moreover, mitosis detection performance was verified against manually labeled mitotic events of the tracked cells. Experimental results demonstrate the good performance of the proposed framework in addressing the problems associated with stem cell tracking.
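The abstract does not give the architecture in detail, but the core idea of the multi-output design can be illustrated: shared convolutional features feed two heads, one regressing the cell's displacement (tracking) and one classifying mitosis (the auxiliary task). The sketch below is a minimal, hypothetical numpy illustration of that idea only; the class name, patch size, and weights are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation standing in for the shared conv layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

class MultiOutputTrackerSketch:
    """Hypothetical sketch: one shared feature extractor, two output heads.

    Head 1 regresses the cell's (dx, dy) displacement (tracking output);
    head 2 outputs a mitosis probability (auxiliary detection output).
    """

    def __init__(self, patch=16, k=3):
        feat_dim = (patch - k + 1) ** 2
        self.kernel = rng.standard_normal((k, k)) * 0.1          # shared conv filter
        self.w_pos = rng.standard_normal((feat_dim, 2)) * 0.01   # tracking head
        self.w_mit = rng.standard_normal((feat_dim, 1)) * 0.01   # mitosis head

    def forward(self, patch):
        # Shared representation: conv + ReLU, flattened.
        feat = np.maximum(conv2d_valid(patch, self.kernel), 0).ravel()
        offset = feat @ self.w_pos                               # predicted (dx, dy)
        p_mitosis = 1.0 / (1.0 + np.exp(-(feat @ self.w_mit)))   # sigmoid probability
        return offset, float(p_mitosis)

net = MultiOutputTrackerSketch()
offset, p_mitosis = net.forward(rng.standard_normal((16, 16)))
```

In a multi-task setting like this, the two heads would be trained jointly (e.g., a regression loss on the offset plus a classification loss on the mitosis output), which is what lets the auxiliary task regularize the shared features.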
Keywords: Cell tracking; Neural networks; Mitosis detection; Multi-output
This work was supported by the Key Program of National Natural Science Foundation of China (61402306, 61432012, U1435213).
Compliance with ethical standards
Conflict of interest
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature in any product, service, or company that could be construed as influencing the position presented in, or the review of, this manuscript.