Curriculum Learning: A Survey

International Journal of Computer Vision

Abstract

Training machine learning models in a meaningful order, from easy samples to hard ones, a strategy known as curriculum learning, can improve performance over the standard training approach based on random data shuffling, without additional computational cost. Curriculum learning strategies have been successfully employed across all areas of machine learning, in a wide range of tasks. However, the need to rank the samples from easy to hard, and to choose the right pacing function for introducing more difficult data, can limit the adoption of curriculum approaches. In this survey, we show how these limitations have been tackled in the literature, and we present curriculum learning instantiations for various machine learning tasks. We manually construct a multi-perspective taxonomy of curriculum learning approaches based on several classification criteria. We further build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm, linking the discovered clusters to our taxonomy. Finally, we outline promising directions for future work.
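
To make the easy-to-hard training scheme concrete, the sketch below implements a minimal curriculum loop in Python: samples are sorted once by a difficulty score, and a pacing function gradually enlarges the pool of samples from which mini-batches are drawn. The linear pacing schedule and the distance-to-mean difficulty scorer are illustrative assumptions for this sketch, not the specific choices of any surveyed method.

import numpy as np

def linear_pacing(step, total_steps, start_frac=0.2):
    # Fraction of the easy-to-hard sorted data available at this step;
    # grows linearly from start_frac to 1.0 (one of many possible pacing functions).
    return min(1.0, start_frac + (1.0 - start_frac) * step / total_steps)

def curriculum_batches(x, y, difficulty, total_steps, batch_size=32, seed=0):
    # Sort samples from easy to hard once, then sample each mini-batch
    # uniformly from the currently available (easiest) portion of the data.
    rng = np.random.default_rng(seed)
    order = np.argsort(difficulty)
    x, y = x[order], y[order]
    for step in range(total_steps):
        n_avail = max(batch_size, int(linear_pacing(step, total_steps) * len(x)))
        idx = rng.choice(n_avail, size=batch_size, replace=False)
        yield x[idx], y[idx]

# Toy usage with a hypothetical difficulty scorer (distance to the data mean).
rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 16)).astype(np.float32)
y = (x[:, 0] > 0).astype(np.int64)
difficulty = np.linalg.norm(x - x.mean(axis=0), axis=1)
for xb, yb in curriculum_batches(x, y, difficulty, total_steps=500):
    pass  # a model update, e.g. model.train_step(xb, yb), would go here

Swapping in a different difficulty scorer (e.g., the loss of a pretrained model) or a different pacing schedule changes the curriculum without touching the training loop, which reflects how the two design choices factor apart in most curriculum methods.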


Acknowledgements

The authors would like to thank the reviewers for their useful feedback.

Author information

Corresponding author

Correspondence to Radu Tudor Ionescu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by Julien Mairal.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by a grant from the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2019-0235, within PNCDI III. This article has also benefited from the support of the Romanian Young Academy, which is funded by Stiftung Mercator and the Alexander von Humboldt Foundation for the period 2020–2022. This work was also supported by the European Union's Horizon 2020 research and innovation programme under grant No. 951911 - AI4Media.

Cite this article

Soviany, P., Ionescu, R.T., Rota, P. et al. Curriculum Learning: A Survey. Int J Comput Vis 130, 1526–1565 (2022). https://doi.org/10.1007/s11263-022-01611-x
