Cantonese porcelain classification and image synthesis by ensemble learning and generative adversarial network

  • Steven Szu-Chi Chen
  • Hui Cui
  • Ming-han Du
  • Tie-ming Fu
  • Xiao-hong Sun
  • Yi Ji
  • Henry Duh


Accurate recognition of modern and traditional porcelain styles is a challenging issue in Cantonese porcelain management owing to the large variety of complex elements and patterns. We propose a hybrid system comprising a porcelain style identification module and an image recreation module. In the identification module, the prediction for an unknown porcelain sample is obtained by logistic regression over an ensemble of neural networks trained on top-ranked design signatures, which are selected by discriminative analysis and transformed into principal components. The synthesis module is built on a conditional generative adversarial network, which lets users supply a designed mask of porcelain elements to generate synthesized images of Cantonese porcelain. Experimental results on 603 Cantonese porcelain images demonstrate that the proposed model outperforms other methods in terms of precision, recall, area under the receiver operating characteristic curve, and confusion matrix. Case studies on image creation indicate that the proposed system has the potential to engage the community in understanding Cantonese porcelain and to promote this intangible cultural heritage.
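The identification pipeline described above (principal-component transform of design-signature features, an ensemble of neural networks, and logistic regression as the combiner) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimension, network sizes, and the synthetic data are all assumptions, and scikit-learn's StackingClassifier is used as a stand-in for the paper's ensembling scheme.

```python
# Illustrative sketch of the identification module: PCA-transformed
# features feed an ensemble of small neural networks whose outputs are
# combined by a logistic-regression meta-learner. All dimensions and
# the synthetic data below are assumptions for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # stand-in design-signature features
y = rng.integers(0, 2, size=200)    # 0 = traditional, 1 = modern (illustrative labels)

# Ensemble of neural networks combined by logistic regression.
ensemble = StackingClassifier(
    estimators=[
        ("nn1", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)),
        ("nn2", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=2)),
    ],
    final_estimator=LogisticRegression(),
)

# Principal-component transform precedes classification.
model = make_pipeline(PCA(n_components=10), ensemble)
model.fit(X, y)
pred = model.predict(X[:5])
print(pred.shape)  # (5,)
```

In the paper the features are top-ranked design signatures rather than random vectors, and the ensembling details differ; the sketch only shows the overall structure of stacking neural networks under a logistic-regression combiner after a principal-component transform.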

Key words

Cantonese porcelain; Classification; Generative adversarial network; Creative arts





Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
  2. School of Art and Design, Guangdong University of Technology, Guangzhou, China
