Cantonese porcelain classification and image synthesis by ensemble learning and generative adversarial network
Accurate recognition of modern and traditional porcelain styles is a challenging problem in Cantonese porcelain management owing to the wide variety of complex elements and patterns. We propose a hybrid system comprising a porcelain style identification module and an image recreation module. In the identification module, the prediction for an unknown porcelain sample is obtained by logistic regression over an ensemble of neural networks trained on top-ranked design signatures, which are selected by discriminative analysis and transformed into principal components. The synthesis module is built on a conditional generative adversarial network, which lets users supply a designed mask containing porcelain elements to generate synthesized images of Cantonese porcelain. Experimental results on 603 Cantonese porcelain images demonstrate that the proposed model outperforms other methods in terms of precision, recall, area under the receiver operating characteristic curve, and the confusion matrix. Case studies on image creation indicate that the proposed system has the potential to engage the community in understanding Cantonese porcelain and to promote this intangible cultural heritage.
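The identification module described above (principal-component features feeding an ensemble of neural networks whose outputs are combined by logistic regression) can be sketched as a standard stacking pipeline. This is a minimal illustration, not the authors' implementation: the dataset, network sizes, and number of ensemble members are stand-ins, with scikit-learn's digits set playing the role of the porcelain feature data.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in data: the digits set substitutes for extracted porcelain
# design signatures; the binary label mimics modern vs. traditional style.
X, y = load_digits(return_X_y=True)
y = (y < 5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transform the selected features into principal components.
pca = PCA(n_components=20, random_state=0).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Ensemble of small neural networks trained on the transformed features.
nets = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=s)
    .fit(Z_train, y_train)
    for s in range(3)
]

# Logistic regression fuses the per-network class probabilities
# into the final style prediction.
def stack(Z):
    return np.column_stack([n.predict_proba(Z)[:, 1] for n in nets])

meta = LogisticRegression().fit(stack(Z_train), y_train)
acc = meta.score(stack(Z_test), y_test)
print(f"stacked ensemble accuracy: {acc:.2f}")
```

The meta-classifier sees only the ensemble members' probability outputs, so it learns how much to trust each network rather than re-learning the feature space.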
Key words: Cantonese porcelain; Classification; Generative adversarial network; Creative arts