Texture Classification with Patch Autocorrelation Features
Recently, a novel approach to capturing the autocorrelation of an image, termed Patch Autocorrelation Features (PAF), was proposed. The PAF approach was successfully evaluated in a series of handwritten digit recognition experiments on the popular MNIST data set. However, the PAF representation has limited applicability, because it is not invariant to affine transformations. In this work, the PAF approach is extended to become invariant to image transformations such as translation and rotation. First, several features are extracted from each image patch sampled at regular intervals. Based on these features, a vector of similarity values is computed between each pair of patches. Then, the similarity vectors are clustered together such that the spatial offset between the patches of each pair is roughly the same. Finally, the mean and the standard deviation of each similarity value are computed for each group of similarity vectors. These statistics are concatenated into a feature vector called Translation and Rotation Invariant Patch Autocorrelation Features (TRIPAF). The TRIPAF vector essentially records information about the repeating patterns within an image at various spatial offsets. Several texture classification experiments are conducted on the Brodatz data set to evaluate the TRIPAF approach. The empirical results indicate that TRIPAF can improve performance by up to \(10\,\%\) over a system that uses the same features, but extracts them from entire images. Furthermore, state-of-the-art accuracy rates are obtained when the TRIPAF approach is combined with a scale-invariant model, namely a bag-of-visual-words model based on SIFT features.
Keywords: Patch-based method · Texture classification · Rotation invariance · Translation invariance · Kernel method · Brodatz
Dan Popescu has been funded by the National Research Program STAR, project 71/2013: Multisensory robotic system for aerial monitoring of critical infrastructure systems - MUROS. Andreea-Lavinia Popescu has been supported through the Financial Agreement POSDRU/159/1.5/S/134398.
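The TRIPAF pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the per-patch features (mean and standard deviation of intensities), the similarity measure (absolute feature differences), and the offset binning scheme are simplifying assumptions, not the exact choices made in the paper.

```python
import numpy as np

def tripaf(image, patch_size=8, stride=8, n_bins=10):
    """Sketch of TRIPAF: extract features from patches sampled on a
    regular grid, compute a similarity vector for each pair of patches,
    group the pairs whose spatial offset (distance between patch
    centers) is roughly the same, then concatenate the mean and the
    standard deviation of the similarities in each group."""
    h, w = image.shape
    centers, feats = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            p = image[y:y + patch_size, x:x + patch_size]
            # illustrative per-patch features: mean and std of intensities
            feats.append([p.mean(), p.std()])
            centers.append([y + patch_size / 2.0, x + patch_size / 2.0])
    feats, centers = np.array(feats), np.array(centers)
    n = len(feats)
    # pairwise similarity vectors and the spatial offset of each pair;
    # using the distance between centers (not the 2D offset) makes the
    # grouping invariant to translation and rotation
    sims, dists = [], []
    for i in range(n):
        for j in range(i + 1, n):
            sims.append(np.abs(feats[i] - feats[j]))
            dists.append(np.linalg.norm(centers[i] - centers[j]))
    sims, dists = np.array(sims), np.array(dists)
    # cluster pairs whose spatial offset is roughly the same (distance bins)
    bins = np.linspace(dists.min(), dists.max() + 1e-9, n_bins + 1)
    out = []
    for b in range(n_bins):
        mask = (dists >= bins[b]) & (dists < bins[b + 1])
        if mask.any():
            out.extend(sims[mask].mean(axis=0))  # mean similarity per group
            out.extend(sims[mask].std(axis=0))   # std of similarity per group
        else:
            out.extend([0.0] * (2 * sims.shape[1]))
    return np.array(out)
```

With two features per patch and ten offset bins, the sketch produces a fixed-length vector of 40 values regardless of where the texture pattern appears in the image.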