Joint Feature Transformation and Selection Based on Dempster-Shafer Theory
In statistical pattern recognition, feature transformation maps the original feature space to a low-dimensional subspace in which the newly created features are discriminative and non-redundant, thereby improving the predictive power and generalization ability of subsequent classification models. Traditional transformation methods are not designed to handle data containing unreliable and noisy input features. To deal with such inputs, a new approach based on Dempster-Shafer theory is proposed in this paper. A specific loss function is constructed to learn the transformation matrix; a sparsity term included in this loss realizes joint feature selection during transformation, so as to limit the influence of unreliable input features on the output low-dimensional subspace. The proposed method has been evaluated on several synthetic and real datasets, showing good performance.
Keywords: Belief functions · Dempster-Shafer theory · Feature transformation · Feature selection · Pattern classification
This work was partly supported by the China Scholarship Council.
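To make the mechanism in the abstract concrete, the sketch below learns a linear transformation matrix A while penalizing the L2,1 norm of its rows, so that rows driven to zero jointly de-select unreliable input features across all output dimensions. This is a minimal illustration, not the paper's method: the evidential (Dempster-Shafer) loss is not reproduced, and an NCA-style soft-neighbour loss stands in as a placeholder. All function names, the numerical-gradient optimizer, and the parameter values are illustrative assumptions.

```python
import numpy as np

def l21_norm(A):
    # L2,1 norm: sum of the Euclidean norms of the rows of A.
    # Shrinking a row of A toward zero removes the corresponding
    # input feature from every output dimension (joint selection).
    return np.sum(np.linalg.norm(A, axis=1))

def soft_neighbour_loss(A, X, y):
    # NCA-style surrogate (Goldberger et al., 2005): negative mean
    # probability that each sample draws a same-class neighbour in
    # the transformed space Z = X @ A.  This stands in for the
    # paper's evidential loss, which is not reproduced here.
    Z = X @ A
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)
    same = (y[:, None] == y[None, :]).astype(float)
    return -np.sum(P * same) / len(y)

def numerical_grad(f, A, eps=1e-5):
    # Central-difference gradient; slow, but safe for a toy sketch.
    G = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        Ap, Am = A.copy(), A.copy()
        Ap[idx] += eps
        Am[idx] -= eps
        G[idx] = (f(Ap) - f(Am)) / (2 * eps)
    return G

def fit_transform_matrix(X, y, p=2, lam=0.05, lr=0.2, iters=150, seed=0):
    # Plain gradient descent on  loss(A) + lam * ||A||_{2,1}.
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(X.shape[1], p))
    obj = lambda M: soft_neighbour_loss(M, X, y) + lam * l21_norm(M)
    for _ in range(iters):
        A -= lr * numerical_grad(obj, A)
    return A
```

On a toy dataset with two informative features and one pure-noise feature, the learned A (shape d × p) both projects to p dimensions and, via the row-sparsity penalty, discounts the noise feature's row; the trade-off is set by `lam`.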