Regression via Logic Supervised Classification
An approach to restoring dependences (regressions) is proposed that is based on solving supervised classification problems. The main task is to find the optimal partition of the range of the dependent variable into a finite number of intervals, i.e., to determine the optimal number of change-points and their positions. This task is formulated as the search for, and application of, a piecewise constant function. To restore piecewise constant functions, a local discrete optimization problem is solved using a model of logic-based supervised classification in leave-one-out mode. The value of the dependent variable is computed in two steps. First, the classification problem is solved for the feature vector. Then, the dependent variable is calculated as half the sum of the change-point values of the corresponding class.
Keywords: regression, supervised classification, discrete optimization, approximation
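The two-step scheme in the abstract can be sketched in code. This is a minimal illustration under simplifying assumptions: the paper's logic-based classifier is replaced by a plain 1-nearest-neighbour classifier, and the change-points are equally spaced rather than found by discrete optimization; all function names are illustrative, not from the paper.

```python
# Sketch: regression via classification on a discretized target.
# Assumptions (not from the paper): equal-width intervals instead of
# optimized change-points, and a 1-NN classifier instead of the
# logic-based supervised classifier.

def make_intervals(ys, k):
    """Split the range of y into k equal-width intervals; the interval
    endpoints play the role of change-points."""
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / k
    return [(lo + i * width, lo + (i + 1) * width) for i in range(k)]

def interval_index(y, intervals):
    """Class label of y = index of the interval containing it."""
    for i, (_, b) in enumerate(intervals):
        if y <= b:
            return i
    return len(intervals) - 1

def predict(x, xs, ys, intervals):
    """Step 1: classify x (here via its nearest training vector).
    Step 2: return half the sum of the class's change-points,
    i.e. the interval midpoint."""
    nearest = min(range(len(xs)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(xs[i], x)))
    a, b = intervals[interval_index(ys[nearest], intervals)]
    return (a + b) / 2

# Toy data: one-dimensional feature vectors with a roughly linear target.
xs = [(0.0,), (1.0,), (2.0,), (3.0,)]
ys = [0.1, 0.9, 2.1, 2.9]
intervals = make_intervals(ys, 3)
print(predict((2.8,), xs, ys, intervals))
```

The prediction is piecewise constant in x: every query falling in the same class returns the same interval midpoint, which is the "half of the sum of change-points" rule from the abstract.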