A constrained clusterwise procedure for segmentation
A procedure for segmentation by a constrained hierarchical clustering algorithm is proposed, using a criterion (or response) variable X and k structural factors or predictors. It yields classes that differ mainly in the (conditional) distributions of X computed within each segment. Since the procedure operates on combinations of factor levels (and only indirectly on individuals), the methodology can be applied even to very large populations with no increase in computational complexity.
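To illustrate why working on combinations of factor levels keeps the cost independent of population size, the following sketch (hypothetical names and data; not the paper's algorithm) aggregates the response variable X over the cells of a factor cross-classification, so that any subsequent clustering step sees only cells, never individuals:

```python
from collections import defaultdict

def cross_classify(records, factor_keys, response_key):
    """Group individual records by their combination of factor levels.

    Returns a dict mapping each factor-level combination (a tuple)
    to the list of response values observed in that cell.  The number
    of cells is bounded by the product of the factor cardinalities
    and does not grow with the number of individuals.
    """
    cells = defaultdict(list)
    for rec in records:
        key = tuple(rec[k] for k in factor_keys)
        cells[key].append(rec[response_key])
    return dict(cells)

# Hypothetical data: two binary structural factors and a numeric response X.
records = [
    {"f1": "a", "f2": "low",  "X": 1.0},
    {"f1": "a", "f2": "low",  "X": 2.0},
    {"f1": "a", "f2": "high", "X": 5.0},
    {"f1": "b", "f2": "low",  "X": 3.0},
]

cells = cross_classify(records, ["f1", "f2"], "X")
# At most 2 x 2 = 4 cells, however many individual records there are.
```

Clustering would then merge adjacent cells whose conditional distributions of X are most similar, under whatever adjacency constraint the factor structure imposes.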
Key words: Segmentation, structural factors, adjacency, constrained classification