Abstract
In the psychometric sciences, a common problem is the choice of a good response scale. Every scale has, by its nature, a propensity to lead respondents towards mainly positive or negative ratings. This paper investigates possible causes of the discordance between two ordinal scales evaluating the same goods or services. In the psychometric literature, Cohen's Kappa, particularly in its weighted version, is one of the most important indices for evaluating the strength of agreement, or disagreement, between two nominal variables. In this paper, a new index is proposed. A procedure to determine the lower and upper triangles of a non-square table is also implemented, so as to generalize the index to the comparison of two scales with different numbers of categories. A test is set up to verify the tendency of one scale to yield different ratings from another scale. A study with real data is conducted.
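For background, the weighted kappa the abstract builds on can be sketched as follows. This is a minimal textbook formulation for a square contingency table (Cohen 1968), with linear or quadratic disagreement weights; it is not the new index proposed in the paper, and the function name `weighted_kappa` is illustrative.

```python
import numpy as np

def weighted_kappa(table, weighting="linear"):
    """Cohen's weighted kappa for a square k x k contingency table.

    Disagreement weights: |i - j| (linear) or (i - j)**2 (quadratic).
    Returns 1 when the raters agree perfectly, 0 under chance agreement.
    """
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    p = table / table.sum()                      # observed cell proportions
    e = np.outer(p.sum(axis=1), p.sum(axis=0))   # expected proportions under independence
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weighting == "linear" else (i - j) ** 2
    # kappa_w = 1 - (observed weighted disagreement) / (expected weighted disagreement)
    return 1.0 - (w * p).sum() / (w * e).sum()
```

With linear weights on a 2x2 table this reduces to the unweighted Cohen's kappa; the generalization studied in the paper concerns non-square tables, where the raters use scales with different numbers of categories and these triangle weights must be redefined.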
References
Agresti, A. (2002). Categorical data analysis. New York: Wiley.
Bonanomi, A. (2004). Variabili ordinali e trasformazioni di scala, con particolare riferimento alla stima dei parametri dei modelli interpretativi con variabili latenti [Ordinal variables and scale transformations, with particular reference to the estimation of the parameters of interpretative models with latent variables]. Methodological and Applied Statistics, University of Milan-Bicocca.
Cicchetti, D. V., & Allison, T. (1971). A new procedure for assessing reliability of scoring EEG sleep recording. The American Journal of EEG Technology, 11, 101–109.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Cohen, J. (1968). Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
Everitt, B. S. (1968). Moments of the statistics kappa and weighted kappa. British Journal of Mathematical and Statistical Psychology, 21, 97–103.
Fleiss, J. L., Cohen, J., & Everitt, B. S. (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 72, 323–327.
Horn, R. A., & Johnson, C. R. (1985). Matrix analysis. Cambridge: Cambridge University Press.
Hubert, L. J. (1978). A general formula for the variance of Cohen’s weighted kappa. Psychological Bulletin, 85(1), 183–184.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
Vanbelle, S., & Albert, A. (2009). A note on the linearly weighted kappa coefficient for ordinal scales. Statistical Methodology, 6, 157–163.
Warrens, M. J. (2012). Some paradoxical results for the quadratically weighted kappa. Psychometrika, 77, 315–323.
Warrens, M. J. (2013). Cohen’s weighted kappa with additive weights. Advances in Data Analysis and Classification, 7, 41–55.
© 2014 Springer International Publishing Switzerland
Bonanomi, A. (2014). A New Index for the Comparison of Different Measurement Scales. In: Vicari, D., Okada, A., Ragozini, G., Weihs, C. (eds) Analysis and Modeling of Complex Data in Behavioral and Social Sciences. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Cham. https://doi.org/10.1007/978-3-319-06692-9_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-06691-2
Online ISBN: 978-3-319-06692-9
eBook Packages: Mathematics and Statistics (R0)