Abstract
In several areas of research, data collection in the usual sense may not be feasible or may not be required. Relevant data, though not collected expressly to study the underlying phenomenon, may be available in documents, artifacts, public pronouncements, images, newspaper reports, television serials and the like. Messages bearing on the research questions that are covertly contained in such data have to be deciphered, coded and then subjected to some form of analysis before relevant conclusions can be drawn. This exercise is known as Content Analysis and holds great appeal for social scientists. Usually, more than one coder (also called a referee or rater) codes the data into categories, and agreement among the raters is examined using simple statistical tools. Once sufficient agreement among the raters has been established, the relevant research hypotheses are tested using statistical techniques appropriate to categorical data. The present chapter discusses several measures of concordance among raters applicable to different situations and provides their standard errors, so that observed values of the measures can be tested for significance. Several illustrations are provided to facilitate application of the techniques used in Content Analysis.
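As a concrete illustration of the kind of concordance measure the chapter discusses, the following sketch computes Cohen's kappa for two raters coding the same items into nominal categories. The function name and the example data are our own; the chapter itself covers this and several other measures, together with their standard errors.

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: chance-corrected agreement between two raters
    who assign the same items to nominal categories."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    categories = set(ratings1) | set(ratings2)
    # Observed proportion of items on which the two raters agree
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Expected agreement if each rater assigned categories
    # independently at their own marginal rates
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in categories)
    return (p_o - p_e) / (1 - p_e)

# Two raters agree on 3 of 4 items; kappa corrects for chance agreement
print(cohens_kappa(['A', 'A', 'B', 'B'], ['A', 'A', 'B', 'A']))  # 0.5
```

Kappa equals 1 under perfect agreement and 0 when observed agreement is no better than chance; whether an observed value differs significantly from zero is judged against its standard error, as developed in the chapter.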
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this chapter
Mukherjee, S. P., Sinha, B. K., & Chattopadhyay, A. (2018). Content Analysis. In: Statistical Methods in Social Science Research. Springer, Singapore. https://doi.org/10.1007/978-981-13-2146-7_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-2145-0
Online ISBN: 978-981-13-2146-7
eBook Packages: Mathematics and Statistics (R0)