
Research on the Effectiveness of Individualized Feedback on Voting Behavior

  • Conference paper
Pacific Rim Objective Measurement Symposium (PROMS) 2016 Conference Proceedings

Abstract

This study investigates the effectiveness of individualized feedback to voters from the perspective of changes in decision-making. We conducted a two-phase study. In the first phase, 163 participants from different backgrounds took part in a mock vote for two presidential candidates based on the candidates’ performance across three debates, and wrote and rank-ordered six reasons for their votes. In the second phase, participants received feedback that included the results of a Rasch model analysis of the reasons for voting. Shortly after reading the analysis, voters responded to a new set of measurement items and then took part in a second mock vote. A detailed comparative analysis of voters’ protocols before and after feedback indicates that the feedback helps voters make decisions; that is, they incorporated the feedback into their voting. The study also sheds further light on their rating behavior. The feedback, however, affects individuals differently: some raters adjusted their decisions, while others remained unchanged.


References

  • Alvarez, R. M. (1998). Information and elections (Michigan Studies in Political Analysis). Ann Arbor: University of Michigan Press.

  • Andersen, E. B. (1997). The rating scale model. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory. New York: Springer.

  • Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43(4), 561–573.

  • Ansolabehere, S., Behr, R., & Iyengar, S. (1993). The media game: American politics in the television age. New York: Macmillan.

  • Beck, P. A., Dalton, R. J., Greene, S., & Huckfeldt, R. (2002). The social calculus of voting: Interpersonal, media, and organizational influences on presidential choices. American Political Science Review, 96(1), 57–73.

  • Bélanger, E., & Meguid, B. M. (2008). Issue salience, issue ownership, and issue-based vote choice. Electoral Studies, 27(3), 477–491.

  • Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting: A study of opinion formation in a presidential campaign. Chicago: University of Chicago Press.

  • Bilodeau, A. (2006). Non-response error versus measurement error: A dilemma when using mail questionnaires for election studies. Australian Journal of Political Science, 41(1), 107–118.

  • Bond, T. G., & Fox, C. M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.

  • Bond, T. G., & Fox, C. M. (2007). Applying the Rasch model: Fundamental measurement in the human sciences (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

  • Druckman, J. N. (2004). Priming the vote: Campaign effects in a U.S. Senate election. Political Psychology, 25(4), 577–594.

  • Franks, A. S., & Scherr, K. C. (2015). Using moral foundations to predict voting behavior: Regression models from the 2012 U.S. presidential election. Social Issues and Public Policy, 12(1), 213–232.

  • Johnston, R., Blais, A., Brady, H. E., & Crête, J. (1992). Letting the people decide: Dynamics of a Canadian election. Stanford, CA: Stanford University Press.

  • Knoch, U. (2011). Investigating the effectiveness of individualized feedback to rating behavior—A longitudinal study. Language Testing, 28(2), 179–200.

  • Lenz, G. S. (2009). Learning and opinion change, not priming: Reconsidering the priming hypothesis. American Journal of Political Science, 53(4), 821–837.

  • Linacre, J. M. (1999). Investigating rating scale category utility. Journal of Outcome Measurement, 3(2), 103.

  • Llewellyn, A. M., & Skevington, S. M. (2016). Evaluating a new methodology for providing individualized feedback in healthcare on quality of life and its importance, using the WHOQOL-BREF in a community population. Quality of Life Research, 25(3), 605–614.

  • McNamara, T. (1996). Measuring second language performance. London: Longman.

  • Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research (Expanded edition, 1980. Chicago: The University of Chicago Press).

  • Rasch, G. (1961). On general laws and the meaning of measurement in psychology. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability (pp. 321–334). Berkeley: University of California Press.

  • Royal, K. D., Ellis, A., Ensslen, A., & Homan, A. (2010). Rating scale optimization in survey research: An application of the Rasch rating scale model. Journal of Applied Quantitative Methods, 5(4), 607–617.

  • Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press.

  • Winzenberg, T., Oldenburg, B., Frendin, S., De Wit, L., Riley, M., & Jones, G. (2006). The effect on behavior and bone mineral density of individualized bone mineral density feedback and educational interventions in premenopausal women: A randomized controlled trial. BMC Public Health, 6(1), 12.

  • Witters, D., Newport, F., & Saad, L. (2016). Which issues are the most critical for Trump, Clinton? Gallup.

  • Wright, B. D., & Stone, M. H. (1979). Best test design. Chicago: MESA Press.


Author information


Corresponding author

Correspondence to Jianghong Han .


Appendices

Appendix: Example Feedback Report for Rater X

  • Report on Individual Feedback Information of Voter X

Your individual feedback is as follows:

  • (1) Observed count

Unlike some respondents in the group, you did not use category 1 of the rating scale. As a result, we almost had to collapse adjacent categories. To improve measurement quality, we would like to invite you to take part in another mock vote.
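The category-usage check described above can be sketched as follows; the ratings and the 1–4 scale are purely illustrative, standing in for one voter's responses:

```python
from collections import Counter

# Hypothetical ratings from one voter on an assumed 1-4 rating scale.
ratings = [2, 3, 3, 4, 2, 3, 4, 4]

# Observed count per category; a category that is never used is a
# candidate for collapsing into an adjacent category.
counts = Counter(ratings)
unused = [c for c in range(1, 5) if counts[c] == 0]
print(unused)  # category 1 was never used
```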

  • (2) Consistency

Internal consistency refers to the relative consistency of your scores, that is, whether they follow a stable pattern. The index used is the weighted mean square (infit). An infit value above 1.4 identifies a rater as rating with too much variation (misfit), while a value below 0.8 identifies a rater as rating with too little variation. Values between 0.8 and 1.4 fall in the normal range, and a value of 1 indicates that the data fit the statistical model well.

Your infit value is 0.79. This means further investigation is necessary to ascertain whether the lack of variation reflects a central tendency effect; in other words, whether you used only the same categories for both candidates. Do not hesitate to use the full range of the scale.
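A minimal sketch of the infit computation and the cut-offs above. The input arrays are illustrative; in practice the expected scores and model variances would come from the Rasch analysis itself:

```python
import numpy as np

def infit(observed, expected, variance):
    """Weighted mean square (infit): squared residuals summed over
    observations and divided by the total model variance,
    sum((x - E)^2) / sum(W)."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    variance = np.asarray(variance, dtype=float)
    residuals = observed - expected
    return float(residuals @ residuals) / float(variance.sum())

def interpret(ms):
    """Apply the 0.8 / 1.4 cut-offs described in the feedback report."""
    if ms > 1.4:
        return "too much variation (misfit)"
    if ms < 0.8:
        return "too little variation"
    return "normal range"

# Illustrative data: a rater whose scores hug the model expectations
# too closely yields a low infit value, as in the report above.
ms = infit([3.0, 3.1, 2.9, 3.0], [3.0, 3.0, 3.0, 3.0],
           [1.0, 1.0, 1.0, 1.0])
print(ms, interpret(ms))
```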

  • (3) Bias

Bias refers to an individual rater's systematic tendency, relative to the rating scale criteria, to over- or under-estimate the value of a population parameter. Raters were considered to have a significant bias if they displayed a z-score above +2 or below −2.

In your case, your rating was slightly harsh. Keep this feedback in mind when you rate again.
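The ±2 z-score rule can be sketched as follows; the rater labels and bias estimates are hypothetical, standing in for the bias measures a many-facet Rasch analysis would produce:

```python
import numpy as np

def flag_biased_raters(bias_by_rater, threshold=2.0):
    """Standardize the group's bias estimates and flag raters whose
    z-score lies above +threshold (harsh) or below -threshold (lenient)."""
    names = list(bias_by_rater)
    b = np.array([bias_by_rater[n] for n in names], dtype=float)
    z = (b - b.mean()) / b.std(ddof=1)  # sample standard deviation
    return {n: ("harsh" if zi > 0 else "lenient")
            for n, zi in zip(names, z) if abs(zi) > threshold}

# Rater X's bias estimate sits far above the rest of the group,
# so only X is flagged (on the harsh side).
flags = flag_biased_raters(
    {"A": 0.0, "B": 0.1, "C": -0.1, "D": 0.05, "E": -0.05, "X": 3.0})
print(flags)
```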

Overall Evaluation

Overall, your ratings were reasonable when compared with the other raters in the group. You rated consistently and made good use of the rating criteria. However, your ratings were slightly severe. Next time you rate, keep this feedback in mind when you have trouble deciding between two rating scale points.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Liu, C., Han, J. (2018). Research on the Effectiveness of Individualized Feedback on Voting Behavior. In: Zhang, Q. (eds) Pacific Rim Objective Measurement Symposium (PROMS) 2016 Conference Proceedings. Springer, Singapore. https://doi.org/10.1007/978-981-10-8138-5_16


  • DOI: https://doi.org/10.1007/978-981-10-8138-5_16


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-8137-8

  • Online ISBN: 978-981-10-8138-5

  • eBook Packages: Education (R0)
