
Does audience matter? Comparing teachers’ and non-teachers’ application and perception of quality rubrics for evaluating Open Educational Resources

  • Min Yuan
  • Mimi Recker
Research Article

Abstract

While many rubrics have been developed to guide people in evaluating the quality of Open Educational Resources (OER), few studies have empirically investigated how different people apply and perceive such rubrics. This study examines how participants (22 teachers and 22 non-teachers) applied three quality rubrics (comprising a total of 17 quality indicators) to evaluate 20 OER, and how they perceived the utility of these rubrics. Results showed that both teachers and non-teachers found some indicators more difficult to apply than others, and displayed different response styles on different indicators. In addition, teachers gave higher overall ratings to OER, while non-teachers’ ratings generally showed higher agreement values. Regarding rubric perception, both groups perceived these rubrics as useful in helping them find high-quality OER, but differed in their preferences for particular rubrics and indicators.
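The abstract reports “agreement values” for teachers’ and non-teachers’ ratings without naming the statistic used. As an illustration only, the sketch below computes a simple mean pairwise exact-agreement score for one quality indicator; the function, the example data, and the choice of percent agreement as the measure are hypothetical and are not taken from the paper, which may use a different reliability statistic.

    # Hypothetical sketch: mean pairwise exact agreement for one quality indicator.
    # Assumes each rater scored the same set of OER on an ordinal scale.
    from itertools import combinations

    def pairwise_agreement(ratings):
        """ratings: one list of scores per rater, aligned by OER.
        Returns the proportion of OER on which a pair of raters gave the
        identical score, averaged over all rater pairs."""
        pair_scores = []
        for a, b in combinations(ratings, 2):
            matches = sum(1 for x, y in zip(a, b) if x == y)
            pair_scores.append(matches / len(a))
        return sum(pair_scores) / len(pair_scores)

    # Example: three raters scoring five OER on one indicator (1-3 scale).
    raters = [[3, 2, 3, 1, 2],
              [3, 3, 3, 1, 2],
              [2, 2, 3, 1, 3]]
    print(round(pairwise_agreement(raters), 2))  # 0.6

A higher value indicates that raters more often assigned identical scores to the same resource; comparing this value across the teacher and non-teacher groups is one simple way the reported group difference in agreement could be quantified.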

Keywords

Open Educational Resources · Quality rubrics · Audience · Rubric perception · Rubric application

Notes

Acknowledgements

This research was partially supported by Utah State University. Portions of this research were previously presented at the American Educational Research Association Annual Meeting (AERA 2016) in Washington, DC. We thank Drs. Anne Diekema and Andy Walker for their valuable input.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Association for Educational Communications and Technology 2018

Authors and Affiliations

  1. University of Utah, Salt Lake City, USA
  2. Department of Instructional Technology and Learning Sciences, Utah State University, Logan, USA
