Abstract
Current Learning Object Repositories typically assess the quality of their resources through impressions of quality given by members of the repository community. Although this strategy can be considered effective to some extent, the number of resources inside repositories tends to grow more rapidly than the number of evaluations provided by the community, leaving many resources without any quality assessment. The present work describes the results of two experiments that automatically generate quality information about learning resources based on their intrinsic features as well as on the evaluative metadata (ratings) available about them in the MERLOT repository. Preliminary results indicate that this goal is feasible, which suggests that the method can serve as a starting point for the automatic generation of internal quality information about resources inside repositories.
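As a rough illustration only, and not the authors' actual pipeline, the sketch below shows the general shape of such an approach: a supervised classifier is trained on intrinsic features of resources, with a binary "good"/"not-good" label derived from community ratings (the "not-good" group being the union of the average and poor rating groups, as in note 2 below). The feature names, the synthetic data, and the choice of classifier are all assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation): train a binary
# "good" vs "not-good" classifier from intrinsic resource features.
# Feature names and the synthetic data below are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical intrinsic features of a learning resource:
# number of links, number of images, word count, fraction of multimedia.
X = np.column_stack([
    rng.poisson(30, n),
    rng.poisson(10, n),
    rng.normal(800, 200, n),
    rng.uniform(0, 1, n),
])

# Hypothetical binary label derived from ratings:
# 1 = "good" (highly rated), 0 = "not-good" (average or poor).
y = (X[:, 2] + 500 * X[:, 3] + rng.normal(0, 100, n) > 1050).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Report precision/recall for both quality classes on the held-out split.
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["not-good", "good"]))
```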
Notes
1. Although this limitation may affect the results, it was necessary because the process of collecting the information is extremely slow. To acquire the samples used in this study, the crawler ran uninterruptedly for two full months in 2009 and four full months in 2010.
2. The so-called not-good group was formed by the union of the average group and the poor group.
3. The difficulties of training, validating, and testing predictive models would be even more severe for subsets with fewer than 40 resources (see the sketch after these notes).
4. Only some of the models are presented in the figure.
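Regarding the concern raised in note 3, the following sketch (an assumption about common practice, not the procedure used in the chapter) illustrates one way to evaluate a predictive model on a subset with very few labelled resources: stratified cross-validation rather than a single train/test split. The data and model below are purely illustrative.

```python
# Hedged sketch: estimating classifier performance on a small subset
# (here, 35 resources) with stratified 5-fold cross-validation, which is
# less variable than a single train/test split when data are scarce.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Tiny illustrative subset: 35 resources, 4 hypothetical intrinsic features.
X_small = rng.normal(size=(35, 4))
y_small = (X_small[:, 0] + rng.normal(0, 0.5, size=35) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X_small, y_small, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```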
Acknowledgments
The work presented here has been partially funded by the European Commission through the project IGUAL, Innovation for Equality in Latin American University (www.igualproject.org; code DCIALA/19.09.01/10/21526/245-315/ALFAIII (2010)123), of the ALFA III Programme; by the Spanish Ministry of Science and Innovation through the project MAVSEL: Mining, data analysis and visualization based in social aspects of e-learning (code TIN2010-21715-C02-01); and by CYTED (Ibero-American Programme for Science, Technology and Development) as part of the project “RIURE - Ibero-American Network for the Usability of Learning Repositories” (code 513RT0471).