
1 Introduction

This paper examines the assessment design process and how it can be supported for teaching practitioners in higher education who are developing blended learning materials. Furthermore, the paper presents the design and evaluation of a decision support tool which was developed to support such needs. It is based on findings from the EQUAL Project [1], in which Indian and European universities worked in partnership to analyse the needs of students and lecturers, identify common attributes across the partner universities and subject areas, develop design processes, and implement electronic learning materials for use in a range of blended learning scenarios.

In order to support the development of these learning materials, a series of face-to-face and online meetings between the partners on the EQUAL Project, including academics, students and learning technologists, explored theoretical concepts, ongoing design ideas and practical considerations. In this paper, we first present our analysis of the current state of the art in assessment design and practice in relation to the needs of the project partners, as well as significant theoretical and technological considerations. We then outline the design considerations identified as being essential for this project, followed by a presentation of the implementation of the tool. The results of our initial evaluation of the tool are then presented, followed by conclusions and implications for future development among the partnership and for higher education more broadly.

2 Background and Literature Review

Terms such as formative assessment and summative assessment are now in common use in education, but our experience suggests that these and related terms have varying meanings in different contexts and confusion remains; therefore, establishing mutual understanding was a necessary first step in our project meetings. The term assessment itself is relatively unambiguous in that it refers to the checking of someone’s knowledge, understanding, skills or capabilities. However, this same term is applied both to the process of checking and to the outcome of this process. Furthermore, in some educational circles, the term evaluation is used in place of assessment. On the EQUAL Project and in this paper we have used evaluation as a broader term, relating to understanding the overall successes and failures of a course or programme rather than to assessing students.

Four perspectives on assessment have been identified as shown in the model presented in Table 1 [2]. This model distinguishes between a focus on the assessment process and on the results of the assessment. Furthermore, the model refers to “Assessment FOR learning” and “Assessment OF learning” [3]; terms which have been used in various educational circles in place of formative assessment and summative assessment respectively in order to emphasise the purpose of assessment and its relationship with learning. Referring to Table 1, Perspective 1 is about students learning from feedback discussions and information provided during an assessment process. Perspective 2 focuses on using results of assessment for adapting teaching and learning processes. The third perspective is about the extent to which students understand the assessment process and are able and willing to engage with it. This perspective reminds us that ensuring that our assessments are accurate reflections of students’ achievements is by no means straightforward. This need to understand what will be assessed and how the assessments will be conducted becomes particularly significant when students are learning not only from the materials and teaching sessions that lecturers provide, but from a broad range of online opportunities not necessarily recommended by the lecturers, including for example MOOCs. Perspectives 1, 2 and 3 are all key elements of formative assessment, which by definition supports students’ learning and may be carried out by teachers, peers and/or students on themselves (self-assessment). Perspective 3 is important both for formative and summative assessment because, in order to generate valid assessment information, students need to understand the assessment process and engage with it. Perspective 4 is about making summative judgements for purposes of grading and accreditation. Clearly such judgements are important and necessary at transition points between elements of a programme of study and at the end. However, evidence suggests that students fail to attend to feedback comments when given grades [4], and hence overuse of summative judgements can be deleterious to students’ learning.

Table 1. Four ways to think about assessment [2]

When designing assessments, in addition to considering these four perspectives it is necessary to consider who or what is conducting and/or managing the assessment: students themselves, their peers, the teacher, or a computer in an automated system. Self-assessment might be described as the gold standard of formative assessment. Students who are able to self-assess have the potential to become independent learners and to learn efficiently from the wide range of opportunities available, including online materials and activities. The ability to self-assess is also necessary for self-regulated learning (SRL), a psychological construct which has been given much attention in recent years. SRL refers to an active, constructive process in which students intentionally set learning goals and then plan, monitor and regulate their cognitive, behavioural, emotional and motivational processes in the service of those goals in order to achieve optimal learning (Pintrich 2004). The evidence suggests that one of the best ways of developing students’ ability to self-assess is through peer assessment [see for example 5].

The process of peer assessment involves students assessing each other’s work against specified criteria and providing feedback to each other. For this to be a formative assessment process, the feedback needs to focus on what the student has achieved and what they should do to improve their work, together with some ideas about how to go about this improvement [5]. In formative feedback, dialogue forms the mechanism by which the learner monitors, identifies and then is able to ‘bridge’ the gap in the learning process [see for example 6, 7]. Therefore, effective peer assessment processes become dialogic processes between students. Just as with self-assessment discussed above, a close relationship exists between good quality peer assessment processes and self-regulated learning. There is also a developing body of research in support of peer assessment as a summative assessment process. There is evidence that in some fields peer assessment is just as reliable as tutor assessment [8]. However, we believe that this does depend on the particular discipline, and it is a practice that may meet resistance from both tutors and students. Specifically, in some of our partner universities there is a prevailing culture in which the expectation is for teachers to teach and provide feedback.

More generally, assessment practices in higher education have been changing and diversifying for some years. Currently new approaches are emerging based on developments in new technologies, which are increasing the range of possibilities for assessments, including increasing opportunities for personalisation of assessments [9] and the capability for assessment to measure a broader range of knowledge and knowledge-in-action [10]. For example, students can be assessed through simulations, e-portfolios and interactive games [11] rather than end-of-term exams and essays. The evidence is compelling that the nature and form of assessment have a significant impact upon the student learning experience, approaches to learning, motivation, and retention rates [12].

In higher education, the nature of an institution often dictates how assessment practices have developed. For example, open and distance learning environments have emphasised the necessity for formative assessment practices. Distance education in general has been proactive in formative assessment practices out of the need to find ways to provide systematic feedback and direction to students in the absence of the immediate contact and interaction that students enjoy with tutors in a campus setting [13]. However, in both types of environments, the impact of assessment on learning can be moderated by the use of appropriate assessment methods by teaching practitioners, and practices have been supported and complemented by the use of computer-assisted learning resources [14].

Computer-assisted assessment (most commonly in the form of automated quizzes or online objective tests) has been used by tutors in our partnership institutions. For instance, within a virtual learning environment (VLE) such assessments may be used to monitor student understanding and progression. Text-based discussion fora (mainly asynchronous) have also been used for self- and peer-assessment purposes.

Online assessments require good Internet connections, and discussions with some partner institutions revealed that infrastructure issues in India may render online assessments unfeasible in the short term. Therefore, consideration needed to be given to computer-based assessments that could be delivered off-line or within a local intranet. However, our expectation was that such technical problems would be resolved within a reasonable timescale, and therefore institutions also need to look ahead to consider future options.

For assessment practices to be effective in relation to the four perspectives outlined above, their place in the overall pedagogical design needs to be clear. Our view, in line with Black’s [7] five-stage model of assessment in pedagogy, is that assessment considerations and actions need to be integrated in all aspects of pedagogy so that there is a match between the aims and the specific learning outcomes, the activities to support the aims, and the methods of assessment. In particular, in relation to designing online materials, the design of assessment must be incorporated from the initial stages of the design process, just as, when a teacher is planning a lesson, the learning outcomes, activities and assessments need to be designed to be closely aligned [7]. These decisions include the purposes of the assessment and consideration of whether the assessment is a self, peer, teacher or automated process, as well as what knowledge and skills are to be assessed.

Designing assessment therefore needed a shared framework that would support our thinking and discussion in relation to specifying learning outcomes (LOs) and designing learning activities and assessments. Bloom’s taxonomy of educational objectives [15] was well known and respected by all our partner institutions. Bloom’s taxonomy was originally developed to facilitate sharing of test items between university faculties. More recently a revised version was developed [16] to take account of advances in cognitive psychology and other developments since the original taxonomy was published. In our view the revised Bloom’s taxonomy, while it does have some limitations, provides a useful framework for considering learning objectives and how to assess them.

Whereas Bloom’s original taxonomy is arranged as a one-dimensional hierarchy with a built-in expectation of progression between levels, the revised framework is two-dimensional. There is still an indication of a hierarchy, but it is acknowledged that categories overlap and the constraint of the “cumulative hierarchy” has been removed [17]. The taxonomy is generally represented as a table (see Table 2).

Table 2. The taxonomy table [16]

The intention is that any learning objective can be characterised in terms of both knowledge and cognitive processes and thus can be categorised into one of the cells in the table. Using the table to examine alignment between learning objectives, instructional activities and assessments is a key aim of the development of the taxonomy [17].
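To make this alignment check concrete, the following sketch represents the two dimensions of the revised taxonomy as a simple data structure and tests whether a learning objective, an instructional activity and an assessment task occupy the same cell. The dimension labels follow the revised taxonomy [16], but the type and function names are our own illustrative choices and are not part of any published implementation of the framework or of the tool described later.

```typescript
// Illustrative sketch only: the dimension labels follow the revised Bloom's
// taxonomy; the types and the alignment check are our own, not the tool's code.

type KnowledgeDimension = "Factual" | "Conceptual" | "Procedural" | "Metacognitive";
type CognitiveProcess = "Remember" | "Understand" | "Apply" | "Analyse" | "Evaluate" | "Create";

// A cell of the taxonomy table: one knowledge type crossed with one cognitive process.
interface TaxonomyCell {
  knowledge: KnowledgeDimension;
  process: CognitiveProcess;
}

// A learning objective, an instructional activity and an assessment can each be
// placed in a cell; alignment here means all three occupy the same cell.
function isAligned(objective: TaxonomyCell, activity: TaxonomyCell, assessment: TaxonomyCell): boolean {
  const sameCell = (a: TaxonomyCell, b: TaxonomyCell) =>
    a.knowledge === b.knowledge && a.process === b.process;
  return sameCell(objective, activity) && sameCell(objective, assessment);
}

// Hypothetical example: an objective and activity about applying a procedure,
// assessed only by an exam question testing recall of facts.
const objective: TaxonomyCell = { knowledge: "Procedural", process: "Apply" };
const activity: TaxonomyCell = { knowledge: "Procedural", process: "Apply" };
const examQuestion: TaxonomyCell = { knowledge: "Factual", process: "Remember" };

console.log(isAligned(objective, activity, examQuestion)); // false: the exam only tests recall
```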

3 Design Considerations and Decisions

Our literature review and the ongoing discussions of important theoretical and practical considerations presented above indicated that decision-making regarding assessments is complex. Nevertheless, we were moving towards a framework to support our thinking and development. The framework incorporated the considerations for designing learning outcomes together with the technical considerations and opportunities for implementation in online environments, with specific reference to Moodle tools, as Moodle was the platform in use across our partner institutions. We could have presented the framework as a text-based report, but that was less likely to be used by teachers in our institutions than a more focused and practical online tool. Such a tool could also have been implemented as a simple checklist, but that would be less helpful to teaching practitioners and teaching teams in thinking about the consequences of their decisions. We expected that the tool might be useful beyond the project, although in the first instance the specific context of the project was the priority for this initiative. A tool to support such decision making must take a simple, transparent approach and make users aware of its limitations. The tool was not intended to provide definitive advice but rather to support the decision-making process and the professional development of teachers within the partner institutions. This support should be provided during the use of the tool and as a summary at the end of the teacher’s consultation with the tool.

The tool (accessible at: https://keats.kcl.ac.uk/course/view.php?id=34569) was designed to help the user create an assessment plan by asking a series of questions and highlighting the implications of their choices. The user begins with a Learning Outcome and proceeds by answering a series of questions concerning the students, the context in which the assessment takes place, the nature of the knowledge and processes that are being assessed, and the type of assessment, including peer and self-assessment. With each question the tool also presents potential implications of the choices, based on the theoretical and practical issues outlined earlier, including the revised Bloom’s taxonomy as applied to intended learning outcomes. Upon answering the questions the user is presented with some recommendations about methods of assessment they may want to consider, together with some potential ways to implement them using authoring software and make them available in a virtual learning environment (e.g. Moodle, as used in the EQUAL Project; see Fig. 1).

Fig. 1. Screenshots of the assessment decision support tool: (a) the user checking the implications of selecting “electronic off-line delivery” and (b) suggestions for assessments and ways of implementing them after the user has entered their answers.

The user can export all the recommendations and answers they gave into a PDF for reference.
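As an illustration of the kind of question-and-implication flow described above, the sketch below models a consultation as a sequence of questions whose options each carry an implication, ending with a summary the user could export. The question wording, option labels and summary text are hypothetical placeholders and are not taken from the tool itself.

```typescript
// Minimal sketch of the question-and-implication flow described above.
// All question texts, option names and summary wording are illustrative
// placeholders, not the tool's actual content.

interface Option {
  label: string;
  implication: string; // shown to the user as soon as the option is selected
}

interface Question {
  prompt: string;
  options: Option[];
}

const questions: Question[] = [
  {
    prompt: "How will the assessment be delivered?",
    options: [
      { label: "Online", implication: "Requires a reliable Internet connection for all students." },
      { label: "Electronic off-line delivery", implication: "Content must be distributed and results collected without a live connection." },
    ],
  },
  {
    prompt: "Who conducts the assessment?",
    options: [
      { label: "Peer", implication: "Students need clear criteria and guidance on giving feedback." },
      { label: "Automated", implication: "Best suited to objective items such as quizzes." },
    ],
  },
];

// Walk through the questions, echoing the implication of each choice,
// and finish with a simple summary the user could keep for reference.
function runConsultation(answers: number[]): string[] {
  const summary: string[] = [];
  questions.forEach((q, i) => {
    const chosen = q.options[answers[i]];
    summary.push(`${q.prompt} -> ${chosen.label}: ${chosen.implication}`);
  });
  summary.push("Suggested next step: review the recommended assessment methods for these choices.");
  return summary;
}

console.log(runConsultation([1, 0]).join("\n"));
```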

4 Evaluation of the Decision Support Tool

The evaluation focused on: (1) examining how well the tool matched the expectations of the project members, and (2) gathering feedback to inform future updates. Data were gathered through a survey administered via SurveyMonkey and a focus group discussion held in a workshop setting. Each of the partner institutions was asked questions concerning their expectations and experience of using the tool (using a combination of Likert-scale and open-ended questions). Those who had used the tool to design assessments were additionally asked how they had used it and were asked to provide links to examples. The workshop was part of a project meeting, and all the partner institutions were represented. During the workshop the tool was demonstrated and then participants were invited to use it. The rationale for this approach was to also include participants who had not had a chance to try the tool beforehand. The discussion was captured as a series of statements on a flipchart visible to all participants.

The data collected from both the survey and the discussion were analysed. These comprised quantitative data from the survey and qualitative data from the survey and the focus group discussion. The qualitative data were coded and themes were identified.

The tool workshop was attended by 16 members who participated in the focus group, and in addition there were 7 participants’ responses to the questionnaire. Responses indicated that the tool met users’ expectations, typically by “guiding the thought process preceding the creation of assessment situations”. Focus group results also suggested a need to link the various LOs together, as the following comment illustrates: “Tool appears linear. It makes it difficult to link various assessments together.” This suggests respondents wanted a way to think about assessment for a module or programme as a whole. There were various suggestions on how to achieve that, such as: tying individual assessments to outcomes at the programme level, assessing a single LO in multiple ways, highlighting dependencies between LOs, etc.

It was also suggested that, apart from helping to situate the assessments in a wider context, the tool could also provide “more granular advice for certain assessment approaches”. In other words, some participants wanted specific advice and suggestions on practical implementation, for example how to test mathematics concepts, critical reading, “practical projects and assessment of field work”, etc. Both these comments and previous points about linearity are essentially about expanding the scope of the tool.

It was suggested that the tool was aimed at practitioners with a certain amount of knowledge and experience, rather than new lecturers, and it might be “more useful for people with some theoretical background (knowledge)”. In the tool design, there is indeed an assumption that the user is familiar with certain concepts such as the distinction between formative and summative assessment, Bloom’s taxonomy etc. A novice educator may not necessarily be familiar with all of the necessary background theory to use the tool effectively.

Currently the decision model used by the tool is quite straightforward in design. The final recommendation provided by the tool is influenced by the two final questions that the user is asked. The other questions are asked to help the user think about the various aspects of assessment, rather than to provide direct advice. The data suggested that the tool algorithm could be expanded to provide such advice in the form of “critical comments” and engage the user more in dialogue, for example by “warning that particular combinations [of choices] might not be optimal [for successful assessment design]”. However, expanding this algorithm would make the tool more sophisticated at the expense of diminishing its well-received simplicity, by increasing the complexity of design and implementation.
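A minimal sketch of this kind of decision model, assuming (as described above) that only the two final answers drive the recommendation, is shown below; it also indicates where the suggested “critical comments” could be attached. The answer keys, recommendation texts and warning rules are illustrative assumptions, not the tool’s actual logic.

```typescript
// Sketch of the simple decision model described above: only the answers to the
// two final questions determine the recommendation, while earlier answers serve
// as prompts for reflection. Keys and texts are illustrative, not from the tool.

type Answers = Record<string, string>;

// Recommendations looked up from the two answers that drive the output.
const recommendationTable: Record<string, string> = {
  "peer|online": "Consider a Moodle Workshop activity with explicit marking criteria.",
  "automated|online": "Consider an online quiz with automated feedback.",
  "automated|off-line": "Consider a quiz authored for off-line delivery on a local intranet.",
};

function recommend(answers: Answers): string {
  const key = `${answers["assessor"]}|${answers["delivery"]}`;
  return recommendationTable[key] ?? "No specific recommendation; review your choices against the earlier implications.";
}

// A possible extension, as suggested in the evaluation: flag combinations of
// choices that may not be optimal ("critical comments").
function criticalComments(answers: Answers): string[] {
  const warnings: string[] = [];
  if (answers["assessor"] === "automated" && answers["process"] === "Create") {
    warnings.push("Automated marking may struggle with open-ended, creative tasks.");
  }
  return warnings;
}

const answers: Answers = { assessor: "automated", delivery: "online", process: "Create" };
console.log(recommend(answers));
console.log(criticalComments(answers));
```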

Finally, there were a few comments on the presentation and structure of the PDF output document, and minor suggestions for navigation improvements. During the workshop the tool did not work for some people who were using a particular browser, Internet Explorer, so testing across more browsers would be beneficial, as would regular updates on which browsers are supported.

5 Discussion

In this section, we discuss the potential value and feasibility of implementing the suggestions from our evaluation results. Firstly, in terms of the target audience and user perception of the value of the tool, our data suggest that users thought the current version of the tool was most useful to an experienced teaching practitioner who is familiar with educational theory; they further commented that “targeting people with lesser understanding of underlying educational theories might be more useful”. This suggests that more needs to be done to introduce and explain the different concepts used, not necessarily within the tool itself but perhaps by alerting the user to the kind of (prerequisite) ideas they need to be familiar with before using the tool, and pointing them to sources they can use to learn about them.

Currently the tool can be used as a resource for those training other teachers in an academic development setting. Working collaboratively with the tool and learning from peers in a face-to-face setting seemed to be very effective in the workshops we led, where the data were collected. This approach was consistent with the overall focus of the EQUAL Project, which was to support the development of communities of practice [18] using blended learning wherever possible rather than entirely online learning. Learning design challenges for providing a stand-alone tool, which could be used for individual online learning, include ways of simulating the kind of situated learning in which communities of practitioners converge in and around authentic practice, which is recognised to support effective professional learning [19, 20]. Bell and Morris [19] addressed these issues in the design of their online resources by providing a range of video clips of examples of practice taken from real contexts and with real practitioners. In the EQUAL Project we had already made a start on identifying examples of good practice across the project by developing a series of vignettes of assessment design practices, and these might become the basis for a more comprehensive resource.

Consistent with the approach of Bell and Morris was a clear indication that the participants would like the tool to include more of the context in which assessment is developed. A few specific ideas that illustrated this were:

  • relating individual LOs for sessions with overall module and programme LOs;

  • highlighting dependencies between LOs;

  • assessing one LO in different ways, etc.

On the other hand, more granular advice about implementation of assessment beyond existing guidance was also strongly suggested. There were also recommendations for including more critical feedback, such as highlighting to the user that their combination of choices was not optimal, identifying common prior conceptions, or providing suggestions for improvement based on best practice.

6 Conclusion

Overall, users were satisfied with the tool, as the data show, and were positive about using it in designing assessment during the project. The fact that respondents predominantly focused on discussing enhancements and how the tool might evolve, rather than correcting or criticising its shortcomings, is a strong indication that the tool was well received. The survey results also suggest that the tool has achieved the purpose it was designed for. Furthermore, as the sustainability of the project deliverables (to which the tool belongs) is a major concern of the participants, the suggestion that the tool could be relevant and useful for educators beyond the project and become part of the project’s legacy is a very positive reflection of its value.

Beyond the educational and design considerations discussed above, all of the recommended changes would require a fundamental redesign of the underlying algorithm of the tool. Additionally, the software used to develop the tool was not designed to handle this order of complexity, so migrating to a different development environment such as JavaScript might be required. As such, a complete redevelopment might be the only way to incorporate the design recommendations, the feasibility of which would have to be carefully evaluated.