
Best of Two Worlds: Using Two Assessment Tools in One Course

Abstract

This paper reports on practical experiences with the two e-assessment tools AlephQ and JACK, explains their key features and sketches usage scenarios from two different universities. Using a lecture in accountancy as a concrete example, the paper then presents a successful concept for improving a lecture by introducing both e-assessment systems. Conclusions are drawn on how to improve a lecture by selecting and combining the most suitable features from different tools.



Author information

Correspondence to Michael Striewe.

Annex: Longitudinal Statistical Analysis of Results Taking the Qualitative Evolution of the Accountancy Course into Consideration

A Brief Description of the Qualitative Evolution of the Accountancy Course

The authors have a well-documented record of results, course content and delivery, and evaluation methods from 2005 to the present day. Up until the academic year 2007–2008, examination took place in one final exam, in which students were asked to record approximately 40 journal entries from one or more case studies. A journal entry is a basic representation of a business transaction in the general journal of accounts, as part of a company’s accounting system. However, it does not show the impact of the transaction on the annual accounts, which requires an additional level of understanding.
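
As a purely illustrative aside (this example is not taken from the paper or the course material), a journal entry can be thought of as a small, balanced set of debit and credit lines; deriving its impact on the annual accounts is exactly the additional understanding mentioned above. A minimal Python sketch:

```python
# Purely hypothetical illustration (not course material):
# a journal entry modelled as a small, balanced set of debit/credit lines.
journal_entry = {
    "description": "Purchase of office supplies, paid in cash",
    "lines": [
        {"account": "Office supplies expense", "debit": 100.0, "credit": 0.0},
        {"account": "Cash", "debit": 0.0, "credit": 100.0},
    ],
}

# Basic consistency check: total debits must equal total credits.
total_debit = sum(line["debit"] for line in journal_entry["lines"])
total_credit = sum(line["credit"] for line in journal_entry["lines"])
assert total_debit == total_credit
```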

The lecturing team therefore decided, from 2010 onward, to test this additional competence on the exam by including questions that required the completion of (partial) annual accounts. At the same time, the number of basic journal entries was reduced. In support, home assignments were introduced to practice the skill of completing annual accounts. Inspired by the Khan Academy and other online learning tools, the team started in 2015 with a library of video clips explaining some of the more technical entries and annual account updates, which the students could watch at their own pace. The table below shows the complete chronology.

Academic year | Format exam | Partition exam  | Journal entries per exam | Annual accounts in exam | Home assignments | Video clips
2005–2006     | Paper       | 1 final exam    | 42 | No  | No  | No
2006–2007     | Paper       | 1 final exam    | 33 | No  | No  | No
2007–2008     | Paper       | 1 final exam    | 38 | No  | No  | No
2008–2009     | Paper       | 2 partial exams | 26 | No  | No  | No
2009–2010     | Electronic  | 2 partial exams | 22 | No  | No  | No
2010–2011     | Electronic  | 2 partial exams | 25 | Yes | Yes | No
2011–2012     | Electronic  | 2 partial exams | ?  | Yes | Yes | No
2012–2013     | Electronic  | 2 partial exams | 21 | Yes | Yes | No
2013–2014     | Electronic  | 2 partial exams | 27 | Yes | Yes | No
2014–2015     | Electronic  | 2 partial exams | 20 | Yes | Yes | No
2015–2016     | Electronic  | 2 partial exams | 13 | Yes | Yes | Yes
2016–2017     | Electronic  | 2 partial exams | 19 | Yes | Yes | Yes
2017–2018     | Electronic  | 2 partial exams | 20 | Yes | Yes | Yes

Statistical Analysis of the Results: What Is the Impact of Electronic Assessment and the Use of Home Assignments?

For each academic year, the authors had access to all exam results. The June results were taken for comparison, ignoring the retake exam in September. From 2008 onward, the June result is computed as the weighted sum of two exams: an exam in January (40% of the mark) and an exam in June (60% of the mark). Zero scores are ignored, because a number of students merely present themselves at the exam for administrative reasons, and those reasons vary throughout the period. In view of the large number of exam question entries, it is highly unlikely that a genuine exam attempt would result in a score of zero.
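
For illustration, under this weighting a hypothetical student scoring 12 out of 20 in January and 10 out of 20 in June would receive 0.4 × 12 + 0.6 × 10 = 10.8 as the combined June result.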

The exam results are split into two groups:

Group Situation 1 has the following features:

  • Exams were on paper. Each exam was only corrected once.

  • There were no home assignments.

  • No use was made of a wiki or discussion board.

  • No use was made of video (knowledge clips).

Group Situation 2 has the following features:

  • Exams were electronic. The exams were corrected in several iterations; in each iteration, the corrector actively searched for alternative correct answer options among the given answers.

  • There were four home assignments.

  • A wiki and a discussion board were used to support the home assignments.

  • Videos (knowledge clips) were used to support the home assignments or seminars (starting in 2015).

Situation 1 contains the exam results of the following academic years: 2004, 2005, 2007 and 2008. The 2006 data had to be excluded because the data set was incomplete. In addition to the results of the freshman students (1st bachelor year), the 2008 data also includes the results of students in the bridging program for the Master’s degree in Organization and Management.

Situation 2 contains the exam results over the period 2010–2017. The 2009 data was excluded because, although it was the first year in which the exams were electronic, there were no home assignments yet in that year, making it difficult to assess to which category it belongs.

Under the assumption that all exams have a similar degree of difficulty, the hypothesis that education has improved is supported if the average score increases and the variance decreases. The data shows (see the F-test two-sample for variances and the t-test two-sample assuming unequal variances below) that Situation 2 is an educational improvement compared to Situation 1. The improvement in Situation 2 could therefore be explained by the introduction of new educational tools such as a learning environment and assessment tools like AlephQ and JACK.
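
As a minimal sketch of how such a comparison can be reproduced with standard tooling (this is not the authors' analysis script), the snippet below computes an F-test for equality of variances and Welch's t-test from two arrays of scores. The names scores_sit1 and scores_sit2 are hypothetical placeholders for the per-student June results of Situation 1 and Situation 2, with zero scores already removed.

```python
import numpy as np
from scipy import stats

def compare_situations(scores_sit1, scores_sit2):
    s1 = np.asarray(scores_sit1, dtype=float)
    s2 = np.asarray(scores_sit2, dtype=float)

    # F-test for equality of variances: ratio of the larger sample variance
    # to the smaller one, compared against an F distribution under H0.
    var1, var2 = s1.var(ddof=1), s2.var(ddof=1)
    if var1 >= var2:
        f_stat, df_num, df_den = var1 / var2, len(s1) - 1, len(s2) - 1
    else:
        f_stat, df_num, df_den = var2 / var1, len(s2) - 1, len(s1) - 1
    p_f_one_tail = stats.f.sf(f_stat, df_num, df_den)

    # Welch's t-test (two samples, unequal variances), corresponding to
    # "t-Test: Two-Sample Assuming Unequal Variances".
    t_stat, p_t_two_tail = stats.ttest_ind(s1, s2, equal_var=False)

    return {
        "mean_1": s1.mean(), "mean_2": s2.mean(),
        "variance_1": var1, "variance_2": var2,
        "F": f_stat, "P(F<=f) one-tail": p_f_one_tail,
        "t Stat": t_stat, "P(T<=t) two-tail": p_t_two_tail,
    }
```

Applied to the two score samples, this procedure should yield figures equivalent to those reported in the supporting data below.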

Supporting Data

F-Test Two-Sample for Variances

                    | Situation 1 | Situation 2
Mean                | 8.02004008  | 9.53185145
Variance            | 18.71137754 | 13.76857577
Observations        | 1996        | 3689
df                  | 1995        | 3688
F                   | 1.358991514 |
P(F ≤ f) one-tail   | 1.25094E−15 |
F Critical one-tail | 1.066413374 |

  1. Based on the F-Test Two-Sample for Variances we may reject the assumption of equal variance.

t-Test: Two-Sample Assuming Unequal Variances

                             | Situation 1  | Situation 2
Mean                         | 8.02004008   | 9.53185145
Variance                     | 18.71137764  | 13.76857577
Observations                 | 1996         | 3689
Hypothesized mean difference | 0            |
df                           | 3592         |
t Stat                       | −13.20534523 |
P(T ≤ t) one-tail            | 3.25111E−39  |
t Critical one-tail          | 1.645277949  |
P(T ≤ t) two-tail            | 6.50222E−39  |
t Critical two-tail          | 1.960624635  |

  1. Based on the t-Test: Two-Sample Assuming Unequal Variances we may reject the assumption of equal mean.

Situation 1 – Data description

                   | 2004     | 2005     | 2006     | 2007     | 2008
Mean               | 8.47301  | 6.78667  | 5.55769  | 8.36900  | 8.18346
Standard error     | 0.18333  | 0.20439  | 0.59053  | 0.20417  | 0.16830
Median             | 9        | 7        | 4.5      | 8        | 10
Mode               | 1        | 10       | 1        | 8        | 10
Standard deviation | 3.61589  | 3.95798  | 4.25839  | 4.36934  | 4.68213
Sample variance    | 13.07466 | 15.66560 | 18.13386 | 19.09112 | 21.92231
Kurtosis           | −0.67673 | −0.70065 | −0.30048 | −0.85753 | −1.23072
Skewness           | −0.10685 | 0.30545  | 0.78501  | 0.14507  | −0.18974
Range              | 15       | 17       | 16       | 17       | 17
Minimum            | 1        | 1        | 1        | 1        | 1
Maximum            | 16       | 18       | 17       | 18       | 18
Sum                | 3296     | 2545     | 289      | 3833     | 6334
Count              | 389      | 375      | 52       | 458      | 774

Situation 2 – Data description

                   | 2009     | 2010     | 2011     | 2012     | 2013     | 2014     | 2015     | 2016     | 2017
Mean               | 10.11472 | 9.96038  | 9.61943  | 9.24473  | 9.69196  | 8.87333  | 9.26549  | 9.70153  | 9.87958
Standard error     | 0.17019  | 0.16750  | 0.16330  | 0.16397  | 0.15819  | 0.16716  | 0.17914  | 0.18342  | 0.19756
Median             | 10       | 10       | 10       | 10       | 10       | 9        | 9        | 10       | 10
Mode               | 13       | 11       | 10       | 10       | 9        | 11       | 9        | 12       | 12
Standard deviation | 3.89201  | 3.85615  | 3.62951  | 3.56979  | 3.34834  | 3.54600  | 3.80866  | 3.92954  | 3.86126
Sample variance    | 15.14773 | 14.86988 | 13.17333 | 12.74337 | 11.21139 | 12.57412 | 14.50586 | 15.44129 | 14.90935
Kurtosis           | −0.40575 | −0.49654 | −0.52408 | −0.61191 | −0.44519 | −0.51226 | −0.51574 | −0.65130 | −0.63667
Skewness           | −0.22462 | −0.16950 | −0.01994 | −0.16840 | −0.19825 | −0.17153 | −0.14121 | −0.30497 | −0.31694
Range              | 18       | 18       | 18       | 17       | 18       | 17       | 17       | 17       | 17
Minimum            | 1        | 1        | 1        | 1        | 1        | 1        | 1        | 1        | 1
Maximum            | 19       | 19       | 19       | 18       | 19       | 18       | 18       | 18       | 18
Sum                | 5290     | 5279     | 4752     | 4382     | 4342     | 3993     | 4188     | 4453     | 3774
Count              | 523      | 530      | 494      | 474      | 448      | 450      | 452      | 459      | 382


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this paper

Deuss, R., Lippens, C., Striewe, M. (2019). Best of Two Worlds: Using Two Assessment Tools in One Course. In: Draaijer, S., Joosten-ten Brinke, D., Ras, E. (eds) Technology Enhanced Assessment. TEA 2018. Communications in Computer and Information Science, vol 1014. Springer, Cham. https://doi.org/10.1007/978-3-030-25264-9_9

  • DOI: https://doi.org/10.1007/978-3-030-25264-9_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-25263-2

  • Online ISBN: 978-3-030-25264-9
