Encyclopedia of Educational Innovation

Living Edition
| Editors: Michael A. Peters, Richard Heraud

Assessment in Entrepreneurship Education

Luke Pittaway
Living reference work entry
DOI: https://doi.org/10.1007/978-981-13-2262-4_175-1

Introduction

Assessment practice has grown in importance as a topic of interest in entrepreneurship education research over the last decade (Pittaway and Edwards 2012). Research had typically focused on pedagogy, on students' propensity and intentions to become entrepreneurs, and on typologies of entrepreneurship education, but had largely left assessment methods and approaches out of consideration. The topic of assessment practice is, however, not as straightforward as it might initially seem. Assessment is clearly an important part of academic practice, as explained by the Centre for the Study of Higher Education in Australia:

Assessment is a central element in the overall quality of teaching and learning in higher education. Well-designed assessment sets clear expectations, establishes a reasonable workload and provides opportunities for students to self-monitor, rehearse, practice and receive feedback. Assessment is an integral component of a coherent educational experience. (See www.cshe.unimelb.edu.au/assessinglearning/05/ for more information)

Many higher education systems have government agencies tasked with ensuring the quality of assessment, such as the Quality Assurance Agency (QAA) in the United Kingdom, and effective evaluation of assurance of learning is considered to be a means to protect educational standards. Indeed, assessment can be seen as part of a university’s core business, providing officially documented judgements of student performance and public qualifications. The scholarship of assessment has been defined as “…sophisticated thinking about assessment” (see Banta 2002), and it is a broad subject that includes institutional assessment, program assessment, teacher assessment, and student assessment.

Institutional assessment typically involves external agencies accrediting the entire institution with regard to the validity of its programs and assessment practices. Examples of this form include US regional accreditors, such as the Higher Learning Commission (HLC), which serves the US Midwest, and the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), which serves the US Southeast. Institutional assessment can also occur at the college and/or program level; examples include AACSB International, which accredits business schools, and the Association of MBAs (AMBA), which accredits MBA programs, mostly in Europe. In contrast, teacher assessment is focused more narrowly on methods and approaches for assessing the performance of individual teachers and often includes student evaluations, teaching portfolios, and peer observation. Student assessment, in turn, is focused on understanding the extent to which students have gained outcomes from their courses and is often designed both to encourage learning (formative assessment) and to assess alignment between intended learning objectives and actual learning outcomes (summative assessment).

Common debates exist in the wider research on assessment practice. There is, for example, a tension between publicly driven requirements to judge student performance for the purposes of public qualifications (summative assessment) and educators' concern that assessment should enable learning (formative assessment). Concern exists about the extent to which assessment practice can both assist student learning and cope with the pressures imposed by accreditation agencies to provide outcome-based, credit-based assessment techniques that assure that learning and educational impact have occurred. Much of the broader literature on assessment practice focuses on technical concerns. These include instructors' choices about the balance between formative and summative assessment; the coherence of assessment practices across programs; the role of multiple graders in the process; the role and design of assessment rubrics; the productivity of assessment strategies; and the impact and role of feedback in the assessment process. Further common discussions in the wider assessment literature concern the role of self- and peer assessment; the challenges of assessing in more innovative learning designs; and the tension between norm-referenced and criteria-based assessment practices.

Many of these wider conversations and concerns can be found in research on assessment practice in entrepreneurship education. There has, for example, been an increased focus on whether public investment in entrepreneurship education has led to appropriate impacts on society (i.e., an increase in institutional assessment), a new focus on how to assess programs, and increased study of the alignment between learning objectives and learning outcomes. The next section explores these current considerations regarding assessment practice in entrepreneurship education.

Assessment in Entrepreneurship Education

The worldwide growth of entrepreneurship education, starting in the 1970s and accelerating in the 1990s, is well documented. Today most universities have a range of classes in entrepreneurship, and many have full degree programs. Alongside this expansion in taught classes has been a growth in extracurricular offerings, such as competitions, events, and student-run clubs, as well as the formation of dedicated centers, institutes, and schools. The increasing focus on entrepreneurship is driven by changes in the nature of the labor force and the economy and a concomitant recognition that society needs to prepare young people for more flexible career paths that might include self-employment, new venture creation, and small business management. As entrepreneurship education has grown, it has also changed: it has moved from being dedicated purely to venture creation toward developing wider entrepreneurial mind-sets and competencies, and from being a purely business school-led phenomenon to one offered across the university. As entrepreneurship education practice has grown, so too has research on the subject. Reviews of the subject consistently demonstrate an expansion in the research domain but also highlight an overwhelming focus on the following topics:
(i) Pedagogy – a focus on the methods, design, and strategy associated with specific educational interventions

(ii) Propensity, intentionality, and self-efficacy – aimed at understanding the extent to which entrepreneurship education interventions change students' perceptions and behaviors in critical ways

(iii) Entrepreneurial learning – studies of how entrepreneurs learn in the context of their work and the implications this may have for the design of educational practice

(iv) Measurement and evaluation – growth in research on the extent to which entrepreneurship education creates outcomes that are sought after by educators and policy-makers

(v) Typologies and taxonomies – consideration of the different forms and philosophies guiding educational practice and typologies designed to make sense of these differences

(vi) Context and application – work that explores the different contexts and practices that might be relevant to entrepreneurship educators

Many topics in the research domain have been largely overlooked, and this is true of assessment practice (Duval-Couetil 2013), with researchers in entrepreneurship education widely noting the absence of study on the subject. Research that focuses on student evaluation and program evaluation has grown recently, but institutional assessment and teacher assessment remain largely under-researched.

Student Assessment

Research on student assessment practice focuses on the context of classes and courses and considers how individual professors apply assessment techniques to assist student learning and to assess student performance. The logic for this focus has been that if entrepreneurship education pedagogies should be innovative, then assessment practice within classes should likewise be innovative. Pittaway et al. (2009) conducted one of the early studies of this topic in entrepreneurship education, using focus groups with more than 40 educators to explore their idealized student assessment techniques for specific learning outcomes. The study identified some principles of assessment in the context of entrepreneurship education:
(i) Assessment should be valid, reliable, and consistent.

(ii) The purpose of assessment should be clearly explained.

(iii) The amount of assessment should be appropriate.

(iv) The criteria should be understandable, explicit, and transparent.

(v) Assessment should be based on an understanding of how students learn.

(vi) It should accommodate individual differences in students.

(vii) Assessment procedures should allow students to receive feedback on their learning.

(viii) Assessment should provide staff and students with opportunities to reflect on their practice and their learning.

(ix) Assessment should be an integral component of the course design.

When considering these principles, some challenges face entrepreneurship education. The first challenge concerns the principle that assessment should be based on an understanding of how students learn. This is problematic because there are different interpretations of "entrepreneurship" and "enterprise" and, consequently, differences between educators regarding what should be taught and how it should be taught. Entrepreneurial outcomes can be diverse, ranging from a focus on for-profit venture creation to wider expectations about the acquisition of an "entrepreneurial mind-set," and expectations about learning outcomes have expanded as entrepreneurship education has become a university-wide, rather than a business school, phenomenon. Indeed, the Quality Assurance Agency (QAA) in the United Kingdom (UK) has formally made this distinction, highlighting a difference between "enterprise education," which focuses on skills and mind-sets, and "entrepreneurship education," which is more narrowly concerned with venture creation. The second challenge is that educators tend to adopt conventional methods of assessment (e.g., tests, exams, essays), but these methods may not be appropriate to the entrepreneurial lifeworld and, therefore, may not be seen as authentic by those engaged in entrepreneurial practice.

Calls have thus been made to expand the range and innovativeness of student assessment so that it mirrors actual practices in the subject domain. Pitches, business plans, business models, trade shows, MVP demo days, term sheets, and one-page executive summaries, among many other methods, have been suggested as alternative approaches to assessing students in entrepreneurship classes. Research has subsequently sought to consider different forms of entrepreneurship education (e.g., about, for, through, and in) and how these might shape assessment methods (Pittaway and Edwards 2012) and has focused on encouraging educators to be more deliberate in their approach to constructive alignment (Morselli and Ajello 2016), ensuring that assessment methods properly align with intended learning outcomes. Categories of assessment practice have also been identified, such as assessment "for," "of," and "as" learning (Draycott et al. 2011), where assessment "of" learning refers to teacher-led summative practices, assessment "for" learning is formative and encourages student growth, and assessment "as" learning is more open and placed in the learner's hands. Research on student assessment has, therefore, called on educators to become much more innovative and practice-aligned in their approaches and has suggested that they consider the different forms of entrepreneurship education and how their practices align with the form in which they are engaged.

Program Assessment

Program assessment involves the overall assessment of a program: the extent to which it achieves its wider objectives and the extent to which this is validated with external stakeholders. The idea is that assessment is most useful when it is focused on programs rather than individual students, because program assessment allows educators to see whether a program makes sense in its entirety. The growth of entrepreneurship education has led to calls for more systematic assessment of the impact of entrepreneurship programs. In some countries, significant public investment in the establishment of programs has contributed to these calls. Indeed, as entrepreneurship education has moved from offering individual classes to offering bachelor, certificate, and master's programs, calls for more focus on program assessment have grown. There have, however, been relatively few studies on this topic, and while it is a topic of concern among educators, there is little evidence of resources being dedicated to advancing program assessment efforts in practice (Duval-Couetil 2013). Key considerations include: What should graduates acquire from a program? Has this been achieved? How can it be improved? And does this map onto the expectations of external stakeholders, for example, donors, entrepreneurs, and policy-makers? Duval-Couetil (2013) provides six general steps to assure effective program assessment and builds these into a model that can be used by administrators and leaders of entrepreneurship programs. At present a great deal of work needs to be carried out in this area to overcome the following issues:
(i) The body of knowledge in entrepreneurship is not generally agreed upon, so there is significant diversity concerning what content should be included, and this often varies considerably between institutions.

(ii) Heterogeneity, while a positive force in the growth of entrepreneurship education, is simultaneously a weakness because it limits the extent to which program assessment can be applied across contexts.

(iii) The emphasis of entrepreneurship education on practice and experiential learning leads to a high level of involvement of nonacademic practitioners, which can create challenges for traditional learning design, such as the inclusion of learning objectives, constructive alignment in assessment practices, and the use of measurable outcomes.

(iv) Programs and institutions often assume that venture creation among students and economic development outputs, rather than student learning outcomes, should be used as measures of impact.

Assessment of programs, therefore, remains a critical topic in the subject, with much future study required.

Institutional Assessment

Due to the growth of entrepreneurship programs noted above and the level of investment involved, some countries, for example in the European Union (EU), have developed national frameworks and institutions for monitoring outcomes. Probably the best example is the European Qualifications Framework (EQF), which is used to assess learning outcomes in entrepreneurship education alongside other skills. Likewise, the QAA in the UK has created specific benchmarks to gauge the success of entrepreneurship education in English universities. Scandinavian countries in particular have become much more focused on understanding the impact that entrepreneurship programs have on students, but they are also concerned with impacts on the economy and society more generally (Morselli and Ajello 2016). The few studies that have been carried out to this point do demonstrate impact. Educational impact in entrepreneurship education has been observed at the individual level, with European Commission studies demonstrating improved career ambitions, greater employability, and improved entrepreneurial competence. Broader impacts on actual venture creation, increased entrepreneurial self-efficacy, intentionality, and economic growth have, however, shown mixed results. Certainly, studies have shown improvements in intent to start a business and growth in self-confidence resulting from entrepreneurship education, but there have also been contradictory studies with contrasting results. Likewise, the impact of entrepreneurship education on actual venture creation among students has been questioned, both as an objective and as an actual outcome. It remains difficult, therefore, to argue that there is a relationship between entrepreneurship education and wider impacts on economic development as more people become entrepreneurs, which is sometimes the intended outcome of policy interventions supporting such education.

Conversations at conferences have begun to focus on the need to create more "entrepreneurship education accreditation," and there are fairly strong views on the subject. On one side, many argue that entrepreneurship education has been dynamic, constantly moving and adapting, and inherently multidisciplinary; the view here is that excessive accreditation may dampen this spirit of innovation. On the other side, the sheer growth, expansion, and diversity of what is described as entrepreneurship education have some concerned that the essence of what it "is" may be lost; here the argument for more accreditation is that it will ensure some definitional clarity and coherence around expected educational outcomes. As more centers, institutes, and schools of entrepreneurship are founded, accreditation and assurance-of-learning pressures will likely increase, and so this remains an important topic for research and debate.

Teacher Assessment

The subject of teacher assessment remains an important topic in the wider research on assessment practice in education. Teacher assessment research tends to focus on the best ways to assess, understand, and improve teacher performance in the classroom. The subject has so far been largely neglected within the domain of entrepreneurship education and is ripe for more detailed study.

Conclusion

As entrepreneurship education has expanded, research on the topic has grown. Prior reviews of entrepreneurship education consistently show that the topic of assessment practice has been under-researched. In the last decade or so, a few studies have emerged, and the wider field of research on assessment practice in education is available to researchers and educators. Despite this, the evidence base remains thin. Studies have shown that student assessment practice has remained rather traditional and has not tracked innovations in pedagogy particularly well, and that there is often poor alignment between intended learning outcomes and the assessment techniques used. In addition, current research shows that understanding assessment practice in entrepreneurship education starts with understanding the form of educational practice and the underlying philosophy of the approach taken by the instructor.

Program assessment likewise shows that questions of impact are becoming more important for entrepreneurship educators. The current state of knowledge does demonstrate valuable student learning outcomes and impact, but these outcomes do not always meet the expectations of policy-makers, who sometimes expect increased venture creation among students as a key outcome of programs. The topics of institutional assessment and teacher assessment show that there is much scope for future research in this field and many opportunities for aspiring researchers.

References

1. Banta, T. W. (2002). Building a scholarship of assessment. San Francisco: Jossey-Bass/Wiley.
2. Draycott, M. C., Rae, D., & Vause, K. (2011). The assessment of enterprise education in the secondary education sector: A new approach? Education and Training, 53(8/9), 673–691.
3. Duval-Couetil, N. (2013). Assessing the impact of entrepreneurship education programs: Challenges and approaches. Journal of Small Business Management, 51(3), 394–409.
4. Morselli, D., & Ajello, A. (2016). Assessing the sense of initiative and entrepreneurship in vocational students using the European qualification framework. Education and Training, 58(7/8), 797–814.
5. Pittaway, L., & Edwards, C. (2012). Assessment: Examining practice in entrepreneurship education. Education and Training, 54(8/9), 778–800.
6. Pittaway, L., Hannon, P., Gibb, A., & Thompson, J. (2009). Assessment in enterprise education. International Journal of Entrepreneurial Behaviour and Research, 15(1), 71–93.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

Department of Management, Ohio University, Athens, USA

Section editors and affiliations

Michael Breum Ramsgaard
  1. VIA University College, Aarhus, Denmark
  2. Research and Development Centre for Innovation & Entrepreneurship, VIA University College, Aarhus N, Denmark