Introduction

Today’s undergraduate and graduate students are a product of the digital revolution. Philosophically, they seek to shape their jobs to fit their lifestyles rather than fitting their lifestyles to their jobs. Those entering the workforce seek regular feedback from a variety of sources and steadily desire greater job responsibilities and involvement in decision making. They work in diverse, globally competitive organizations that are often oriented more to knowledge management than to manufacturing management. Preparing these students for the workplace requires a recognition of new learning styles and thus demands new teaching styles.

A 2002 report by the Association to Advance Collegiate Schools of Business (AACSB) noted, “preparation for the rapid pace of business cannot be obtained from textbooks and cases, many of which are outdated before they are published” (p. 19). Further, the report called for “outward-facing curricula and experiential education [that] can create the critical intersection between classroom and business learning that keeps faculty and students connected to rapidly changing business models” (p. 20). With the publication of this report, colleges and universities began a slow but steady reorientation of the materials covered in class and the manner in which that information was presented. For many universities, experience-based learning became the focus of faculty meetings and retreats as instructors sought to define the concept and develop methods and materials that exemplified that approach.

So where does higher education stand, a decade and more after this landmark report, on the subject of experiential learning (EL) and teaching? What teaching methods are most commonly used within this realm? And, most importantly, how are the work products of experiential learning evaluated so that the effectiveness of the transfer of learning can be clearly gauged?

This chapter will be structured as follows:

First, we will define what experiential learning is and describe the different operationalizations of the pedagogy. Then, we will provide a brief overview of the literature to identify the assessment approaches commonly used to evaluate the effectiveness of experiential learning. Finally, we will conclude by discussing the appropriateness of the various experiential learning approaches and assessment tools for achieving specific learning goals and outcomes.

What Is Experiential Learning?

Experiential learning (EL) has been variably defined within the context of higher education. The earliest definition of EL can be traced to Rogers (1969) (as cited in Hoover, 1974), who defines EL as follows: “It has a quality of personal involvement—the whole-person in both his feeling and cognitive aspects being in the learning event”.

The most well-known definition of EL in the domain of management education is the one provided by Kolb (1984), who defines EL as “a holistic integrative perspective on learning that combines experience, cognition and behavior”.

However, this definition has often been oversimplified to “learning by doing” (Hoover, 1974, 2008). Building on the notion of whole-person learning and the inclusion of affective and behavioral components in addition to cognition, Hoover (1974) provides an expanded definition: “Experiential learning exists when a personally responsible participant(s) cognitively, emotionally, and behaviorally processes knowledge, skills and/or attitudes in a learning situation characterized by a high level of active involvement” (p. 35).

Thus, it appears that the involvement of the whole person and the impact on affective and behavioral components in addition to cognition are necessary conditions for a pedagogical approach to be termed experiential learning.

Experiential Learning Versus Learning from Experience

Learning from experience is what takes place in the daily context of individuals whereas experiential learning is concerned with creating the context within which learning is expected to take place (Usher & Solomon, 1999). Thus, the deliberate and formal process of creating experience with the explicit intention of learning is the focus of experiential learning. Andresen, Boud, and Cohen (2000) define the characteristics of experiential learning that make it different from other approaches. They state that for learning to be experiential, three necessary conditions must exist at some level:

  1. Involvement of the whole person

  2. Recognition and active use of all the learner’s relevant life and learning experiences

  3. Continued reflection on earlier experiences in order to add and transform them into deeper understanding

However, from an implementation point of view, EL differs from other approaches in three ways:

  1. Intentionality of design in the situation or experience to ensure learning

  2. Facilitation by an outside agent (facilitator, teacher, etc.) whose skill level may influence the outcome of learning

  3. Assessment of learning outcomes to understand what has been achieved through the EL approach

But as Gentry (1990) notes, what students achieve through experiential learning is usually a function of their perceptions and often beyond the control of the facilitator of the EL (teacher). The objective of the provider in the EL process is, therefore, to create an environment that provides the necessary stimulus for gaining the intended learning from experience.

A variety of pedagogical approaches are available to educators in business (in general) and to international business educators (in particular). These include, but are not limited to, case studies, simulations, group projects, internships, and field trips.

In the next section of this chapter, we will take up each of these pedagogical approaches and discuss how these approaches are assessed and what aspects of learning are evaluated in each technique. After a careful review of the extant work on the various assessment tools available to evaluate EL, we will discuss an approach that can be utilized to determine what methods are best suited to assess particular learning outcomes. We believe that such a framework of assessment tools and their appropriateness in assessing specific outcomes would help adopters of EL techniques to develop better evaluation metrics.

An Overview of Assessment Tools Used in Experiential Learning

In this section, we will briefly discuss the assessment tools currently and commonly used for specific experiential learning approaches in various business disciplines. While we understand that a thorough literature review and analysis of all experiential learning approaches and their assessment is most desirable, space limitations prevent us from delving too deeply into any particular EL approach. Our intention is to provide an overview of commonly used EL approaches and the most frequently used assessment tools within those approaches. Thus, an educator intending to adopt EL approaches in their pedagogical repertoire would have an idea of what common assessment approaches are available and how suitable they are to assess specific learning outcomes.

Case Studies

The case study method is defined as “a method that involves studying actual business situations—written as an in-depth presentation of a company, its market, and its strategic decisions—in order to improve a manager’s or student’s problem-solving ability” (Helms, 2006). Thus, by offering a dilemma or a problem situation to students, a case study lets students experience the same uncertainties faced by the decision maker and allows students an opportunity to develop a viable recommendation to address that dilemma by integrating and applying conceptual learning from the discipline.

The earliest recorded use of case studies for student learning can be traced to 1870 when legal cases were utilized in classrooms at Harvard Law School. This approach was then adopted by the Harvard Business School around 1920 (Klonoski, 2013). The case study, therefore, represents one of the oldest experiential learning approaches in business education. Case studies have been used across various disciplines within business including marketing, management, human resources, finance, and international business.

Lundberg, Rainsford, Shay, and Young (2001) identify nine different types of cases based on their experience of writing and teaching cases. These are—iceberg cases (focusing primarily on the application of conceptual knowledge to a situation), incident cases (enabling students to compare their decisions against generally accepted practices of the discipline), illustrative cases (illustrating the application of business practices to a scenario), head cases (examining the reasoning and actions of the principal actor in the scenario), dialogue cases (exploring interactional dynamics and the consequences of style within a situation), application cases (presenting a situation where a known technique could be applied to solve a problem), data cases (requiring students to sift through information and organize it meaningfully so that a decision can be made), issue cases (building understanding and appreciation of the context and antecedents of a scenario), and prediction cases (making predictions based on conceptual models in a given situation).

Given this typology of case studies in business, one can easily see how the assessment of case studies becomes a critical issue in evaluating student learning. Due to the diverse nature of outcomes, case studies are used to evaluate various components of student learning. According to Dixit et al. (2005), case studies aim to achieve learning in the form of an improved range of skills and attitudes in students. Depending on the type of case being used, behavioral (decision making, empathizing), conceptual (pattern recognition, abstract generalizations), and technical (use of frameworks/models, checklists) outcomes could all be possible student learning elements from a case study.

Based on past research, Sridevi (2012) identifies nine different skills that are commonly assessed using case studies: analytical, decision-making, application, oral communication, time management, interpersonal, creative, self-analysis, and written communication skills. The versatility of the learning goals of a case study, as listed by Dixit et al. (2005) and Sridevi (2012), is consistent with the varied types of cases commonly used in business and with Lundberg et al.’s (2001) aforementioned typology.

Anderson and Schiano (2014) suggest three stages in which cases could be evaluated—before class, during class, and after class. They also provide different ways to evaluate at each of those stages (oral and written formats; individual and group level). Short written assignments and prepared presentations are recommended for pre-class assessment. Class participation seems to be the most recommended approach to evaluate learning during class time. Reflective papers, examinations, and group presentations of analysis are the most suggested approaches for evaluating learning after class.

Unlike a traditional examination, however, a case study seldom has a single “right” answer, which makes evaluating learning from it difficult. Each problem can be handled in multiple ways, each potentially equally effective in addressing the problem situation. Evaluation of learning from case studies must therefore allow for multiple “correct” answers. Corey (1998) proposes using class participation (a simple 1–5 point Excellent—Unsatisfactory scale) and patterns of performance over multiple sessions or in examinations (using a case as an exam question) to evaluate students. To detail the learning outcomes more objectively, Corey (1998) suggests considering elements such as clarity of problem definition, the drawing of relevant conclusions from the case information, the design and appropriateness of quantitative analysis addressing the relevant questions, the reasoning behind recommendations, and the plan of action when evaluating student learning from cases.

Focusing on written reports, O’Keefe and Chadraba (2013) recommend standardizing student case report formats as well as the rubrics used to evaluate them. They recommend evaluating reports on three dimensions—structure, substance, and style. This standardized approach to grading helps instructors grade all students consistently. Interviews with students and instructors on the benefits of such standardized assessment reveal that it allows students to receive feedback quickly and helps them improve their recommendations and presentation in subsequent submissions. Students found the rubric easy to understand and preferred the coded feedback to long, hard-to-digest text-based feedback. For instructors, the standardized approach makes grading many reports easier and speeds turnaround. Coded feedback based on a standard rubric also helps instructors track student improvement over time as well as key areas of concern for a group of students.
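To illustrate how such a coded rubric might be operationalized in practice, the following minimal sketch (in Python) scores a hypothetical case report on the three dimensions and returns criterion codes as feedback. The dimension weights, criterion codes, and 1–5 scale are our own illustrative assumptions, not O’Keefe and Chadraba’s (2013) published instrument.

```python
# Minimal sketch of a coded case-report rubric (hypothetical codes and weights).
# Each criterion is rated 1-5; feedback is returned as short codes the
# instructor can reuse consistently across submissions.

RUBRIC = {
    "structure": {"S1": "Clear problem definition", "S2": "Logical organization"},
    "substance": {"B1": "Analysis supports conclusions", "B2": "Feasible recommendations"},
    "style":     {"T1": "Concise professional writing", "T2": "Correct citations and exhibits"},
}
WEIGHTS = {"structure": 0.3, "substance": 0.5, "style": 0.2}  # assumed weights


def grade_report(scores):
    """scores maps a criterion code (e.g. 'B1') to a 1-5 rating."""
    total, feedback_codes = 0.0, []
    for dimension, criteria in RUBRIC.items():
        ratings = [scores[code] for code in criteria]
        total += WEIGHTS[dimension] * (sum(ratings) / len(ratings)) / 5 * 100
        # Return codes (not free text) for criteria rated below 3 as quick feedback.
        feedback_codes += [code for code in criteria if scores[code] < 3]
    return round(total, 1), feedback_codes


print(grade_report({"S1": 4, "S2": 5, "B1": 3, "B2": 2, "T1": 4, "T2": 5}))
# (70.0, ['B2'])  -- the coded feedback points the student to the weak criterion
```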

Markulis (2007) suggests using presentations to evaluate learning from cases. He provides a rubric (p. 92), which likewise evaluates substance, structure, and style components of the presentation. However, he found that prepared presentations did not result in engaged discourse or in effectively conveying the application of knowledge, two critical outcomes expected from the course. A discussion with students revealed that they did not know how to engage the audience during presentations. Therefore, based on his experience, Markulis (2007) recommends using facilitated presentations to assess student learning outcomes (especially those dealing with communication and engagement skills).

Bontis, Hardie, and Serenko (2009), in their comparative study across multiple evaluation formats that predict student grades, found that multiple choice tests were the best predictor of student achievement. However, among experiential approaches, case-based exams and class participation were found to be good indicators of student grades with cases being very close to multiple choice tests in their predictive ability.

Simulations and Games

Simulations and games (terms used interchangeably here) have been in use since 3000 BC, when they were used to train field officers in battle strategy. Over time, these war games, combined with operational research, technology, and experiential educational theory, led to the early movement toward the use of business games in education. The combination of these forces created experiential environments that enabled institutions of higher education to adopt the pedagogy into their curricula. The first known use of business games in higher education was pioneered by the American Management Association in the mid-1950s (Wolfe, 1993). Thus, simulations and games represent the oldest experiential form of training and probably the second oldest experiential pedagogy in business education after case studies.

Business games can take many forms based on a variety of factors, including the number of players involved, the number of decision variables, and the amount of feedback given to players. Early business games were primarily simple, uncomplicated, and hand scored. With access to computing technology, the complexity of these games has increased exponentially. Business games may be functional games (dealing with an individual function like marketing or operations), concept games (focusing on a particular concept such as sales force management or managing promotions), or total enterprise simulations (holistically dealing with the complex management of all functions of an organization) (Faria, Hutchinson, Wellington, & Gold, 2009).

Based on their review of 40 years of publications in Simulation and Gaming, Faria et al. (2009) identify nine themes explaining why business simulations are used in education: experience, strategy, decision making, outcomes as per Bloom’s taxonomy, teamwork, motivation, theory application, involvement, and integration of ideas. Of these nine, in the last decade the top five reasons why simulations and games are used in business education, in rank order, are gaining experience, formulating strategy, outcomes as per Bloom’s taxonomy, decision-making skills, and teamwork. This is a drastic shift from the 1970s, when the primary focus of the games was to achieve learning outcomes as per Bloom’s taxonomy, whereas in the 2000s the primary reason was to provide experience (Faria et al., 2009, p. 479). The shift was also prompted by changes in accreditation standards that required institutions to demonstrate relevance and accountability of education, making experience and a focus on outcomes the key drivers for the use of simulations. Thus, simulations and games have an integral role in experiential learning pedagogy.

To realize these themes, educators implementing business games in the last decade have focused primarily on interactivity, complexity, and teamwork (Faria et al., 2009, p. 481). This has been enabled by great advances in personal computing as well as the development of the World Wide Web, which have made it possible to create interactive, complex games that can be played by teams that are either co-located or spatially separated.

Ben-Zvi (2010), in his study on the use of simulations as teaching tools, found that simulation games help students achieve higher order cognitive processes (apply, analyze, evaluate, and create) across the conceptual, procedural, and meta-cognitive knowledge dimensions. Self-reported scores from participants indicated that students found the game helped independent thinking and was intellectually challenging, contributing to the overall learning experience.

Anderson and Lawton (2009) state that the desired outcomes of using a simulation are learning outcomes (teaching vocabulary and concepts, enhancing retention of knowledge, demonstrating the difficulty of executing concepts), attitudinal outcomes (providing a common experience, engaging students, developing their attitude toward the discipline), and behavioral outcomes (applying and implementing concepts, interacting with peers, practicing and improving decision-making skills). Given the time commitment simulations demand, they are inefficient approaches for teaching factual knowledge, vocabulary, and other basic learning aspects. Games are also seen as less efficient and less effective for teaching specific concepts (such as the product life cycle or the political environment), which are better taught through targeted cases.

Based on their review of 40 years of research on simulations and other experiential learning approaches done by the Association for Business Simulation and Experiential Learning (ABSEL), Howard, Markulis, Strang, and Wixom (2006) found that about 57% of the studies reviewed assessed some combination of cognitive and affective outcomes, whereas less than 1% assessed behavioral outcomes.

Given the varied reasons for using simulations as a pedagogical tool, it is hard to develop a universal tool to assess learning from them. Most simulation assessments have focused on affective learning (Anderson & Lawton, 2009) but have found no evidence that this positive attitude toward simulations is correlated with students’ performance in the simulation. Further, Anderson and Lawton state that there has been even less success in measuring behavioral outcomes, as it has proven extremely hard to demonstrate objectively that (a) simulations have resulted in the desired cognitive outcomes and (b) these changed cognitive outcomes have resulted in changed behaviors.

In an attempt to develop a framework for assessment in simulations using the four levels of Kirkpatrick’s framework of learning, Schumann, Anderson, Scott, and Lawton (2014) note that assessing reaction (Level 1) can help evaluate the learning experience, provide suggestions for improvement, and develop benchmarks for future experiences. This is usually achieved by administering some form of satisfaction survey, but it is likely to be more effective and meaningful if conducted with a control group to address confounding factors.

Assessing learning of knowledge, skills, and attitudes (Level 2) should involve measuring a change in the component of interest that can be directly ascribed to the simulation being used. A pre- and post-assessment with a control group is the best approach for determining whether meaningful learning has been achieved through simulations. Extant research, as reviewed by Schumann et al. (2014), has only demonstrated the effectiveness of simulations for lower order learning; little assessment has addressed higher order outcomes, primarily because of a lack of proper assessment instruments. To assess behavior (Level 3), a recommended approach is a longitudinal study to determine whether learning from simulations has resulted in changed behavior in future courses. Finally, the evaluation of results (Level 4) has not been properly addressed in the simulation literature, primarily because, except for corporate training programs, there is a dearth of specificity in identifying the results to be examined. A longitudinal study involving post-matriculation outcomes (for instance, job performance, salaries, promotions, or grades in other courses) could be a good way to assess whether simulations help achieve the desired results. In each of these cases, however, the key issue is the ability to isolate the effects of the simulation; demonstrating that simulations cause the desired outcomes remains an open problem.

Gosenpud and Washbush (1996) did not find any significant relationship between performance in simulations and learning. However, given the overwhelming anecdotal evidence that simulations lead to learning, they conducted a study to identify why some students learn more than others and which variables help predict learning in simulations. Learning was measured with a pre- and post-assessment based on a test linked to the simulation, with learning computed as the difference between the post- and pre-assessment scores. Common-sense variables (how well students understood the simulation and how simple they perceived it to be early on) correlated highly with learning. When the authors correlated learning with the explicit goals set by students, they found that students expressing goals based on simulation metrics (financial and the like) learned more than students expressing goals based on grades and competition. However, the correlation between performance and learning was almost nil, as in their previous studies.
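As a concrete illustration of this kind of gain-score analysis, the short sketch below (with invented data and variable names) computes learning as the post-test score minus the pre-test score and then correlates that gain with a hypothetical “perceived simplicity” rating collected early in the simulation. It is an illustrative analysis, not a reproduction of Gosenpud and Washbush’s (1996) procedure.

```python
import numpy as np
from scipy import stats

# Illustrative pre/post test scores for eight students, plus a 1-5 self-report of
# how simple each student perceived the simulation to be early in the exercise.
pre  = np.array([52, 61, 47, 70, 58, 66, 49, 73])
post = np.array([64, 70, 55, 78, 61, 80, 60, 75])
perceived_simplicity = np.array([4, 5, 2, 4, 3, 5, 3, 2])

gain = post - pre  # "learning" operationalized as post-test minus pre-test

# Did scores improve overall? Paired t-test on post vs. pre.
t, p = stats.ttest_rel(post, pre)
print(f"mean gain = {gain.mean():.1f}, t = {t:.2f}, p = {p:.3f}")

# Does early perceived simplicity predict how much a student learned?
r, p_r = stats.pearsonr(perceived_simplicity, gain)
print(f"r(simplicity, gain) = {r:.2f}, p = {p_r:.3f}")
```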

In their review of literature assessing effectiveness of experiential learning, Gosen and Washbush (2004) state that studies concerned with simulation effectiveness have suffered criticism for lack of validity and generalizability. This stems from a lack of generally accepted theory and method to validate business simulations. Due to a lack of consistent research based on sound methodology across multiple simulations, validity of learning from simulations is questionable at best.

In reviewing 20 studies dealing with simulation assessment, Gosen and Washbush (2004) conclude that 14 studies focused on learning and 6 on attitudes. Ten of the reviewed studies used objective measures, whereas only one study focused on convergent validity. Thus, the validity of measures used to assess simulation effectiveness needs more research.

Supporting the previous views on the weak relationship between simulation performance and learning, Teach and Patel (2007) suggest that even in simulations that allow for a broad-based formula to assess performance, profit overwhelms all other explanatory variables in the model, making these broad-based assessment tools as ineffective as performance on a single metric for evaluating learning. They further assert that assessments in simulations are based on performance because of the assumption that students learn “just-in-time”: better performance on a host of measurable metrics is taken to indicate that students learned the relevant concepts, which led to the better performance. Contrary to this accepted view, Teach and Patel (2007) suggest that learning in simulations is often not “just-in-time” but “just-a-little-late”, meaning that students do not necessarily learn before a good performance result but are more likely to learn after a disappointing one. Any assessment using simulations should therefore account for this learning and not just performance. To better evaluate learning from simulations, Teach and Patel (2007) suggest two strategies: identify and select simulations based on the learning outcomes that are expected to be realized, and clearly articulate to the students what the learning objectives of the simulation are (for instance, being able to understand the relationship between advertising and price, or being able to evaluate the impact of cultural distance on the rate of product adoption).

Teach (2018) suggests that instead of focusing on profits and other performance metrics as indicators of learning, the focus should be on metrics such as measuring and analyzing forecasting errors or the ability to operate within resource constraints. He lists about 20 items that students participating in a simulation might learn (p. 57), many of which are not cognitive or concept oriented and therefore need to be assessed based on the effort students put into mastering these areas through constant practice and application.

Cadotte and MacGuire (2013) suggest the use of a combination of assessment tools—a comprehensive business plan, a stockholder report, and an executive briefing, graded using a rubric (p. 40)—to assess student learning from simulations. Given the time commitment required of students and the complexity of the simulation involved, they recommend a process of continuous feedback and assessment to ensure learning from the simulation. Using these tools, they were able to show that participants demonstrated a higher level of understanding, increasing from about 20% at the beginning of the simulation to 80% by the end. They also found that this approach to teaching and assessment increased students’ confidence in making various functional decisions as well as their team management skills.

Our discussion of assessment in simulations therefore suggests that using a more traditional assessment tool, such as a test, combined with specific goals linked to simulation performance can lead to a better assessment of learning than focusing primarily on simulation performance alone. Survey instruments that capture student experiences with the simulation and longitudinal studies dealing with the transferability of skills across courses and domains are also viable approaches to evaluating learning from simulations.

Group Projects

Group projects are one component of a broader pedagogical approach called cooperative learning. The theoretical basis for cooperative learning can be traced back to the 1930s and 1940s, when philosopher and psychologist John Dewey sought to teach students by encouraging them to be active participants in the learning process, letting them work in small groups on topics of interest to them (Sharan, 2010). Cooperative group learning promotes student achievement through the mediating elements described by Johnson and Johnson (2009).

Given that most business graduates will work in team-based environments after graduation, group projects are seen as a critical component for achieving desired program outcomes. The cooperative learning that arises from working in teams to solve problems provides an experiential context for students to learn and apply their knowledge in problem-solving situations. Group projects gain even more importance as an experiential pedagogical approach due to the emphasis placed on them by employers and accrediting bodies (Braender & Naples, 2013).

Group projects are especially preferred as an experiential approach because of their higher degree of skill transferability compared to other approaches to learning. According to Gaumer, Cotleur, and Arnone (2012), group projects bring a host of advantages to learners, including opportunities to assume and demonstrate leadership, objective-oriented task organization, task delegation based on the strengths and weaknesses of individual team members, the ability to deal with and manage conflict, the ability to gather and analyze large volumes of data while distinguishing between relevant and irrelevant information related to project goals, the ability to communicate effectively with and persuade team members, and the ability to solve problems using conceptual knowledge.

So and Brush (2008) found that collaborative learning, a key feature of group projects, resulted in increased satisfaction as well as social presence among students of a distance learning course. Schultz, Wilson, and Hess (2010), through their qualitative study with students, found five themes supporting group projects. These were better quality of deliverables, a richer and wider set of ideas, enhanced cognitive and social learning, shared (and reduced) workload, and reduced anxiety and stress due to complexity of the projects.

In assessing group project effectiveness in global virtual teams (GVTs), Taras et al. (2013) state that such collaborative group projects, in addition to the benefits stated above, also impart competencies unique to cross-cultural and global teams. These include cross-cultural communication, collaboration under temporal and spatial dispersion, and managing different work styles and team management skills in low-technology contexts. Such projects have also been shown to be especially valuable to students in developing countries, giving them a realistic sense of cross-cultural experience without the accompanying cost of international travel.

However, studies have also shown that group projects have their own set of unique challenges that sometimes make them less appealing to students. Gaumer et al. (2012) state that social loafing and uniform grading across the group irrespective of effort are some of the major limitations. Schultz et al.’s (2010) study identified three themes that would deter students from preferring group projects—grade reciprocity (dependence on peers for personal grades), social loafing and freeriding, and challenging and conflicting schedules.

Given the multitude of benefits of group projects for students, these have been widely adopted and used across higher education institutions worldwide. However, given the extensive cognitive, affective, and behavioral outcomes expected from group projects as well as their varied implementation, the tools being used to assess student learning from these group projects have also been varied.

Peer evaluation has been recommended as a key assessment tool to address student concerns of social loafing and freeriding problems (Braender & Naples, 2013; Figl, 2010; Gaumer et al., 2012; Pathak, 2001; Schultz et al., 2010; Taras et al., 2013). Other commonly used approaches to evaluate group projects involve self-reported surveys (Taras et al., 2013), oral presentations (Anderson & Schiano, 2014; Cadotte & MacGuire, 2013), written reports (Anderson & Schiano, 2014; Taras et al., 2013), and changes in perceptions and behaviors (Taras et al., 2013).

Whenever group work is evaluated using written or oral presentations, it is recommended that a clear, easy-to-understand, and objective rubric be developed and shared with students to create transparency in grading (Anderson & Schiano, 2014; Cadotte & MacGuire, 2013; Markulis, 2007).

A unique challenge in evaluating group projects, and group work in general, is that the final work product is normally a group submission, whereas evaluation and grading are performed for individual students. A fair and transparent way of assessing and allocating grades to individual students based on their specific contributions therefore becomes critical for developing and maintaining effective team environments. Peer evaluation, as discussed above, is a common way to evaluate individual performance, and it is often weighted in some manner to produce a score reflective of each member's effort and contribution.

Braender and Naples (2013) suggest that when peer evaluations are used to assign individual grades on group work, peers should evaluate one another continuously throughout the length of the project. In such a process, the initial peer evaluation should be used only as information to help individual members correct their behaviors and bring them in line with group expectations. Relying on a single end-of-project peer evaluation deprives students of intermediate feedback and of opportunities to learn and modify behavior, thereby undermining the learning process. Whenever peer evaluations are used to evaluate teamwork, care should be taken to gather evaluations of the team as a whole as well as of individual members. Evaluation of individual members provides a measure of individual competencies and performance (cognitive, affective, and behavioral), while each member's evaluation of the team helps the instructor identify conflict within the team (task, relationship, or process related) and satisfaction with the team. Measures of conflict can help the instructor identify sources of problems and address concerns before they start affecting team dynamics and, therefore, learning. Braender and Naples also suggest other strategies for the fair evaluation of individual members, such as timely completion of tasks and assignments and cross-validation through written tests and oral presentations (also see Taras et al., 2013).
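One common way of implementing the weighting mentioned above is to scale the group grade by each member's peer-assessment factor (the member's average rating divided by the team's mean rating), usually capped at the maximum attainable grade. The sketch below illustrates that generic scheme; the ratings, cap, and formula are illustrative assumptions rather than Braender and Naples's (2013) actual procedure.

```python
def individual_grades(group_grade, peer_ratings, cap=100.0):
    """peer_ratings maps each member to the ratings (e.g. 1-10) received from teammates."""
    avg = {member: sum(r) / len(r) for member, r in peer_ratings.items()}
    team_mean = sum(avg.values()) / len(avg)
    # Peer-assessment factor: each member's average rating relative to the team average.
    return {member: round(min(cap, group_grade * (a / team_mean)), 1)
            for member, a in avg.items()}


ratings = {
    "Ana":   [9, 8, 9],    # ratings received from the other three members
    "Ben":   [7, 6, 7],
    "Chloe": [9, 9, 10],
    "Dev":   [5, 6, 5],
}
# Members rated above the team average earn more than the group grade (capped at 100);
# members rated below it earn less.
print(individual_grades(group_grade=85.0, peer_ratings=ratings))
```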

The problem with the above approaches to evaluating individual performance in a group setting is that most of the measures are subjective. Tests are seen as more objective measures of learning, but performance on a test may not be a clear indication of learning through the collaboration or project unless the test is specifically designed to assess knowledge related to the group project. A more objective measure was suggested by Braender and Naples (2013), who used analytics data from their learning management system (LMS). The LMS allowed the researchers to create a team space in which each student's activity was logged, providing the number of times students logged in to work on their team project, the amount of time spent in the team space, and the nature of the work performed. Their study shows that this objective data not only correlated highly with teams' project grades but also helped address issues of social loafing, inadequate effort, and conflict within teams, allowing major team conflicts to be addressed in a timely manner to ensure smooth team performance.
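A simple version of such an analytics-based check might look like the following sketch, in which team-level activity exported from an LMS is compared with project grades and unusually low individual activity is flagged for follow-up. The export format, column names, and threshold are hypothetical; actual LMS data vary by platform.

```python
import pandas as pd

# Hypothetical LMS export: per-student activity in each team's project space.
logs = pd.DataFrame({
    "team":    ["A", "A", "B", "B", "C", "C"],
    "student": ["s1", "s2", "s3", "s4", "s5", "s6"],
    "logins":  [20, 4, 15, 14, 18, 16],
    "minutes": [400, 40, 300, 290, 350, 330],
})
grades = pd.DataFrame({"team": ["A", "B", "C"], "project_grade": [78, 90, 92]})

# Crude team-level check: does total activity track the project grade?
team_activity = logs.groupby("team", as_index=False)[["logins", "minutes"]].sum()
merged = team_activity.merge(grades, on="team")
print(merged["minutes"].corr(merged["project_grade"]))

# Flag members whose share of the team's minutes is far below an equal split.
logs["share"] = logs["minutes"] / logs.groupby("team")["minutes"].transform("sum")
team_size = logs.groupby("team")["student"].transform("count")
print(logs[logs["share"] < 0.5 / team_size])  # here, s2 surfaces for follow-up
```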

Internships

The ultimate goal for most students pursuing a college education is to find a job that offers good benefits and career growth opportunities. Historically, internships can be traced back to the eleventh- and twelfth-century guilds, where apprenticeship was used by master craftsmen to ease their work. Later, in the late nineteenth and early twentieth centuries, the field of medicine incorporated the practical experience of apprenticeship into training. Internships were more formally adopted by other disciplines and became a part of mainstream education only in the 1960s (Spradlin, 2009). Even today, the term “intern”, commonly used across disciplines, comes primarily from the field of medicine, where it describes an individual who has a degree but not yet a license to practice (Haire & Oloffson, 2009).

Internships have become common practice in most business schools at both the undergraduate and graduate levels. This is reinforced by accrediting agencies (AACSB, the Accreditation Council for Business Schools and Programs (ACBSP), etc.), which expect students to have applied knowledge and skills to business problems as a requirement for accreditation. Other common terms used to describe these experiential exercises include service learning, practicum, and apprenticeship. In this chapter, we use the term “internships” in a generic sense to include all these variations of the concept.

Internships help achieve higher order learning outcomes and are known to provide numerous other benefits, including enhanced understanding of content knowledge leading to higher academic achievement, opportunities to develop networking skills and seek gainful long-term employment, application of classroom learning to real-world problem solving by integrating theory and practice, a set of realistic expectations about the business world, improved social and communication skills, cultural awareness (both organizational and interpersonal), a more positive attitude about self, and civic engagement (Bukaliya, 2012; Celio, Durlak, & Dymnicki, 2011; Knouse & Fontenot, 2008; Simons et al., 2012; Warren, 2012).

The most common way of assessing the success of an internship program is through participant and supervisor satisfaction measured with self-reported surveys. Simons et al. (2012) used a multimethod approach combining qualitative and quantitative measures as well as student, faculty, and supervisor feedback to assess the extent of learning in a practicum experience. Their study, through a pre- and post-assessment using standardized scales, revealed that students going through the practicum experience demonstrated greater personal, civic, and professional development. The analysis of supervisor feedback indicated that students going through a practicum demonstrated greater levels of knowledge, skills, and attitudes relevant to the work profile.

Through a meta-analysis of 62 programs engaged in service learning, Celio et al. (2011) found that service learning had a significant effect on five areas of student outcomes—attitude toward self, attitude toward school and learning, civic engagement, social skills, and academic achievement. The mean effect size was highest for academic achievement, followed by social skills, demonstrating that this experiential learning approach does achieve the primary objective of the exercise.

In another meta-analysis, Warren (2012) examined 11 studies across disciplines that assessed student learning outcomes through an experimental design. His analysis indicates that both self-reported measures and objective measures (test or assignment scores) were significantly and positively related to student learning outcomes. Although self-reported measures of learning showed a larger effect on learning outcomes than objective measures, the difference was not statistically significant, suggesting that self-reported and objective measures are equally valid means of assessing internship outcomes.
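For readers less familiar with the effect-size comparisons these meta-analyses rely on, the brief sketch below computes a standardized mean difference (Cohen's d) between an internship group and a comparison group on an outcome score. The data are invented, and the pooled standard deviation shown is the conventional formula, not necessarily the estimator used in the studies cited.

```python
import numpy as np

intern     = np.array([78, 84, 80, 90, 86, 82, 88])   # outcome scores, internship group
comparison = np.array([72, 75, 80, 70, 78, 74, 76])   # outcome scores, comparison group

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# By common convention, d of about 0.2 is small, 0.5 medium, and 0.8 large.
print(f"d = {cohens_d(intern, comparison):.2f}")
```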

Using ordered probit and linear regression analyses of students in the United Kingdom participating in internships, Mandilaras (2004) found that students undertaking internships in economics demonstrated higher academic achievement. Academic achievement was measured as the class of degree awarded (for the ordered probit model) and the grades received in the final two years (for the regression model). Results were consistent across both models, indicating that internships have not only an immediate but also a far-reaching impact, as participants end up earning better marks overall or a higher class of degree. This suggests that longitudinal or long-horizon measures may also be valid approaches to assessing the outcomes of internships.

Some of the specific tools used to evaluate student learning outcomes of internships include—oral presentations on the content and experience of the internship, written reports submitted and evaluated by both faculty and internship supervisor, use of daily journals or reflection papers from students on the experience and lessons learned, a portfolio that includes an analytical and reflective description of project(s) performed, the mapping of the experience to stated outcomes and a resume (Pittenger, 2018). Surveys and feedback from employers normally focus on students’ competence on knowledge (general business concepts, functional concepts relevant to the internship experience, global issues), skills (written and oral, teamwork, leadership, time management, other technical), and attitudes (desire to learn, ethical approach to problem solving, dependability) (Pittenger, 2018; Simons et al., 2012).

With most institutions requiring their students to complete internships in the junior or senior year of the undergraduate degree, internships are often used to assess not only student learning outcomes for the particular course or experience but also assurance of learning at the program or school level for purposes of accreditation. In such cases, assessment is normally conducted by an individual or team dealing exclusively with assessment tasks, who evaluate student outcomes against the program-level desired outcomes.

Study Abroad Experiences/Field Trips

Study abroad programs trace their history to the twelfth century. Although one might argue that an even older precedent exists (Aristotle, born in Macedonia, studied in Greece), the first recorded study abroad pioneer is Emo of Friesland, who traveled from Northern Holland to study at Oxford University in 1190 (Lee, 2015). More formally, the first recorded study abroad program in the United States was initiated by the University of Delaware in 1923, when Professor Kirkbride, an instructor in modern languages, proposed sending students from the University of Delaware to France for their junior year. Eight students participated in this first year-long experience (https://www1.udel.edu/global/studyabroad/information/brief_history.html).

Study abroad is one form of academic internationalization which has taken many other forms over the years including student exchange programs, inclusion of international students, joint and double degree programs, visiting faculty/scholars, international curriculum, student clubs, international campus events, community-based partnerships and projects, collaborative research, international conferences, and seminars (Knight, 2004).

Study abroad programs have been implemented over a variety of time frames (semester-long or year-long programs, major-abroad programs, and short-term study abroad involving two to four weeks of study). Irrespective of implementation, however, all study abroad programs share a common goal: enhancing students’ cultural quotient (CQ), that is, their understanding of their own cultural values and biases. Other goals of study abroad programs include enhancing students’ knowledge, developing skills, shaping attitudes, building confidence, developing a broader world view, career development, language learning, the ability to pursue subjects or topics not available at home, and enhancing creativity (Dwyer & Peters, 2004; Kinginger, 2011; Lee, Therriault, & Linderholm, 2012; Nolan & Kurthakoti, 2017; Sachau, Brasher, & Fee, 2010).

Deardorff (2006) suggests a list of specific intercultural competencies (Table 2, pp. 249–250) that are relevant to assessing the successful internationalization of academic programs and could be used as a guideline for determining learning outcomes from a study abroad program. She further provides a list of assessment approaches commonly accepted by scholars and faculty, including case studies, interviews, self-reported surveys, observations by members of the host culture, self-reflection, and journals and narrative diaries. Speaking specifically about study abroad programs, Deardorff (2011) suggests the use of pre/post testing, program satisfaction surveys, self-perspective inventories, and direct evidence such as critical reflection papers and capstone projects. An interesting finding of Deardorff’s (2006) study is that, in spite of their simplicity of implementation, a majority of scholars do not favor pre/post assessments, as these tend to rely heavily on self-reported scores and are likely to be affected by other factors. Similarly, scholars believed that observation alone was not a good approach to assessing cultural competencies, as observations tend to be subjective. Thus, a combination of the approaches discussed above seems the best way to assess learning from a study abroad program, especially as it pertains to cultural competence.

Scholars have also used standardized scales (Earley & Mosakowski, 2004; Van Dyne, Ang, & Koh, 2008) to study the enhancement of cultural intelligence among students studying abroad and have found that students who engage in study abroad experiences, even ones as short as eight days, demonstrate significant improvement across all dimensions of cultural intelligence (Nolan & Kurthakoti, 2017).

Chieffo and Griffiths (2004) developed a 21-item scale covering four dimensions of global awareness (intercultural awareness, functional knowledge, global interdependence, and personal growth and development). Comparing scores across the 21 items for students engaged in short-term study abroad versus those who did not study abroad, they found that study abroad students demonstrated a higher level of global awareness across all four dimensions, indicating that even short study abroad programs lasting a month can significantly enhance students’ global awareness.
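To illustrate how a multi-item, multi-dimension instrument of this kind can be scored and compared across groups, the sketch below averages hypothetical item responses into four dimension scores and runs independent-samples t-tests between a study-abroad group and a comparison group. The item-to-dimension mapping and the responses are invented; this is not the published 21-item scale.

```python
import numpy as np
from scipy import stats

# Hypothetical mapping of survey item columns to the four dimensions.
DIMENSIONS = {
    "intercultural awareness": [0, 1, 2],
    "functional knowledge":    [3, 4],
    "global interdependence":  [5, 6],
    "personal growth":         [7, 8],
}

rng = np.random.default_rng(0)
abroad = rng.integers(3, 6, size=(30, 9))   # 30 study-abroad students, 9 items, 1-5 Likert
stayed = rng.integers(2, 5, size=(30, 9))   # 30 students who did not study abroad

for name, items in DIMENSIONS.items():
    a = abroad[:, items].mean(axis=1)       # per-student dimension score
    s = stayed[:, items].mean(axis=1)
    t, p = stats.ttest_ind(a, s)
    print(f"{name:25s} abroad={a.mean():.2f}  stayed={s.mean():.2f}  t={t:.2f}  p={p:.3f}")
```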

Matsumoto and Hwang (2013) evaluated ten available scales for assessing cultural intelligence and identified those with higher reliability and validity across varying conditions. Accordingly, they recommend the cultural intelligence (CQ) scale (Van Dyne et al., 2008), the intercultural adjustment potential scale (ICAPS) (Matsumoto et al., 2001), or the multicultural personality questionnaire (MPQ) (van der Zee & van Oudenhoven, 2000) as tools for assessing the multidimensional cultural intelligence construct.

Table 3.1 briefly summarizes the key components of the preceding discussion on the assessment of various experiential learning approaches, their learning outcomes, and accompanying assessment techniques.

Table 3.1 Summary of learning outcomes and assessment tools used in common EL approaches

Discussion and Conclusion

Experiential learning is an integral part of the curriculum in business and international business programs at many U.S. colleges and universities. When developing specific experiential learning scenarios for students, professors and instructors must be guided by the intended outcomes of such experiences, not by convenience or the desire to emulate competitor programs, to ensure that students are given an optimal opportunity to build a wide range of skill sets that will benefit them in the academic setting as well as in future employment. Such a process includes identifying those learning outcomes, using the outcomes to develop learning objectives, and determining assessment techniques that will measure how effectively those outcomes have been achieved.

In order to effectively assess the outcomes of experiential learning, it is critical that the learning objectives intended to be achieved by EL be specified in advance. This well-defined set of objectives will then lend itself to assessment by the instructor by the use of appropriate assessment tool(s).

Yet developing such objectives can be difficult. Philbrick, Maryott, and Magnuson (2017) note that the skills employers desire in graduating students are not always uniform or clear. Unable to find academic studies offering clear objectives to incorporate into experiential learning projects, the researchers conducted focus groups with human resource professionals to generate discipline-specific objectives (for marketing, human resources, accounting, finance, business information systems, and supply chain) for such use.

Skills desired by employers that experiential learning might focus upon can also be identified through business-based organizations such as Payscale (an online compensation and benefits firm) and through the “popular” business press such as Forbes Magazine, though one must recognize that the sample size and rigor of those sources are often unclear. In areas like accounting and actuarial science, external agencies (the American Institute of Certified Public Accountants (AICPA), the Society of Actuaries (SOA)) set specific standards, and students are expected to obtain certification in order to perform specific tasks associated with their profession. In such cases, the standards provided by these external agencies serve as guidelines for which skills are to be assessed by the EL approach (Beard, 2007).

One core difficulty in delineating experiential learning objectives is that the key skills to be assessed are often “soft” (team-based, interpersonal, and leadership elements), which requires that their definition, calibration, and assessment be carefully detailed and integrated into the chosen experiential learning approach. Careful attention to the desired EL outcomes, coupled with recognition of the student learning styles best addressed by each experiential learning process, will allow objectives to be aligned with the evaluation of learning.

David Kolb’s learning styles inventory serves as a useful tool in meeting this challenge. The inventory looks at how abstract concepts are acquired and utilized by the learner. Kolb (2014) examines two core dimensions in acquisition and utilization—the nature of conceptualization (abstract or concrete) and the nature of utilization (reflective observation or active experimentation). These two dimensions produce four learning styles (convergent, divergent, assimilative, and adaptive or accommodating).

To ensure that the intended outcomes of experiential learning scenarios are aligned with students’ learning styles, each of the EL approaches examined in this chapter can be assessed against Kolb’s four styles of learning (convergent, divergent, assimilative, and adaptive). The convergent learning style combines abstract conceptualization and active experimentation. Games and simulations fit this style quite effectively, as abstract functional concepts (such as the nature of the “best” marketing mix for an identified market) can be developed and tested through the trial and error involved in each round of a marketing simulation. Aspects of an internship, particularly one where students can see the impact of multiple attempts at manipulating elements of a particular concept, are also examples of this style.

The divergent style (concrete experience and reflective observation) is best exemplified in long-term group projects, internships, and study abroad settings, where the length of the experience allows the student to see actions, decisions, and consequences in a more holistic fashion. Two of these settings (study abroad and internships) can also address the adaptive learning style (concrete experience and active experimentation) of some students. Again, the length of these EL experiences (12–16 weeks at a time) provides a number of varied opportunities for students to take specific actions and decisions and to reflect on the consequences of those actions. It then allows for follow-up through another action or decision, with such follow-up being reflective or active in nature depending on the circumstances.

Finally, the assimilative learning style (abstract conceptualization and reflective observation) is readily matched to the case study method or to more short-term, single-topic group projects. In both cases, these experiential learning approaches require students to examine issues and make recommendations for their resolution, but in neither case are the suggestions actually implemented so that the resulting consequences can guide future behavior. Table 3.2 relates each experiential learning outcome to Kolb’s learning styles.

Table 3.2 Aligning EL outcomes and Kolb’s learning cycle

In a similar approach, Good, Boyas, and Klein (2019) propose a model to effectively create assessment tools that connect the numerous activities possible within EL with the varied learning outcomes associated with courses that use EL as a pedagogical approach. Here, they examine specific courses in two disciplines, human resources and accounting, noting the objectives of each course, the experiential learning activity undertaken, and the assessment tools used for that activity as well as whether the evaluation was a summative or formative one.

In light of these discussions, we believe that a good approach to develop assessments for experiential learning should be based on a clear understanding of the learning outcomes for the course/program as well as the specific objectives of the experiential activity.

As illustrated in Fig. 3.1, once the objectives and the student learning styles are paired with the EL approach (as described in Tables 3.1 and 3.2), the final step is to examine the method of assessment to be used to ensure that the intended learning outcomes are achieved. In examining the research on specific EL types summarized in Table 3.1, the core outcomes can be classified into three broad categories: Knowledge, skills, and attitudes.

Fig. 3.1 A model for developing assessment for experiential learning pedagogy (illustration by authors). The cycle diagram depicts learning outcomes, assessment type, and activity objectives, and their interconnections.

Knowledge acquisition is the primary focus of education in general and of college business curricula in particular. Such knowledge can be broad in orientation, covering conceptual and theoretical aspects (e.g. what are the key elements of capitalist theory and how do they differ from socialist theory?), specific in focus, covering a particular domain or function (e.g. accounting versus human resources), or cross-cultural (e.g. how does interpersonal space differ between individuals in the U.S. and Brazil?). For conceptual, theoretical, and cross-cultural knowledge acquisition, internships and study abroad experiences allow the student the time and variability that facilitate such learning. As the research shows, multiple assessment approaches, including short-term self-report journals and longer-term self-reflection and interviews, will provide evidence of the learning outcomes obtained.

For more functional or domain knowledge acquisition, case studies, games and simulations or projects are viable options to consider. Learning outcomes specific to the focus function can be best determined by written reports, oral presentations, self-report surveys, peer evaluation, and even tests and assignments. These same three approaches (case studies, games and simulations, and projects) facilitate skill acquisition (decision making, interpersonal, problem solving).

Finally, attitudes are most likely impacted by internships, projects, study abroad programs, and games and simulations. These four EL approaches tend to support decision making or action taking by students within the setting and allow the students, often because of the length of the activity, the opportunity to see the results of their actions. Such opportunity for viewing consequences is often the nexus of a change in attitude such as learning from mistakes, greater cultural sensitivity, or enhanced ethical awareness.

In conclusion, this chapter represents one of the first attempts to holistically view the various approaches commonly used to assess experiential learning outcomes and to critically examine their relative merits and limitations. We believe that this evaluation of the various assessment approaches will enable practitioners of EL to make informed decisions about the choice of tools for assessing experiential learning while taking into account student learning styles.