Lessons from the US Experiences: Measuring Learning Outcomes and the Role of JCIRP for Assessing Student Learning


Abstract

This chapter first describes how we have tackled the development of student survey systems in Japan, which problems need attention in assessing learning outcomes, and what the next stage of our survey system's development should address. Second, using the Japanese Freshman Survey (JFS) 2008 data, it illustrates a method for exploring the complex structure of college student data: a multilevel model analysis of the JFS2008 data. Multilevel modeling is also known as mixed-effects modeling or hierarchical linear modeling (HLM). It applies to nested or hierarchical data in which individuals belong to various types of groups; in this case, college students belong to different gender groups, different school-year groups, various colleges at different levels, and so on. Using multilevel model analysis, this study shows that college student satisfaction is explained more by personal factors than by college factors.
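
As a rough illustration of the modeling approach described above, the sketch below fits a random-intercept mixed-effects model with students nested within colleges, using Python and statsmodels. The data, variable names (college_id, female, study_hours, satisfaction), and effect sizes are hypothetical placeholders rather than the JFS2008 variables; the sketch only shows how student-level predictors and a college-level random intercept are specified for nested data.

    # Minimal sketch of a multilevel (mixed-effects) model: students (level 1)
    # nested within colleges (level 2), with satisfaction regressed on
    # student-level predictors plus a random intercept per college.
    # All variables and values are illustrative placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_colleges, n_per_college = 30, 50
    n = n_colleges * n_per_college

    # Synthetic nested data: a random intercept for each college plus
    # student-level effects of gender and weekly study hours.
    college_id = np.repeat(np.arange(n_colleges), n_per_college)
    college_effect = rng.normal(0, 0.3, n_colleges)[college_id]  # level-2 variation
    female = rng.integers(0, 2, n)
    study_hours = rng.normal(10, 3, n)
    satisfaction = (3.0 + 0.2 * female + 0.05 * study_hours
                    + college_effect + rng.normal(0, 1.0, n))    # level-1 variation

    df = pd.DataFrame({"college_id": college_id, "female": female,
                       "study_hours": study_hours, "satisfaction": satisfaction})

    # Random-intercept model: fixed effects for the student-level predictors,
    # grouping (random intercept) by college.
    model = smf.mixedlm("satisfaction ~ female + study_hours",
                        data=df, groups=df["college_id"])
    result = model.fit()
    print(result.summary())

    # Intraclass correlation: the share of satisfaction variance that lies
    # between colleges rather than between individual students.
    icc = result.cov_re.iloc[0, 0] / (result.cov_re.iloc[0, 0] + result.scale)
    print("Intraclass correlation:", round(float(icc), 3))

A small intraclass correlation in such a model is the kind of evidence behind the conclusion that satisfaction varies mainly with personal factors rather than with the college attended.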


Notes

  1. More standardized tests also exist in other fields in Japan. For example, the Common Achievement Tests Organization (CATO) began developing standardized tests in medicine and dentistry in 2002 and started administering them in 2006. The tests consist of two parts: computer-based testing of fundamental knowledge and an objective structured clinical examination measuring fundamental skills and attitudes. The field of veterinary medicine has also introduced a standardized test similar to the Computer Based Testing (CBT) and Objective Structured Clinical Examination (OSCE) used by CATO.

  2. The classification of direct and indirect assessments is based on previous studies as well as my own student self-survey data.

  3. Astin (1993a, pp. 291–305) originally explains this concept in "What matters in college?"

  4. Anaya (1999, pp. 499–526) explains this notion in "College Impact on Student Learning: Comparing the Use of Self-Reported Gains, Standardized Test Scores, and College Grades."

  5. MAPP stands for the Measure of Academic Proficiency and Progress, and CAAP for the Collegiate Assessment of Academic Proficiency.

  6. Based on the factor loadings, we obtained four factors and named them "Classical knowledge of Arts and Sciences," "Modern practical knowledge," "Modern knowledge of Arts and Sciences," and "Basic knowledge and skills." The table shows only the first three factors, excluding "Basic knowledge and skills" (a schematic sketch of this kind of factor extraction follows these notes).

  7. Based on the analysis of student categories, we classified students into positive and negative types. Positive-type students adapt smoothly to the university culture, whereas negative-type students have difficulty adapting to the university.

  8. The numbers show deviation scores for admission: less than 39 accounts for 17 %, 44–49 for 41.6 %, 49–52.5 for 29.1 %, and over 52.5 for 12.3 %.
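
As mentioned in note 6, the following is a schematic, hypothetical sketch of extracting and naming factors from survey items. The item names and responses are placeholders rather than the actual JFS2008 items or results; it only illustrates the general procedure of extracting four factors and inspecting which items load on each.

    # Illustrative factor-extraction sketch for the procedure described in note 6.
    # Item names and data are placeholders, not the JFS2008 survey items.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    items = ["history", "literature", "statistics", "computing",
             "foreign_language", "presentation", "writing", "math_basics"]

    # Placeholder responses: 500 students rating 8 knowledge/skill items (1-5).
    X = rng.integers(1, 6, size=(500, len(items))).astype(float)

    # Extract four factors with a varimax rotation; each factor would then be
    # named by inspecting the items that load most heavily on it.
    fa = FactorAnalysis(n_components=4, rotation="varimax")
    fa.fit(X)

    for k, loadings in enumerate(fa.components_):
        top = np.argsort(np.abs(loadings))[::-1][:3]
        print(f"Factor {k + 1}: " + ", ".join(items[i] for i in top))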

References

  • Anaya, G. (1999). College impact on student learning: Comparing the use of self-reported gains, standardized test scores, and college grades. Research in Higher Education, 40(5), 499–526.

  • Arimoto, A., & Ehara, T. (Eds.). (1996). International comparison of the academic profession. Tokyo: Tamagawa University Press (in Japanese).

  • Astin, A. W. (1993a). What matters in college? Liberal Education, 79(4), 291–305.

  • Astin, A. W. (1993b). What matters in college?: Four critical years revisited. San Francisco: Jossey-Bass.

  • Banta, T. W. (2007). Assessing student achievement in general education. San Francisco: Jossey-Bass.

  • Borden, V. M., & Young, J. M. (2007). Measurement validity and accountability for student learning. New Directions for Institutional Research Assessment Supplement, 2007, 19–37.

  • Duncan, C., et al. (1998). Context, composition and heterogeneity: Using multilevel models in health research. Social Science & Medicine, 46(1), 97–117.

  • Ehara, T. (1998). Research and teaching: The dilemma from an international comparative perspective. Daigaku Ronshu, 28, 133–154.

  • Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. New Directions for Institutional Research, 127, 73–89. (Survey Research Emerging Issues).

  • McCormick, A. C. (2009). Toward a nuanced view of institutional quality. NSSE promoting engagement for all students: The imperative to look within 2008 results, pp. 6–8.

  • Pascarella, E. T. (1985). College environmental influences on learning and cognitive development: A critical review and synthesis. In J. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 1, pp. 1–64). New York: Kluwer Academic Publishers.

  • Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students. San Francisco: Jossey-Bass.

  • Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). London: Sage Publications.

  • Shavelson, R. J. (2010). Measuring college learning responsibly: Accountability in a new era. Stanford, CA: Stanford University Press.


Author information

Correspondence to Reiko Yamada.

Copyright information

© 2014 Springer Science+Business Media Singapore

Cite this chapter

Yamada, R. (2014). Lessons from the US Experiences: Measuring Learning Outcomes and the Role of JCIRP for Assessing Student Learning. In: Yamada, R. (Ed.), Measuring Quality of Undergraduate Education in Japan. Springer, Singapore. https://doi.org/10.1007/978-981-4585-81-1_5
