American Journal of Community Psychology, Volume 45, Issue 3, pp 394–404

Investing in Success: Key Strategies for Building Quality in After-School Programs

Authors

  • Jessica Sheldon
    • Public/Private Ventures
  • Amy Arbreton
    • Public/Private Ventures
  • Leigh Hopkins
    • Public/Private Ventures
  • Jean Baldwin Grossman
    • Public/Private Ventures
Original Paper

DOI: 10.1007/s10464-010-9296-y

Cite this article as:
Sheldon, J., Arbreton, A., Hopkins, L. et al. Am J Community Psychol (2010) 45: 394. doi:10.1007/s10464-010-9296-y

Abstract

This paper examines the relation between the implementation quality of after-school literacy activities and student reading gains. The data are from an evaluation of a multi-site after-school program in California in which continuous program quality improvement strategies were implemented to improve the delivery of a new balanced literacy program. Strategies included: (1) targeted staff training throughout the year, (2) regular observations and coaching of staff, and (3) the use of data to measure progress. Programs struggled to successfully implement these strategies early in the initiative, but gradually improved the quality and consistency of their use. Program quality, as measured through observations, also increased. Results suggested that the size of student reading gains were positively correlated with the quality of literacy programming provided by each instructor.

Keywords

After school, Activity quality, Continuous program improvement, Literacy, Reading gains

Introduction

Recent years have witnessed an increase in both funding for after-school programming in the United States and scrutiny of its quality and results. Historically, after-school programs served primarily as a form of child care or as places where children could participate in new, enriching activities (Halpern 2005). Today, however, there are increasing pressures on after-school programs to expand their scope and to measurably impact participants’ academic achievement. These changes are partially a response to the accountability movement articulated in governmental policies like the No Child Left Behind (NCLB) Act of 2001, which monetarily penalizes schools if their students do not make adequate yearly progress for 3 years in a row.

With these shifts, a field that was previously heralded for giving children additional enrichment and developmental opportunities is now being held increasingly accountable for providing academic support. Thus, many after-school programs are adding or strengthening their academic components. Some are including more academic enrichment activities, such as Fun Science; some are adding tutoring or introducing an academic curriculum for the first time. Federally funded after-school programs such as the 21st Century Community Learning Centers (CCLCs) and others supported by the Supplemental Educational Services provision of NCLB require that academic improvement be a key goal of programming. Statewide education initiatives in populous states such as California and Massachusetts also feature after-school programs as a critical component of their strategies to increase student achievement.

Many questions remain about the role after-school programs can play in improving children’s academic achievement. Early studies found that many after-school programs took an informal, sporadic approach to academic programming or limited their academic component to help with homework, an activity that is often poorly implemented and that research has not linked to measurable academic gains (Dynarski et al. 2003; Spielberger and Halpern 2002). An extensive study of Massachusetts after-school programs found that most offered positive staff-youth relationships and safe environments, but that staff tended not to facilitate learning or engage with youth in homework or academics (Miller 2005).

Even more limited is research that demonstrates improvement in reading skills in an after-school setting. A 2-year evaluation of 21st CCLCs found no impact on reading proficiency for elementary or middle school students in either year (James-Burdumy et al. 2005). An evaluation of The After School Corporation’s after-school programs found improvements in math after 2 years of participation but not in reading (Reisner et al. 2004). A literature review of 20 studies of literacy-oriented after-school programs found that only seven revealed statistically significant academic improvements among participants (Britsch et al. 2005). Two of these programs offered an hour or more of tutoring twice a week, two had a literature-based curriculum, and two combined literature-based activities with direct instruction. One (San Diego “6 to 6”) had a literacy curriculum but was not described in enough detail to be categorized. Of the 13 programs that did not have significant impacts, nine had academic enrichment components and four concentrated on homework help or tutoring.

While this is quite sobering, there are hints in the literature that if after-school programs can deliver their content well, they are more likely to have positive impacts. For example, the seven of 20 programs with positive significant impacts were all mature, well-implemented programs (such as LA’s BEST and 5th Dimension) or were modeled on a research-based curriculum delivered by teachers and staff with education degrees. In a correlational study, Miller (2005) found that when observers rated after-school programs as well-paced and organized, they also rated youth engagement as higher. Youth engagement was positively correlated with youth outcomes (namely, relationships with adults and peers, homework, improved behavior, and improved initiative). In fact, program quality explained 27–47% of the variance in youth outcomes (depending on the outcome). Similarly, correlational analysis by Grossman et al. (2007) found that staff and program quality characteristics were related to youth’s engagement and perceived levels of learning.

In particular, when programs provide youth with supportive adult and peer relationships, they are more engaged and in turn achieve better outcomes. Mahoney et al. (2007) demonstrated that after-school program engagement is linked to youth’s intrinsic enjoyment of learning and self-efficacy and that engagement is related to the provision of good structure in the programs. Mahoney et al. (2005) also found that the more engaged youth experienced better outcomes than their less engaged peers in an academically-oriented after-school program.

Vandell et al. (2007) examined the effects of high-quality after-school programs on academic and social outcomes. Programs were included in the sample only if they had many of the features Eccles and Gootman (2002) identified as key characteristics of strong developmental settings, such as providing supportive adult and peer relationships; offering a rich, varied, and structured set of learning opportunities; and maintaining order without imposing controls that limited student learning opportunities. Participants who attended these types of programs 2–3 days a week did better than less frequent participants from the same schools on math test scores, work habits, and social skills. They also participated in fewer risky activities (e.g., fighting, drinking). These results differ dramatically from those of the less quality-oriented programs evaluated in the 21st Century Community Learning Centers evaluation (Dynarski et al. 2003; James-Burdumy et al. 2005). These differences may be due in part to the self-selection bias inherent in the Vandell et al. (2007) research design, with better students attending after-school programs more frequently; however, they may also be due to the higher quality of the programs’ activities.

If Dynarski et al. (2003) and Spielberger and Halpern’s (2002) findings that most academic after-school programs are poorly implemented are true, yet better implemented programs do have positive impacts, the issue is how to create a well-delivered, high-quality, engaging program. For many people, working part-time in an after-school program is just a stepping stone to another job. How, then, does one improve program quality in an environment of high staff turnover? Research shows that the traditional approach of one-time, beginning-of-the-year staff training sessions rarely makes a lasting impact on staff members’ skill development or on program quality (Blau 1997; Clarke-Stewart et al. 2002; DuBois et al. 2002). Interestingly, the Massachusetts After-School Research Study found that the number of training hours staff received was not correlated with most measures of program quality, suggesting that quantity of training alone may not impact quality (Miller 2005). A newer training approach is coaching, in which group sessions are replaced by or supplemented with one-on-one modeling and support. Although a promising training technique, coaching is just starting to receive attention from researchers (Neufeld and Roper 2003; Russo 2004). A recent literature review cited no evidence supporting a relation between coaching and academic outcomes (Poglinco et al. 2003). Since that review, Boston’s After-School Literacy Coaching Initiative provided 1- or 2-year-long structured coaching programs for after-school staff to help them provide literacy instruction. Miller et al. (2006) found that most staff felt more comfortable and skilled leading literacy activities such as read alouds and felt their students’ literacy skills improved over the course of the year. However, surveys of youth at participating sites showed little change in the frequency or enjoyment of their independent reading.

The current paper contributes to the growing discussion of whether the quality of academic programming in after-school settings can be improved and what effect this might have on youth outcomes by examining CORAL (Communities Organizing Resources to Advance Learning), a multi-site, academically-oriented after-school initiative. CORAL was launched in 1999 in five California cities (Fresno, Long Beach, Pasadena, Sacramento, and San Jose) with the goal of improving the general academic achievement of its participants. Each CORAL city established after-school programs at multiple school-based and community-based sites, serving primarily elementary-school-age youth. Programs were open 4 or 5 days per week, 3 or 4 hours per day. In 2003, an interim report on CORAL’s outcomes suggested that the cities were struggling with issues related to quality of implementation. With a few exceptions, academic programming, which consisted of “homework help” and some enrichment activities, was of relatively low quality (Hebbeler et al. 2003). Disappointed in these preliminary results, in the fall of 2004 all CORAL sites made improving reading skills their primary focus. A 2-year evaluation was also initiated at this point to document the implementation and outcomes of the new approach (Arbreton et al. 2008).

The sites implemented a semi-structured “balanced literacy” curriculum, comprising about 30% of total program time. As part of this curriculum, youth (a) were exposed to “read alouds” (oral reading sessions that allowed students to hear examples of fluent reading), (b) talked about books, (c) practiced writing, (d) learned phonetics and other reading strategies, (e) were introduced to new vocabulary, and (f) spent time reading books (of their choice) at a level at which they could read fluently and with high comprehension. The remainder of the CORAL after-school programming consisted of a variety of enrichment, academic, and physical activities. Unlike the literacy activities, which were based on a similar model in all CORAL sites, the enrichment offerings varied widely. They ranged from formal curricula in topics such as science or drumming to recess and board games. Given the current interest in the potential of after-school programs to improve academic performance, the present study examined how the quality improvement effort affected literacy programming and outcomes.

With literacy as the primary focus, site coordinators were provided with a start-of-the-year training on the new model. Senior staff or consultants provided 1- or 2-day training sessions in literacy instruction in each of the CORAL cities. However, after the initial rush of program start-up, senior staff realized that their training efforts were not leading to the type of programming they had hoped for. The activity leaders, inexperienced at leading literacy activities, were not implementing the strategies as planned. Over the course of the 2004–2005 school year, senior staff attempted to address areas of weak programming by implementing a range of training strategies, some of which appeared to have greater impacts on program quality than others. Through this process of trial and error, by the start of the second year three successful training strategies were identified and incorporated into an ongoing, extensive system referred to here as a “continuous program quality improvement (CQI) cycle.” The cycle consisted of three steps: (1) targeted staff trainings throughout the year, (2) regular onsite observations and coaching of staff members, and (3) collection and analysis of data in order to measure progress and inform future training efforts (see Fig. 1).
Fig. 1

A cycle of continuous program improvement. A successful system of continuous program improvement begins by setting program goals and hiring a senior staff person charged with improving program quality. This lays the foundation for three repeating steps that complement each other. (a) Senior staff may be program directors or literacy directors. (b) At least monthly

The overarching goal of this paper is to advance the after-school field’s thinking about what it takes to improve the delivery quality of academically-oriented after-school activities. In the next sections, we present the characteristics of the sites, youth, and staff who were part of the study, as well as the CQI cycle that was implemented in CORAL. We describe the measures and analyses we used to examine program quality and how implementation quality related to youth outcomes. We hypothesized that participants would gain more in the second year of programming than in the first, when we expected implementation quality to be lower. We also expected that children in classes with higher quality ratings would improve their reading skills more than children who participated in lower quality activities.

Method

Participating Sites

Initially, four to five program sites in each of the five CORAL cities were selected for intensive study. We did not include new sites in the study because their general implementation immaturity might confound our results. Thus, selected sites had been in operation for at least 1 year before the study began. A total of 23 sites were chosen in Year One (school year 2004–2005). Three sites were dropped from data collection in Year Two (school year 2005–2006) because CORAL programming was not being offered at those sites any longer; one site was added because children from a site that was no longer offering CORAL were moving to that site along with the site’s coordinator.

Participants

In Year One, a randomly selected sample of 520 third- and fourth-grade youth who enrolled in the selected sites during Fall 2004 (out of 835) were included in the study. Of those 520 children, 381 (73%) were included in the final Year One sample because they had baseline and follow-up reading assessments. There were no significant differences between those assessed at both time points and those assessed only at baseline in terms of demographics or initial reading levels.

We selected third- and fourth-grade children for several reasons. First, if they are not adequate readers by this point, they are at great risk of falling seriously behind (Gee 1999). Second, in most schools there is a fundamental change in classroom practice around reading in fourth grade, when teachers no longer spend as much time on the techniques of learning to read but expect children to use their reading skills to explore and understand diverse texts (Cummins 2003). Finally, enrollment records indicated that a large proportion of the children served by CORAL were in these grades.

In Year Two, the sample participants comprised fourth- and fifth-graders, including those from the initial cohort who continued to participate in CORAL, along with additional fourth- and fifth-graders at the CORAL sites who were not surveyed in Year One. Additional program participants were included in Year Two of the study to increase the sample size and counter attrition rates from the first year of the study. Of the 411 children who began the study in Year Two, 368 (90%) also completed a follow-up assessment and were included in the final Year Two sample. The 10% of youth without follow-up assessments differed from those with both data points only on the California Standards Test English Language Arts scores (a third of a standard deviation higher than the 90% with both assessments); there were no significant demographic differences between the groups. Participants in Year One and Year Two of the study were comparable in terms of demographics and reading levels (see Table 1).
Table 1

Characteristics of Year One and Year Two study participants (percent of sample)

Characteristic                               Year One    Year Two
Sex
  Female                                     52.76       52.86
  Male                                       47.24       47.14
Race
  Latino/a                                   53.09       61.10
  African American                           14.61       14.25
  Asian                                       6.46        8.22
  White                                       6.74        4.11
  Multiracial                                10.96        7.12
  Other                                       8.15        5.21
English language status
  English language learners                  50.91       58.12
Free lunch status
  Receive free- or reduced-price lunch       86.57       85.86
Baseline independent reading level
  At grade level                             29.66       28.53
  One grade below grade level                21.26       17.93
  Two or more grades below grade level       49.08       53.53
California Standards Test, Spring 2004
  Proficient in English Language Arts        20.06       25.77

Participating CORAL Staff

The site-level staff who organized day-to-day operations consisted of one supervising site coordinator per site and an average of one activity leader for each grade of youth served. The activity leaders worked closely with youth, supervising a group (from 12 to 20 children) throughout the afternoon, teaching literacy lessons, and leading enrichment activities (sometimes with outside instructors). These staff members typically worked with the same group each afternoon for an entire year. The activity leaders were young and ethnically diverse: 82% were 25 years old or younger, the majority (53%) was Latino, 17% were Asian, and 15% were African American. Most (84%) reported prior experience working with children, but fewer than half (43%) had previously provided literacy instruction. These demographic characteristics are not unlike those of many after-school staffs (Dennehy and Noam 2005; National After School Association 2006).

The Continuous Program Quality Improvement Cycle

To support these activity leaders’ literacy efforts, the creation of a new “literacy director” position in each city proved to be crucial to the sites’ success at implementing quality literacy activities. While other senior staff were tasked with running and sustaining the after-school programs overall, the primary responsibility of the literacy directors was to improve the quality of the literacy programming at each of their sites. The number of literacy directors hired in each city depended on the number of sites and children served. On average, one literacy director was hired for every 600 participants.

The literacy directors were expected to implement a variety of activities in order to improve the quality of programming, though during Year One such implementation was often inconsistent. By Year Two, sites had identified three primary strategies for improving quality that complemented each other, and many literacy directors attempted to implement the three-component “continuous program quality improvement cycle” (see Fig. 1). Sites varied in how completely, how consistently, and how well they carried out these components, but the three main components were:

Step 1: Targeted Training Sessions Throughout the Year

One of the first lessons learned by CORAL senior staff was the necessity of aligning their training programs with their primary goal, that of improving participants’ literacy skills. Earlier training sessions covered a wide variety of topics, like CPR and family visits, that did not directly relate to that goal. Recognizing that program quality was not improving as desired, literacy directors soon revised the training schedule to focus initially on “read alouds” and independent reading, two elements that have been identified as key to reading gains (Ryan et al. 2002). Rather than providing a single training session on an array of topics, then, the new training schedule provided multiple in-depth training sessions focused only on these two crucial skills, even when doing so meant postponing training in other areas.

Program directors in the CORAL cities also learned that effective training sessions required ongoing follow-up. When CORAL staff members received only beginning-of-the-year training sessions, even when the training was an extensive, week-long program, they faced implementation challenges. Many staff members reported that they had covered so many topics in their initial training sessions that they could only remember pieces of what they had been told. Others noted that when they started teaching, challenges arose for which they felt unprepared. Such responses support previous research findings that the effects of training sessions on program quality tend to fade over time (Blau 1997; Clarke-Stewart et al. 2002). By the middle of Year One, therefore, ongoing training schedules had been implemented in all CORAL cities. The sites trained staff members on basic skills and program goals first and moved on to more difficult or specialized skills in later sessions.

Step 2: On-Site Observations and Coaching

Studies of professional development have found that after-school staff members often appreciate training sessions but, for a variety of reasons, fail to apply what they have learned to their daily work (Buher-Kane et al. 2006). The goal of observation and coaching—strategies that generally occurred on-site and one-on-one—was to fill this gap between training and real program improvement. The CORAL programs each developed processes for activity observation, including customized observation tools to help identify the key behaviors and strategies activity leaders needed to implement if they were to properly deliver the curriculum. These standardized tools ensured activity leaders were all judged by the same criteria, and they allowed senior staff members to feel confident in their ability to compare observations across sites and over time.

In a best case scenario, observations were followed shortly by coaching, in which senior staff members worked directly with instructors to help them improve and implement new techniques. Unlike group training, coaching sessions provided staff with individual and targeted feedback. Coaching generally began with the senior staff member providing feedback after an observation, sharing what had been documented and acknowledging the strengths and weaknesses that were identified. The senior staff member then discussed recommendations and strategies for improvement, often modeling these new strategies or providing the activity leader with useful resources such as books or Web sites. Together, the senior staff member and activity leader created an “action plan” that identified immediate, short-term, and long-term goals for improvement.

Step 3: Measuring Progress Through Data Collection and Analysis

The third step in a continuous improvement cycle was to collect and use data aligned with program goals to provide supervising staff with information about which areas needed more support. Activity observation was the primary method by which supervisors collected data. Since observations were already conducted to monitor activity leaders, only minimal additional effort was required to record observation notes in a standardized form and to store those notes in a central location. This practice allowed senior staff members to review and compare findings across instructors and across sites and to use those trends to inform group training sessions. Activity-specific attendance was also collected with minimal additional effort, since daily attendance was already being recorded. This activity-specific attendance tracking allowed programs to monitor not only overall attendance figures but also the frequency with which youth were exposed to specific program content.
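
To make the record-keeping concrete, the following is a minimal Python sketch of how standardized observation records might be pooled centrally and compared across sites. The field names, rating values, and averaging step are illustrative assumptions, not CORAL’s actual data system.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class ObservationRecord:
    # Hypothetical standardized fields; CORAL's actual observation form is not reproduced here.
    site: str
    activity_leader: str
    date: str
    strategy: str   # e.g., "read aloud", "independent reading"
    rating: int     # 1 (extremely poor) to 5 (outstanding)
    notes: str = ""


def average_rating_by_site(records):
    """Pool centrally stored observation records and compare average ratings across sites."""
    by_site = defaultdict(list)
    for record in records:
        by_site[record.site].append(record.rating)
    return {site: round(mean(ratings), 2) for site, ratings in by_site.items()}


records = [
    ObservationRecord("Site A", "Leader 1", "2005-10-03", "read aloud", 3),
    ObservationRecord("Site A", "Leader 2", "2005-10-10", "independent reading", 2),
    ObservationRecord("Site B", "Leader 3", "2005-10-05", "read aloud", 4),
]
print(average_rating_by_site(records))  # {'Site A': 2.5, 'Site B': 4}
```

The same kind of grouping could be applied to activity-specific attendance records to track how often each child was exposed to a given program component.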

Measures

Activity Observations

Observations were conducted of a sample of the literacy activities provided to classes of third and fourth graders in Year One and fourth and fifth graders in Year Two. In Year One, 56 classes of children were observed at three points across the year while the activity leader provided them with literacy programming; in Year Two, due to changes in the number of CORAL sites and groups per site, 43 classes of children were observed.

The observation tool that was developed for this project involved recording a running narrative of the instruction provided to participants. Researchers recorded whether each of the six balanced literacy strategies (read alouds, book discussions, writing, vocabulary, skill development, and independent reading) occurred during the observations. For those that did occur, researchers assigned each a score ranging from “1,” indicating extremely poor implementation of a key element, to “5,” indicating outstanding implementation of that strategy (see Appendix for examples of the quality indicators used in each scale). Researchers submitted their activity ratings and the narrative to a lead researcher, who reviewed all ratings and flagged any scores that seemed inconsistent with the narrative to be reviewed with the observer. Interrater reliability was not calculated; however, all observers participated in a full-day group training session, followed by multiple days of working on-site, one-on-one with a lead researcher.

Using data from these observations, each class—and, accordingly, each youth within the class—was assigned a composite literacy quality score based on how strongly the instructors had implemented all components of the balanced literacy approach over the course of the year. The composite literacy quality scores ranged from “1” (poor implementation of the model) to “5” (strong implementation) (see Appendix for more detail on the scoring). Unlike the activity ratings, which measured the quality of a particular activity within a lesson, the composite literacy quality score referred to the overall quality of literacy instruction provided to children within a particular class across the year.
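
As a rough sketch of how a class-level composite might be derived from repeated activity ratings, consider the Python fragment below. The aggregation rule shown (averaging all of a class’s strategy ratings across the year and rounding to the nearest whole score) is an assumption made for illustration; it is not the scoring procedure documented in the Appendix.

```python
from statistics import mean


def composite_literacy_quality(class_observations):
    """Illustrative composite score (1-5) for one class from repeated activity ratings.

    class_observations: one dict per observation, mapping balanced-literacy
    strategy -> rating (1-5). Averaging and rounding is an assumed rule for
    illustration only, not CORAL's documented scoring procedure.
    """
    ratings = [r for observation in class_observations for r in observation.values()]
    return round(mean(ratings)) if ratings else None


# Three observations of one class over the year
observations = [
    {"read_aloud": 3, "book_discussion": 2, "independent_reading": 3},
    {"read_aloud": 4, "writing": 3, "independent_reading": 3},
    {"read_aloud": 3, "vocabulary": 2, "independent_reading": 4},
]
print(composite_literacy_quality(observations))  # 3
```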

Reading Assessment

The Jerry L. Johns Informal Reading Inventory (IRI) was the primary measure of participant reading levels. The assessment asked children to read aloud from grade-appropriate word lists, to read aloud from a series of short passages, and to answer between five and ten comprehension questions based on each passage. Results were used to assign an “independent reading level” to each child, an assessment of what the child could read fluently and accurately, with 99% comprehension, and without any assistance. For an additional element of consistency, all IRIs were reviewed by a second team of researchers, who double-checked the reading-level assignments based on the researchers’ hand-recorded notes of the children’s responses. No interrater reliabilities are available; however, the IRI scores for the children in the study correlated well with their scores on one of the state tests given each spring, the California Standards Test-English Language Arts section (CST-ELA). Correlations between the CST-ELA and IRI were 0.65 in the spring of 2005 and 0.59 in the spring of 2006.

CORAL Staff Survey

In Spring 2005, all CORAL staff members were sent a two-page survey with brief questions about their educational background, experience, and training. The response rate was 73%.

Analysis

Two-level hierarchical linear modeling (HLM) was used to assess whether youth who experienced classes of different implementation quality, namely classes with different composite literacy quality scores, gained differentially. At the first level, end-of-year IRI scores were modeled as a function of student-level factors: demographic characteristics (such as English learner status, gender, race, ethnicity, and grade), the beginning-of-year IRI score, and the number of days between IRI administrations were controlled. At the second level, the effect of the program was estimated separately by composite literacy quality score.
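
In conventional two-level HLM notation, the model just described can be written roughly as follows; the symbols are our own shorthand rather than the notation used in the original analysis.

```latex
% Level 1 (student i in class j): end-of-year IRI score as a function of student-level factors
\mathrm{IRI}^{\mathrm{post}}_{ij} = \beta_{0j}
  + \beta_{1}\,\mathrm{IRI}^{\mathrm{pre}}_{ij}
  + \beta_{2}\,\mathrm{Days}_{ij}
  + \boldsymbol{\beta}_{3}^{\top}\mathbf{X}_{ij} + r_{ij}

% Level 2 (class j): the class intercept is modeled as a function of composite literacy quality
\beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{Quality}_{j} + u_{0j}
```

Here X_ij collects the demographic controls (English learner status, gender, race, ethnicity, and grade), and the coefficient on Quality_j captures the class-level association between composite literacy quality and end-of-year reading scores, holding the student-level factors constant.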

Results

Activity Quality

Activity observations, summarized in Table 2, revealed that the literacy activities provided were generally of moderate quality at the beginning of the evaluation. During Year One, especially early on, staff members struggled to consistently implement the literacy activities. Twenty-four percent of the group activities observed in the first year included only one or two of the key literacy components, and staff members’ approaches to these activities varied significantly, sometimes including hallmarks of quality but other times missing essential components. However, the quality improved significantly over the 2 years of the study as the staff participated in the CQI cycle. By Year Two researchers observed increased consistency across classrooms and improved quality in all literacy elements. The activity leaders regularly implemented all components of balanced literacy and did so with higher quality than before. Comparisons of Year One averages for each dimension of quality to Year Two averages revealed improvements along each type of balanced literacy strategy. In Year Two, less than 1% of activity leaders implemented two or fewer literacy components during an observation.
Table 2

Observational ratings of literacy activity quality

No. of groups observed: Year 1 = 56; Year 2 = 43

Literacy strategies                 Year 1 frequency of         Year 1 average    Year 2 frequency of    Year 2 average
                                    implementation (a) (%)      rating            implementation (%)     rating (b)
  Read alouds                       77                          2.84              99                     3.59***
  Book discussions                  56                          2.11              94                     2.88***
  Vocabulary (c)                    –                           2.19              –                      2.62**
  Writing                           60                          2.23              97                     3.15***
  Skill-development activities      25                          1.52              49                     1.76
  Independent reading               88                          2.81              99                     3.35***

Composite literacy quality          Year 1 number of groups     Year 1 percent    Year 2 number of       Year 2 percent
                                                                of groups (%)     groups                 of groups (%)
  Score 1                           33                          59                5                      12
  Score 2                           3                           5                 0                      0
  Score 3                           20                          36                31                     72
  Score 4                           0                           0                 7                      16
  Score 5                           0                           0                 0                      0

Note: There were fewer groups to observe in Year Two because fewer of these “year older” youth participated and the sites organized participants into fewer groups for activities

** p < .01, *** p < .001

(a) The percent of observations during which each literacy strategy was observed

(b) Significance levels noted in the Year Two columns indicate where the change from one year to the next was statistically significant

(c) Because vocabulary development strategies could be observed in tandem with any other literacy strategy, vocabulary was not marked as absent or present at any given time but was only rated on the scale of 1–5; thus, only an average rating for this strategy is available

During the first year of implementation, the majority of the instructional groups (59%) were assigned a composite literacy quality score of 1, representing the category of lowest quality activities. By Year Two, however, only 12% of groups were classified with a composite score of 1, and 16% reached a composite score of 4, a level not reached by any groups during Year One. Program directors interpreted this marked improvement in quality as a demonstration of the effectiveness of the staff training and coaching sessions implemented over the course of the study.

Reading Gains

Paired t tests at the end of Year One indicated a significant increase in IRI scores from a baseline average of 2.2 to a first follow-up average of 2.5, t(380) = 4.27, p < .001. This represents 0.3 of a grade level of improvement in reading, an appreciable amount for children who were already well behind in reading. Children exposed to higher quality literacy activities displayed greater literacy gains over the course of Year One than those exposed to low quality literacy activities. A comparison of means indicates that children who participated in higher-quality activities (with a composite score of 3, the highest level obtained in Year One) experienced an average reading-level gain of 0.45, whereas children who participated in lower-quality activities (with a composite score of 1) had an average gain of only 0.26. Results of the two-level hierarchical linear modeling (HLM) analysis described above confirm the hypothesis that gains were positively correlated with literacy quality: controlling at the first level for student-level factors (demographic characteristics such as English learner status, gender, race, ethnicity, and grade, the beginning-of-year IRI score, and days between IRI administrations), the effect of the composite literacy quality score estimated at the second level was significant and positive, β = 0.11, p < .05, suggesting a positive relation between literacy quality and reading gains.
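
For readers less familiar with the statistics, a paired t test of this kind can be computed as in the following Python sketch; the reading levels shown are invented for illustration and are not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical baseline and follow-up independent reading levels for eight children;
# these values are invented for illustration and are not the CORAL study data.
baseline  = np.array([2.0, 1.5, 2.5, 3.0, 2.0, 1.0, 2.5, 2.0])
follow_up = np.array([2.5, 2.0, 2.5, 3.5, 2.5, 1.5, 3.0, 2.0])

# Paired t test on the within-child change in reading level
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
mean_gain = (follow_up - baseline).mean()
print(f"mean gain = {mean_gain:.2f} reading levels, t = {t_stat:.2f}, p = {p_value:.4f}")
```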

In Year Two, the quality of literacy activities improved, and so did the reading gains. Paired t tests revealed an overall average reading gain of 0.44 reading levels, t (367) = 6.33, p < .001 between Fall 2005 and Spring 2006, a 39% increase over the average gain observed during the first year of the study. HLM analyses could not be used in Year Two because there was not enough variation in composite literacy quality scores across classes. Almost all groups (88%) achieved a composite quality score of 3 or above. Notably, however, the overall 0.44 average reading gain observed over Year Two was comparable to the Year One average gain of 0.45 observed only among children exposed to higher quality literacy activities. While not conclusive, these findings lend credence to the theory that exposure to more consistent and higher quality programming is related to greater reading gains.

Discussion and Conclusion

At the outset, this article summarized the limited effectiveness of after-school programs at increasing academic achievement and literacy in particular. Although there was reason to believe that the quality of the programming assessed may influence outcomes, research in this area is still inconclusive and there is even less knowledge of how to consistently improve program quality. This study addressed several of these issues, and points to important directions for future research.

First, exposure to consistent, well-delivered literacy strategies is associated with greater reading gains. The results from the first year of the evaluation indicated greater gains over 5 months on the individualized reading assessment (0.45 grade levels in reading) for children exposed to consistent and higher quality implementation of the literacy strategies, while children who were exposed to inconsistent or low quality implementation of the literacy strategies gained only 0.22 grade levels in reading. In the second year of the evaluation, when quality had improved across all of the groups, the average reading gain for all children in the sample was 0.44 grade levels, comparable to the Year One average gain of 0.45 for children exposed to higher-quality classrooms.

Second, application of the continuous quality improvement strategies produced improvements in the quality of literacy activities in a fairly short amount of time. Program improvement was not achieved through one-time, beginning-of-the-year training sessions. The CORAL sites’ experiences showed that this traditional approach to staff development was ineffective at creating lasting change. This experience mirrors research findings reviewed in the introduction that common approaches to staff development, such as one-time demonstrations of new curricula and informal monitoring of staff progress, rarely make lasting impacts on staff members’ skill development or program quality (Blau 1997; Clarke-Stewart et al. 2002; DuBois et al. 2002). Improvements in programming quality were achieved when more continuous and ongoing efforts at quality improvement strategies were employed.

Information gathered during interviews with program staff and partner agencies helped us identify several specific strategies that explain how the CORAL cities were able to improve the quality of their literacy programming over time. What seemed most important was hiring a full-time literacy director who could implement a coordinated process for providing professional development to activity leaders and site coordinators. Only two of the five cities began Year One with a literacy director in place, but by the end of Year One all cities had one. In addition, in Year One most sites had only one or two of the quality improvement components in place. Though the fundamentals of the CQI cycle were eventually implemented in all CORAL cities in Year Two, the steps were implemented with varying levels of consistency and depth. It is worth noting that the two cities that implemented these steps most thoroughly were the same ones that received the highest ratings for activity quality at the end of Year Two.

Implementing this CQI cycle was not without additional costs. CORAL programs spent an average of $42,300 per literacy director, each of whom supervised an average of 3.5 sites (about 600 youth). CORAL staff had to be paid for their training hours, which added $830 to the annual cost of each activity leader and $1,484 to the annual cost of a site coordinator. Weekly activity observations cost the cities about $1,437 annually. Some additional one-time costs were incurred to develop a data system or to hire a consultant to deliver training material. Overall, the investments necessary to increase quality represented an average of 13% of programs’ budgets; the results of this study, however, showed that these investments had a payoff in terms of children’s reading skills.
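
As a back-of-the-envelope illustration using only the per-director figures reported above (and deliberately excluding training, observation, and one-time costs, which are not reported on a per-youth basis), the literacy director position alone works out to roughly $70 per youth per year:

```python
# Illustrative per-youth cost of the literacy director position, using the
# averages reported in the text; training, observation, and one-time costs are
# excluded because they are not reported on a per-youth basis.
director_cost = 42_300       # average annual cost per literacy director
youth_per_director = 600     # average number of youth served per director

print(f"~${director_cost / youth_per_director:.2f} per youth per year")  # ~$70.50
```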

One limitation of the present study is that the evaluation design did not include a comparison or control group; therefore, we cannot conclude that the gains made by the CORAL youth are attributable to programming. However, the finding that the quality of CORAL literacy programming related to reading-level gains is consistent with the hypothesis that the program positively affects outcomes. It is possible that these findings could be biased by the attrition rates of youth over the course of each school year. However, because the quality of the literacy programming was not a predictor of attrition, and the evaluation followed youth exposed to both low and higher quality literacy programs, the results may still be suggestive of the relationship between better program quality and reading gains.

Further research is needed to refine the findings from this evaluation, specifically in terms of the details of best practices for program improvement. The five cities in the CORAL initiative, for example, do not constitute a large enough sample to judge precisely the best senior staffing structure for monitoring quality. Future research is also needed to evaluate and provide greater specification about the relation between program quality and participant outcomes. For example, when program quality reaches even higher levels, such as those equivalent to a composite literacy quality score of five, do participant outcomes increase substantially? On the other hand, is there a certain minimum threshold of quality which programs must meet to have an effect on academic performance?

The evaluation of the program detailed here adds to the growing consensus that program quality is a predictor of student outcomes, and it reveals a few key elements that make program improvement possible: a literacy director, ongoing time and opportunity for staff training and coaching, and dedication to and investment in data collection. These findings present important lessons for after-school providers and funders at a time when programs are increasingly expected to generate improved academic outcomes as a primary goal.

Acknowledgments

The authors would like to thank their colleagues who collaborated on the evaluation: Molly Bradshaw, Julie Goldsmith, Nora Gutierrez, and Gary Walker. The study also would not have been possible without the contributions of CORAL staff members, students, and parents. A grant from the James Irvine Foundation supported the writing of this piece as well as the longitudinal evaluation on which it is based. That evaluation led to a series of publications that address a variety of topics including the process of adding academics to an after-school program, academic outcomes for study participants, the needs of English language learners in after-school programs, and tools for after-school practitioners. These reports can be downloaded at the Public/Private Ventures’ Web site, http://www.ppv.org.

Copyright information

© Society for Community Research and Action 2010