Cash incentives for high school students to perform better in school are growing in popularity, but we understand very little about them. Does paying students for better Advanced Placement (AP) test scores encourage enrollment in AP classes? Does it lead to more students taking the tests and achieving passing scores? Do cash incentives lead to more students going to college?
I set out to determine the impact of a cash incentive program operating in a number of Texas high schools. The Advanced Placement Incentive Program (APIP) is a novel initiative that includes cash incentives for both teachers and students for each passing score earned on an Advanced Placement exam. The program is targeted to schools serving predominantly minority and low-income students with the aim of improving college readiness. The APIP was first implemented in 10 Dallas schools in 1996 and has been expanded to include more than 40 schools in Texas. The National Math and Science Initiative awarded grants to Arkansas, Alabama, Connecticut, Kentucky, Massachusetts, Virginia, and Washington to replicate the APIP and plans to expand these programs to 150 districts across 20 states.
Using data from the Texas Education Agency, I evaluated how the APIP affected education outcomes in participating schools in the years following implementation. I studied whether the program increased AP course enrollment and the share of students sitting for AP (or International Baccalaureate [IB]) examinations. Since improved AP outcomes may not reflect increased learning and could come at the expense of other academic outcomes, I also looked beyond these immediate effects to broader measures, such as high school graduation rates, SAT and ACT performance, and the percentage of students attending college.
According to my assessment, the incentives produce meaningful increases in participation in the AP program and improvements in other critical education outcomes. Adoption of the APIP results in a 30 percent increase in the number of students scoring above 1100 on the SAT or above 24 on the ACT, and an 8 percent increase in the number of students at a high school who enroll in a college or university in Texas. My evidence suggests that these outcomes are likely the result of stronger encouragement from teachers and guidance counselors to enroll in AP courses, better information provided to students, and changes in teacher and peer norms. The program is not associated with improved high school graduation rates or increases in the number of students taking college entrance exams, suggesting that the APIP improves the outcomes of high-achieving students rather than those of students who might not have graduated from high school or even applied to college. Nonetheless, the APIP may be an exceptionally good investment. The average per-student cost of the program, between $100 and $300, is very small relative to reasonable estimates of the lifetime benefits of attending and succeeding in college.
The AP Incentive Program
The program is run by AP Strategies, a nonprofit organization based in Dallas, Texas. The heart of the program is a set of financial incentives for teachers and students based on AP examination performance. The APIP is entirely voluntary for schools, teachers, and students.
The Advanced Placement program has 35 courses and examinations across 20 subject areas. Students typically take AP courses in the 11th and 12th grades. The courses are intended to be “college level,” and most colleges allow successful AP exam takers to use passing scores to offset degree requirements. Although it is unclear whether AP courses are actually equivalent to courses at all colleges, the considerable attention that selective colleges pay to a student’s AP scores in the admissions process suggests that the exams are regarded as informative about a student’s preparation for, and likely achievement in, college. The exams are graded 1 through 5, with 5 being the highest and 3 generally regarded as a passing grade.
The APIP includes teacher training conducted by the College Board and a curriculum for earlier grades that prepares students for AP courses. At the top of each “vertical team” of teachers in APIP schools is a lead teacher who not only instructs students, but also spends time training other AP teachers. Vertical teams also include teachers in the grades that precede those in which AP courses are offered. For instance, a vertical team might create a math curriculum, starting as early as the 7th grade, designed to prepare students for AP calculus in 12th grade. This aspect of the APIP suggests that some of its benefits might not be felt until several years after it is first introduced at a school.
AP courses are taught during regular class time, and generally substitute for other courses in the same subject. In addition to the AP courses, there may be extra time dedicated to AP exam preparation. For example, in Dallas, the APIP offers special “prep sessions,” where up to 800 students gather at a single high school to take seminars from AP teachers as they prepare for their AP exams.
In a school that has adopted the APIP, students in 11th and 12th grade receive monetary incentives for performance on AP exams, which are intended to encourage participation and effort in AP courses. The amount paid per exam differs across districts. Students receive between $100 and $500 for each score of 3 or above in an eligible subject for which they took the course. This could amount to several hundred dollars for a student who takes and passes several AP examinations during the 11th and 12th grades. For example, one student earned $700 in his junior and senior years for passing scores on AP exams. Because students must attend the course and pass the exam to receive a reward, they cannot earn the cash simply by sitting for exams without enrolling in the corresponding AP courses. The cash rewards more than offset student costs. The College Board’s standard charge per examination is $82, and a fee reduction to $60 is granted to those students with demonstrated financial need. The APIP pays half of each student’s remaining fee, so students’ out-of-pocket expense is very small.
Lead teachers receive an annual salary bonus of between $3,000 and $10,000, plus an additional $2,000 to $5,000 bonus opportunity based on results. Pre-AP teachers earn a supplement of between $500 and $1,000 per year for extra work. AP teachers receive between $100 and $500 for each AP score of 3 or higher earned by an 11th or 12th grader enrolled in their course. Like the student rewards, the amount paid per passing AP score and the salary supplements vary across districts. Overall, though, the incentive program can deliver a considerable increase in compensation for teachers.
How, then, do schools adopt the APIP? And who pays? The total cost of the program ranges between $100,000 and $200,000 per school per year, depending on the size of the school and its students’ propensity for taking AP courses. Private donors defray between 60 and 75 percent of the total cost of the program, and the district covers the remainder. Districts usually pay for teacher training and corresponding travel and lodging, teacher release time, and some of the supplies and equipment costs. The donors fund the bonuses to students and teachers for passing AP scores, stipends to teachers for attending team meetings, and some of the supplies and equipment costs. Today, districts may be able to fund their contributions with state money and federal No Child Left Behind funds. When the APIP began in 1996, however, such funds were not available.
The donors choose the subjects in which rewards will be offered and determine the size of the financial rewards. While there are some differences across districts, English, math, and science are rewarded in most. Once a donor has been identified, AP Strategies matches the benefactor with an interested district. When several districts compete for the same donor, the donor’s preference determines the district, or the schools within a district, that will implement the APIP.
Forty-one Texas schools have adopted the incentive program to date, and 61 schools will have adopted the program by the 2008–09 academic year. Donor availability and preferences are the primary reasons some schools have adopted the APIP and others have not. According to the executive vice president of AP Strategies, Walter Dewar, “Many districts are interested in the program but there are no donors. So there is always a shortage of donors.”
Comparing Schools
Schools that have been selected for APIP look quite different from schools that have not yet been chosen and may never be chosen. Participating schools have much larger enrollments. They also have much larger black and Hispanic enrollment shares and lower white enrollment shares. APIP schools enroll relatively more Limited English Proficient (LEP) students and students who are classified as economically disadvantaged. About 90 percent of the schools selected for the program are located in large or medium-sized cities, compared to fewer than one-quarter of all schools in the state.
I evaluated the effect of the APIP by making two sets of comparisons. I compared the education outcomes of students enrolled in an APIP school before implementation to the outcomes of students enrolled in the same school after the program began. I then compared changes in education outcomes across cohorts within APIP schools to changes, over the same time period, at schools where the program was never adopted. This second comparison with non-APIP schools enables me to separate the effect of the APIP from that of any policy that took effect around the same time and could otherwise be confused with it, such as the Texas Advanced Placement incentive program or the top 10 percent rule (which guarantees every Texas student in the top 10 percent of her graduating high-school class a spot at the public university of her choice). In all the statistical analyses I performed, I took into account the influence of school characteristics, such as enrollment size and student demographics.
Comparing the outcomes of APIP schools to the outcomes of other schools could confuse the effects of the program with the difference between schools that want to participate in APIP and those that are not willing or able to. To avoid this, I compared changes in schools where APIP had been implemented to changes in schools that signed up for APIP and were waiting for a donor to fund their APIP implementation.
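In the language of program evaluation, this strategy is a difference-in-differences design with school and year effects. As a stylized illustration only (the notation below is mine, not the exact specification estimated in the underlying study), the comparison can be written as

$$ y_{st} = \beta \, APIP_{st} + \gamma' X_{st} + \alpha_s + \delta_t + \varepsilon_{st}, $$

where $y_{st}$ is an education outcome at school $s$ in year $t$; $APIP_{st}$ indicates whether the program is in place at that school in that year; $X_{st}$ contains school characteristics such as enrollment size and student demographics; the school effects $\alpha_s$ absorb fixed differences across schools; the year effects $\delta_t$ absorb statewide changes that affect all schools at the same time; and $\beta$ is the estimated effect of the program.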
I obtained data on school demographics, high school graduation rates, and college entrance examinations from the publicly available Academic Excellence Indicator System (AEIS) on the Texas Education Agency web site. Data are aggregated by school and span the years 1994 through 2004. College enrollment data come from the Texas Higher Education Coordinating Board web site and are available for the years 2002 through 2005. The final dataset combines these publicly available data with a listing of program schools by year provided by AP Strategies.
AP Classes and Examinations
At the 10 Dallas high schools that adopted the program in 1996, the number of students taking and the number passing AP exams in English, math, and science increased markedly the year following adoption and have continued to climb (see Figure 1). But is this due to APIP?
I found that APIP does appear to boost both AP course enrollment, albeit slowly, and the number of students taking AP exams. There is weak evidence of an increase in course enrollment in the first and second years following program adoption. In year three and onward, APIP appears to produce a large boost in AP course enrollment; the number of students in AP courses increases by 21 percent.
There is an immediate increase in the share of 11th- and 12th-grade students taking at least one AP exam or IB exam, another test taken for college credit (see Figure 2). In the first year of the APIP, the percentage of students taking AP or IB examinations increases by 2 percentage points, followed by a 4-percentage-point increase in years two and beyond, with a similar boost for boys and girls. With an initial average rate of 18 percent of students taking AP or IB exams, this is a relative increase of 11 percent in year one of the APIP and 23 percent by year two. The fact that course enrollment does not increase until year three but exam taking rises sooner suggests that much of the initial increase in AP exam taking came from students who, in the absence of the APIP, would have taken the course but not the exam.
Previous studies have shown that minority and low-income students tend to participate in AP courses and take AP exams at lower rates than middle-class white students at the same high schools. So did APIP improve AP participation by minority and low-income students? My results show that the campuswide increases in the percentage of students in 11th and 12th grades who take AP or IB exams are driven primarily by increased participation among black and Hispanic students. The results do not show any statistically significant effect of APIP on the proportion of white students who take at least one AP or IB exam. This does not mean that the total number of AP and IB exams taken by white students did not increase at APIP schools. It is entirely possible that white students who took one AP exam now take more AP courses and exams.
To learn more about the reasons behind the increased AP participation, I spoke with guidance counselors at three different APIP high schools in Dallas. Their comments indicate that there was a schoolwide campaign to increase participation in AP courses after APIP adoption. Two of the three high schools hired additional guidance counselors to improve the school’s ability to identify those students who should be encouraged to take AP courses. At all three high schools, the guidance counselors received explicit instructions to identify students who should be taking AP courses and to advertise the courses to them. Guidance counselors and AP teachers sell the AP program to students who are interested in going to college by touting the scholarship money that can follow good AP exam scores. Counselors and teachers also emphasize the tuition that can be saved by graduating from high school having already earned college credits. In addition, the counselors reported that certain barriers to taking AP courses have been removed; at one high school, students once needed a minimum class rank to enroll in AP courses, but after adoption of the APIP, any interested student was allowed to enroll.
All the guidance counselors with whom I spoke mentioned a shift in student and teacher attitudes toward AP courses. Following encouragement from counselors and teachers, students now view AP courses as challenging classes that anyone can take, rather than as classes reserved for the very brightest students. Of course, the financial incentives to students and teachers might be responsible for the increased teacher and student effort, but counselors downplayed these aspects of the program.
After the AP Test
In his 2004 State of the Union Address, President Bush proposed increasing annual funding for the AP program authorized in the No Child Left Behind Act from $24 million to $52 million, to support state and local efforts to increase access to AP classes and tests (and other challenging end-of-course examinations) for students in low-income schools. Several states have implemented programs with the same objective. A good example is the Western Consortium for Accelerated Learning Opportunities (WCALO), consisting of Arizona, Colorado, Hawaii, Idaho, Montana, New Mexico, Oregon, South Dakota, and Utah. The rationale behind this push to increase AP participation is the observation that students who take AP courses and examinations are much more likely to enroll in college and to be successful there, as measured by college GPA and graduation rates. Students who take more rigorous math and science courses in high school, such as AP courses, also have significantly higher SAT scores.
With these claims in mind, I decided to study how far the impact of the APIP extends beyond AP course enrollment and exam taking. I found there is, on average, a 22 percent increase in the share of students scoring above 1100 on the SAT or above 24 on the ACT. Broken down by year, there is a 19 percent increase in the number of students scoring at these levels in the first year of the program, a 22 percent increase in the second year, and a 33 percent increase from the third year on (see Figure 3). The first-year effect is somewhat large in light of the fact that some seniors might have taken the SAT or ACT after being exposed to the APIP for just half the school year. It may be that the improvements in SAT and ACT performance are not due solely to exposure to AP courses; a heightened desire to get into a good college may also have increased students’ effort on the entrance exams themselves. In general, though, the first- and second-year effects of the program are likely due to the monetary incentives for students and teachers, as well as accompanying improvements in AP instruction. Any improvement in pre-AP instruction produced by the vertical teaching teams would affect the outcomes of graduates only after three or four years.
Looking at the SAT and ACT performance of high school graduates by racial group, the percentage-point increases (about 5 points from the third year on) are similar for white, black, and Hispanic students, but the impacts relative to each group’s prior performance differ considerably. While there is about a 12 percent relative increase in white students scoring above 1100 on the SAT or above 24 on the ACT, there is a 50 percent relative increase for Hispanic students and an 80 percent relative increase for black students. Given that Hispanic and black students are typically underrepresented at the top of the graduating class, they have more room for improvement.
There is also a 7 percent increase in the number of students attending a college or university in Texas (see Figure 3), and this change remains roughly constant from year to year. Interestingly, however, APIP adoption does not improve a high school’s graduation rate or increase the percentage of students who take the SAT or ACT.
So what could explain why the APIP does not produce improved educational outcomes across the board? Why, for example, would there be an increase in the percentage of students matriculating in college but no increase in the percentage of students who sit for college entrance exams? The absence of an increase in SAT and ACT taking suggests that the APIP may not lead more students to decide to apply to college. Instead, the APIP might help students who are already interested in attending college to gain admission and encourage them to enroll. Transcripts burnished with AP courses and passing scores on AP exams could increase the likelihood of admission and improve financial aid offers. The APIP could also make college more affordable, as passing scores on AP exams create tuition savings. Because low-income students are sensitive to tuition costs, the potential tuition savings created by the ability to earn college credit, or even increased financial aid, could induce more of these students to enroll in college once accepted.
As I discovered these positive changes in educational outcomes, I began to wonder if AP courses were improving student performance in these other areas or if bright students were transferring to APIP schools to take advantage of AP courses. This concern grew as I noticed that the implementation of APIP is associated with an almost 6 percent increase in 12th-grade enrollment once the program has been in place one year.
The fact that students at APIP schools were no more likely to graduate from high school or take the SAT or ACT makes it improbable that an influx of high-performing students is responsible for the improvements observed for other outcomes. Still, I decided to conduct a more direct test to rule out the possibility that the improvements were due to migration of high-performing students into APIP schools. Focusing on high school graduates who were at the same school for all four years, I obtained counts of white and Hispanic graduates scoring above 1100 on the SAT or above 24 on the ACT. Comparing these counts before and after the adoption of the APIP, I found that, by the third year of the program, the number of white and Hispanic students scoring at these levels increased by 26 percent and 18 percent, respectively. I also obtained counts of white, black, and Hispanic graduates scoring above 900 on the SAT or above 19 on the ACT. By the third year of the program, the APIP increases the number of white and Hispanic graduates scoring at these levels by 26 percent and 38 percent, respectively, although there is no change for black students. In sum, although there may be some migration of high-performing students to APIP schools, schools that adopt the APIP see better scores on college entrance exams among students who have attended the school all along.
On a related note, I cannot rule out the possibility of an influx of quality teachers to APIP schools during the program’s first year. This would not diminish the success of the program, but would suggest that improvements in teacher inputs were a part of the story.
Conclusion
Through the APIP, the interests of schools, teachers, and students were aligned. Guidance counselors had reason to advertise the AP program and inform students of its benefits, teachers had an incentive to increase AP course enrollment, and students had greater motivation to take the courses and exams. The result was a change in the educational culture in a select group of Texas high schools, which in turn led to improved student outcomes.
While I show that the program is likely to have lasting effects on students because they are more apt to attend college, it would be useful to determine the long-term effects of the APIP by following affected students into college and the labor force. If the program increases a student’s likelihood of attending college, elevates the quality of the college attended, and reduces the time it takes to graduate, the costs of the program on a per-student basis would be far less than the average increase in lifetime earnings. That would be a whole new kind of smart money.
Kirabo Jackson is assistant professor of labor economics at Cornell University.