Many of us, if we’re lucky, can fondly recall a time in elementary school when our parents proudly posted one of our A papers on the refrigerator door. Maybe it was a spelling test or set of multiplication problems—no matter. What mattered, though, was the outstanding achievement that mom, dad, and kid believed was embodied in that A, and the pride and satisfaction that we felt in seeing it every time we opened the fridge for a sandwich.
Back then, we didn’t question whether that A was actually earned. We assumed that we had mastered whatever was being graded and our hard work had paid off.
Unfortunately, it’s getting harder and harder to assume that an A still represents excellence. And that’s a real problem.
Here at Fordham, we’ve had a longstanding interest in helping to ensure that parents know the truth about how their kids are doing in school. More than a decade ago, we published The Proficiency Illusion—a groundbreaking study that found that levels of reading and math “proficiency” varied wildly from state to state because of where states set their “cut scores.” What it took to pass a state test ranged from reading or doing math at the 6th percentile nationally all the way up to the 77th.
That’s why, when the Common Core Standards and related consortia tests came onto the scene, we saw them as invitations to increased rigor, transparency, and truth-telling. Finally, parents would receive accurate, useful information about their children’s academic challenges and whether they were on track for college and career. The news might not always be positive. But knowledge is power, right?
Except the message has yet to hit home. The tests are indeed tougher than ever, with Education Next and others finding that most states now set the proficiency bar at much higher levels than before. Yet a 2018 survey published by Learning Heroes, a parent information group, found that 90 percent of parents believe their child is performing at or above grade level; two out of three believe their child is “above average” in school. Eighty-five percent say their kid is on track for academic success—and just 8 percent believe that their child is performing below average.
That’s a lot of misinformed parents, given that one-third of U.S. teenagers, at most, leave high school ready for credit-bearing college courses.
One of us recently mused that dismal state test scores may fail to resonate with parents because they conflict with what parents see coming home from school. Who knows their kids better: their teachers or a faceless test provider? The teachers, of course. But what if the grades that teachers assign don’t reflect the state’s high standards? What if practically everyone is getting As and Bs from the teacher—but parents don’t know that?
That was the impetus for Fordham’s newest study, Grade Inflation in High Schools (2005–2016). We wanted to know how easy or hard it is today to get a good grade in high school and whether that has changed over time. We can’t develop solutions until we’ve accurately identified the problem. And in this case, we suspect that one reason for stalled student achievement across the land is that historically trusted grades are telling a vastly different story than other academic measures.
So we teamed up with American University economist Seth Gershenson, who is keenly interested in this topic, and whose prior research on the role of teacher expectations in student outcomes made him a perfect fit to conduct the research.
The study asks three questions:
1. How frequent and how large are discrepancies between student grades and test scores? Do they vary by school demographics?
2. To what extent do high school test scores, course grades, attendance, and cumulative GPAs align with student performance on college entrance exams?
3. How, if at all, have the nature of such discrepancies and the difficulty of receiving an A changed in recent years?
Although other studies have addressed similar questions, this is the first to use official transcript data and standardized test scores for the entire population of eligible students in a state. By including such a broad set of students, rather than a subset, Dr. Gershenson’s analysis breaks new ground.
He focused on student-level data for all public school students taking Algebra 1 in North Carolina from the 2004–05 school year to 2015–16. He had access to course transcripts, end-of-course (EOC) exam scores, and ACT scores. His primary analysis compared students’ course grades with their scores on EOC exams to evaluate the extent of grade inflation. The study yielded three key findings, all of which should cause concern about present-day grading practices.
1. While many students are awarded good grades, few earn top marks on the statewide end-of-course exams for those classes.
2. Algebra 1 end-of-course exam scores predict math ACT scores much better than course grades.
3. From 2005 to 2016, more grade inflation occurred in schools attended by more affluent youngsters than in those attended by the less affluent.
These findings carry several implications.
First, course grades and test scores each have their place. Although EOC scores predict math ACT scores better than course grades do, that doesn’t mean tests are reliable and grades aren’t. Much research shows that students’ cumulative high school GPAs—typically an average of grades in twenty-five or more courses—are highly correlated with later academic outcomes.
Grades and test scores each provide valuable information because they measure different aspects of student performance and potential. Grades reflect not only students’ mastery of content and skills, but also their classroom behavior, participation, and effort. And test scores tend to be informative measures of general cognitive ability. We need both.
Yet parents don’t appear to value both equally. When there’s a big difference between what the two measures communicate, the Learning Heroes data indicate that parents are apt to take the test scores less seriously—especially if the scores are low. “My child doesn’t test well,” goes the refrain. In our view, this is a form of confirmation bias that’s breeding greater complacency on the part of students and parents alike. Why should youngsters invest more time to learn something if an A or B grade says they already know it? Why should parents work to help their children catch up if grades don’t signal that they’re behind? That’s particularly true when parents are blissfully unaware of how widespread those good grades are. The sad fact is that some will realize that their child—along with many others—is marching off a cliff with regard to college readiness only after it’s too late.
Second, external exams are valuable sources of information, and they can help educate teachers about high expectations. Dr. Gershenson suggests in the study that one way to combat grade inflation is content-based external tests like EOC exams. An external measure that is not developed or graded by the classroom teacher can help preserve high standards while also serving as an “audit” of course grades and progress. That’s how EOCs were used in the current analysis, and it’s the role that Advanced Placement exams play for many high school students.
But what if teachers don’t truly know what high standards look like? “Often teachers—and principals—have a definition of excellence that defaults to the best work produced in their classroom or school; if the ‘best’ work is not great, expectations for all their students inevitably shift downwards,” Success Academy’s Eva Moskowitz recently wrote. “Ultimately, holding students to a high bar requires a zealous and persistent commitment by everyone—from superintendents, principals, and parents, to assistant teachers and office staff…[who must] give students the realistic feedback and dedicated support they need to meet the ambitious expectations of which we know they are capable.”
Hear, hear! We couldn’t have said it better. The question is: Are we ready to take this charge seriously?
— Amber M. Northern and Mike Petrilli
Amber Northern is senior vice president for research at the Thomas B. Fordham Institute. Mike Petrilli is president of the Thomas B. Fordham Institute, research fellow at Stanford University’s Hoover Institution, and executive editor of Education Next.
This post originally appeared in Flypaper.