Our team at NewSchools recently released a report titled “Using Expanded Measures of Student Success for School Improvement.” In it, we share some on-the-ground lessons from innovative public schools in our portfolio. Some aspects of it caused Checker Finn angst, prompting what he dubbed a “crotchety old guy” blog post. As usual, crotchety or not, his observations are smart and useful. In the spirit of dialogue, I wanted to respond to some of his points.
In 2015, we launched an innovative schools portfolio, looking for teams starting new district and charter schools designed to help every student build a strong academic foundation along with the mindsets, habits, and skills correlated with success in young adulthood. Back then, we weren’t following the crowd, because there wasn’t one yet. But we thought conditions were ripe for a larger trend to develop, and it has in a big way. So much so that we now use “fever pitch” to describe the current level of interest among big funders, policymakers, and think tanks. The term is meant as a caution, not a celebration.
If all the talk and attention doesn’t lead to more clarity about how schools can support both academic and social-emotional learning, it won’t matter much. Given our role as a funder of many new schools (more than one hundred since 2015), we’re in a position to roll up our sleeves with them to try out various approaches, learn quickly, jettison what doesn’t work, and share lessons broadly. So far we’ve published two pieces about our work in this area, available here and here.
Checker flagged three important issues: how non-academic indicators are measured, whether schools can help improve them, and whether they should be part of accountability systems. Below are some thoughts on each:
1. In terms of measurement, self-report surveys have limitations, and we acknowledge them. Even so, it’s possible to assess whether they are valid instruments for specific SEL and culture measures. We selected items that had been validated in similar contexts, most of which were already in use with hundreds of thousands of students. Every year, we re-validate them in our specific implementation context. Over the long haul, though, we need stronger tools for this purpose, ideally ones that generate information about how students are developing based on what they do, not solely on what they say. We’ve conducted small pilots of a couple of such tools, but they aren’t yet mature enough to roll out further. Like Checker and Rick Hess, we want to see more R&D capital brought to bear on developing more sophisticated approaches to this measurement challenge. In the meantime, we’ll continue to use the best tools we can find and be transparent about what they tell us and what they don’t.
2. Can educators help students develop social-emotional competencies? In other words, are those competencies malleable in a school context? The short answer: some are, some are not. As we wrote in our recent report, we asked schools which non-academic indicators they cared about most. They collectively named around sixty: a mix of cognitive skills, habits, values, traits, and SEL competencies. To select a smaller number they could track together, we applied a “3M” filter, asking whether each indicator was meaningful (correlated with academic and other longer-term outcomes), malleable in a school context, and measurable in a valid and reliable way in a school setting. The seven social-emotional competencies we selected met a threshold of evidence on all three criteria. Even so, we don’t know nearly as much as we would like about which interventions and practices foster improvement for which students. Students in our schools take the survey at the beginning of the school year and again near the end, so we can see whether there is growth over time. Throughout the year, teachers receive advice and coaching on relevant research-based practices and interventions they might try. One of our goals is to understand more every year about the most effective ways for educators to support student growth on these dimensions.
3. Should these kinds of measures be incorporated into existing accountability systems? Our answer is a resounding no. The lack of consensus about which non-academic indicators deserve attention, the weak knowledge base about effective practice, and the nascent state of measures and instruments make it a bad idea from a technical perspective. There are other sound reasons to resist calls to use them for accountability, even as we continue learning from them. While feverish bandwagons can indeed lead to problems, there is also something to like about the grassroots interest from parents and educators. Inserting these indicators into accountability frameworks would almost certainly shift them into a compliance paradigm, potentially squelching that positive energy and organic demand. Finally, the best and most enduring decisions about which mindsets and habits to value will likely be made in local communities rather than in distant education agencies. As schools and their communities wrestle with this topic, we hope our work (and that of many others) generates practical guidance about valid approaches, practices, and tools they can use to implement their decisions.
Our venture philanthropy model means we provide significant supports in addition to grants. When we set out on this path in 2015, we decided we couldn’t wait for others to build perfect solutions our schools could use to develop and measure their students’ non-academic growth. Instead, we’ve selected the best options available, knowing that what we learn might lead us to commit more deeply to some measures and tools and to discard others. As we continue to learn and share, we’re eager for more dialogue and debate about the best way to support an expanded definition of student success.
Stacey Childress is Chief Executive Officer of NewSchools Venture Fund.
This post originally appeared in Flypaper.