Connecting to Practice

How we can put education research to work



By Thomas J. Kane

Spring 2016 / Vol. 16, No. 2

On the EdNext Podcast, Tom Kane talks with Marty West about why education research is not having an impact on education policy.

This article is part of a new Education Next series commemorating the 50th anniversary of James S. Coleman’s groundbreaking report, “Equality of Educational Opportunity.” The full series will appear in the Spring 2016 issue of Education Next.

In the half century since James Coleman and his colleagues first documented racial gaps in student achievement, education researchers have done little to help close those gaps. Often, it seems we are content to recapitulate Coleman’s findings. Every two years, the National Assessment of Educational Progress (a misnomer, as it turns out) reports the same disparities in achievement by race and ethnicity. We have debated endlessly and fruitlessly in our seminar rooms and academic journals about the effects of poverty, neighborhoods, and schools on these disparities. Meanwhile, the labor market metes out increasingly harsh punishments to each new cohort of students to emerge from our schools underprepared.

At the dawn of the War on Poverty, it was necessary for Coleman and his colleagues to document and describe the racial gaps in achievement they were intending to address. Five decades later, more description is unnecessary. The research community must find new ways to support state and local leaders as they seek solutions.

If the central purpose of education research is to identify solutions and provide options for policymakers and practitioners, one would have to characterize the past five decades as a near-complete failure. There is little consensus among policymakers and practitioners on the effectiveness of virtually any type of educational intervention. We have learned little about the most basic questions, such as how best to train or develop teachers. Even mundane decisions such as textbook purchases are rarely informed by evidence, despite the fact that the National Science Foundation (NSF) and the Institute of Education Sciences (IES) have funded curriculum development and efficacy studies for years.

The 50th anniversary of the Coleman Report presents an opportunity to reflect on our collective failure and to think again about how we organize and fund education studies in the United States. In other fields, research has paved the way for innovation and improvement. In pharmaceuticals and medicine, for instance, it has netted us better health outcomes and increased longevity. Education research has produced no such progress. Why not?

Even mundane decisions such as textbook purchases are rarely informed by evidence, even though the federal government has funded curriculum development and efficacy studies for years.

In education, the medical research model—using federal dollars to build a knowledge base within a community of experts—has manifestly failed. The What Works Clearinghouse (a federally funded site for reviewing and summarizing education studies) is essentially a warehouse with no distribution system. The field of education lacks any infrastructure—analogous to the Food and Drug Administration or professional organizations recommending standards of care—for translating that knowledge into local action. In the United States, most consequential decisions in education are made at the state and local level, where leaders have little or no connection to expert knowledge. The top priority of IES and NSF must be to build connections between scholarship and decisionmaking on the ground.

Better yet, the federal research effort should find ways to embed evidence gathering into the daily work of school districts and state agencies. If the goal is to improve outcomes for children, we must support local leaders in developing the habit of piloting and evaluating their initiatives before rolling them out broadly. No third-party study, no matter how well executed, can be as convincing as a school’s own data in persuading a leader to change course. Local leaders must see the effectiveness (or ineffectiveness) of their own initiatives, as reflected in the achievement of their own students.

Instead of funding the interests and priorities of the academic community, the federal government needs to shift its focus toward enabling researchers to support a culture of evidence discovery within school agencies.

An Evolving Understanding of Causality

In enacting the Civil Rights Act of 1964, Congress mandated a national study on racial disparities in educational opportunity—giving Coleman and his colleagues two years to produce a report. The tight deadline allowed no time to collect baseline achievement data and follow a cohort of children. Moreover, the team had neither the time nor the political mandate to assign groups of students to specific interventions in order to more thoroughly identify causal effects. Their only recourse was to use cross-sectional survey data to try to identify the mechanisms by which achievement gaps are produced and, presumably, might be reversed.

Given the time constraints, Coleman used the proportion of variance in student achievement associated with various educational inputs—such as schools, teacher characteristics, student-reported parental characteristics, and peer characteristics—as a type of divining rod for identifying promising targets for intervention. His research strategy, as applied to school effects, is summarized in the following passage from the report:

The question of first and most immediate importance to this survey in the study of school effects is how much variation exists between the achievement of students in one school and those of students in another. For if there were no variation between schools in pupils’ achievement, it would be fruitless to search for effects of different kinds of schools upon achievement [emphasis added].

In other words, Coleman’s strategy was to study how much the achievement of African American and white students varied depending on the school they attended, and then use that as an indicator of the potential role of schools in closing the gap.
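
Coleman’s statistic can be made concrete with a short calculation. The sketch below (in Python, a modern reconstruction rather than Coleman’s actual procedure, with hypothetical column names) computes the share of total test-score variance that lies between schools:

    import pandas as pd

    def between_school_share(df: pd.DataFrame) -> float:
        """Fraction of total score variance accounted for by school means."""
        # Assign each student his or her school's mean score, then compare
        # the variance of those means with the total variance of scores.
        school_means = df.groupby("school_id")["score"].transform("mean")
        return school_means.var() / df["score"].var()

    # On Coleman's logic, a share near zero would make it "fruitless to
    # search for effects of different kinds of schools upon achievement."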

This strategy had at least three flaws: First, if those with stronger educational supports at home and in society were concentrated in certain schools, the approach was bound to overstate the import of some factors and understate it for others. It may not have been the schools, but the students and social conditions surrounding them that differed.

Second, even if the reported variance did reflect the causal effects of schools, the approach confuses prevalence with efficacy. Suppose there existed a very rare but extraordinarily successful school design. Schools would still be found to account for little of the variance in student performance, and we would overlook the evidence of schools as a lever for change. Given that African Americans (in northern cities and in the South) were just emerging from centuries of discrimination, it is unlikely that any 1960s school systems would have invested in school models capable of closing the achievement gap.
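
A back-of-the-envelope simulation makes this flaw concrete (the numbers here are hypothetical, chosen purely for illustration): even if 1 percent of schools adopted a design that raised achievement by a full standard deviation, schools would still account for only about 2 percent of the variance in scores.

    import numpy as np

    rng = np.random.default_rng(0)
    n_schools, per_school = 1_000, 100
    effect_sd = 1.0       # hypothetical: a rare design worth a full SD
    prevalence = 0.01     # hypothetical: 1 percent of schools adopt it

    school_effect = np.where(rng.random(n_schools) < prevalence, effect_sd, 0.0)
    scores = school_effect[:, None] + rng.normal(size=(n_schools, per_school))

    between = scores.mean(axis=1).var()     # variance of school means
    print(f"between-school share: {between / scores.var():.3f}")  # roughly 0.02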

Third, the percentage-of-variance approach makes no allowance for “bang for the buck” or return on investment. Different interventions—such as new curricula or class-size reductions—have very different costs. As a result, within any of the sources of variance that Coleman studied, there may have been interventions that would have yielded social benefits of high value relative to their costs.
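
A simple comparison shows why return on investment matters. The figures below are entirely hypothetical (neither the costs nor the effects come from the report or any cited study); they illustrate how a cheap intervention with a small raw effect can dominate an expensive one on a per-dollar basis:

    # Hypothetical costs and effects, for illustration only.
    interventions = {
        "class-size reduction": {"cost_per_student": 10_000, "effect_sd": 0.20},
        "new curriculum":       {"cost_per_student":    100, "effect_sd": 0.05},
    }
    for name, x in interventions.items():
        roi = x["effect_sd"] / (x["cost_per_student"] / 1_000)
        print(f"{name}: {roi:.2f} SD per $1,000 per student")
    # The curriculum "wins" per dollar (0.50 vs. 0.02 SD per $1,000) even
    # though its raw effect, and its share of variance, is far smaller.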

Beginning with the leadership of Russ Whitehurst, IES has begun shifting its grants and contracts toward those that evaluate interventions with random-assignment and other quasi-experimental designs.

The fact that some of Coleman’s inferences have apparently been borne out does not mean that his analysis was ever a valid guide for action. (Even a coin flip will occasionally yield the right prediction.) For instance, because there was greater between-school variance in outcomes for African American students than for white students (especially in the South), Coleman concluded that black students would be more responsive to school differences. At first glance, Coleman’s original interpretation seems prescient: a number of studies—such as the Tennessee STAR experiment—have found impacts to be larger for African American students. However, such findings do not validate his method of inference. The between-school differences in outcomes Coleman saw might just as well have been due to other factors, such as varying degrees of discrimination in the rural South.

While there were instances where Coleman “got it right,” in other cases his percentage-of-variance metric pointed in the wrong direction. For example, in the 1966 report, the between-school variance in student test scores was larger for verbal skills and reading comprehension than for math. Coleman’s reasoning would have implied that verbal skills and reading would be more susceptible to school-based interventions than math would. However, over the past 50 years, studies have often shown just the opposite to be true. Interventions have had stronger effects on math achievement than on reading comprehension.

It was not until 2002, 36 years after the Coleman Report, that the education research enterprise finally began to adopt higher standards for inferring the causal effects of interventions. Under the leadership of Russ Whitehurst, the first director of the Institute of Education Sciences, IES began shifting its grants and contracts away from correlational studies like Coleman’s and toward those that evaluate interventions with random-assignment and other quasi-experimental designs.

As long as that transition toward intervention studies continues, perhaps it is just a matter of time before effective interventions are found and disseminated. However, I am not so confident. The past 14 years have not produced a discernible impact on decisionmaking in states and school districts. Can those who argue for staying the course identify instances where a school district leader discontinued a program or policy because research had shown it to be ineffective, or adopted a new program or policy based on a report in the What Works Clearinghouse? If such examples exist, they are rare.

Federal Funds for Education Research

The Coleman Report is often described as “the largest and most important educational study ever conducted.” In fact, the 1966 study cost just $1.5 million, the equivalent of $11 million today. In 2015, the combined annual budget for the Institute of Education Sciences ($578 million) and the education research conducted by the National Science Foundation ($220 million) was equivalent to the cost of 70 Coleman Reports. Much more of that budget should be used to connect scholarship with practice and to support a culture of evidence gathering within school districts and state agencies.

As illustrated in Figure 1, the budget for the Institute of Education Sciences is allocated across four national centers. The largest is the National Center for Education Statistics (NCES), with a total annual budget of $278 million. Roughly half of that amount ($140 million) pays for the National Assessment of Educational Progress, which provides a snapshot of achievement nationally and by state and urban district. Most of the remainder of the NCES budget goes to longitudinal surveys (such as the Early Childhood Longitudinal Study of the kindergarten class of 2011 and the High School Longitudinal Study of 2009) and cross-sectional surveys (such as the Schools and Staffing Survey and National Household Education Survey). Those surveys are designed and used by education researchers, primarily for correlational studies like Coleman’s.

The National Center for Education Research (NCER) is the second-largest center, with an annual budget of $180 million. NCER solicits proposals from researchers at universities and other organizations. In 2015, NCER received 700 applications and made approximately 120 grants. Proposals are evaluated by independent scholars in a competitive, peer-review process. NCER also funds postdoctoral and predoctoral training programs for education researchers. Given its review process, NCER’s funding priorities tend to reflect the interests of the academic community. The National Center for Special Education Research (NCSER), analogous to NCER, funds studies on special education through solicited grant programs.

The National Center for Education Evaluation and Regional Assistance (NCEE) manages the Education Resources Information Center (ERIC)—an online library of research and information—and the What Works Clearinghouse. NCEE also funds evaluation studies of federal initiatives and specific interventions, such as professional development efforts. In principle, NCEE could fund evaluation studies for any intervention that states or districts might use federal funds to purchase.

The Disconnect between Research and Decisionmaking

While the federal government funds the lion’s share of education research, it is state and local governments that make most of the consequential decisions on such matters as curricula, teacher preparation, teacher training, and accountability. Unfortunately, the disconnect between the source of funding and those who could make practical use of the findings means that the timelines of educational evaluations rarely align with the information needs of the decisionmakers (for instance, the typical evaluation funded by NCEE requires six years to complete). It also means that researchers, rather than policymakers and practitioners, are posing the questions, which are typically driven by debates within the academic disciplines rather than the considerations of educators. This is especially true at NCER, most of whose budget is devoted to funding proposals submitted and reviewed by researchers. At NCES as well, the survey data collection is guided by the interests of the academic community. (The NAEP, in contrast, is used by policymakers and researchers alike.)

As mentioned earlier, fields such as medical and pharmaceutical research have mechanisms in place for connecting evidence with on-the-ground decisionmaking. For instance, the Food and Drug Administration uses the evidence from clinical trials to regulate the availability of pharmaceuticals. And professional organizations draw from experts’ assessment of the latest findings to set standards of care in the various medical specialties.

To be sure, this system is not perfect. Doctors regularly prescribe medications for “off-label” uses, and it often takes many years for the latest standards of care to be adopted throughout the medical profession. Still, the FDA and the medical organizations do provide a means for federally funded studies to influence action on the ground.

Education lacks such mechanisms. There is no “FDA” for education, and there never will be. (In fact, the 2015 reauthorization of the Elementary and Secondary Education Act reduces the role of the federal government and returns power to states and districts.) Professional organizations of teachers, principals, and superintendents focus on collective bargaining and advocacy, not on setting evidence-based professional standards for educators.

In contrast to the field of education, the Food and Drug Administration and medical organizations provide a means for federally funded research to influence action on the ground.

By investing in a central body of evidence and building a network of experts across a range of research topics, such as math or reading instruction, IES and NSF are mimicking the medical model. However, the education research enterprise has no infrastructure for translating that expert opinion into local practice.

To be fair, IES under its latest director, John Easton, was aware of the disconnect between scholarship, policy, and practice and attempted to forge connections. NCES, NCER, and NCEE all provide some amount of support for state and local efforts, as Figure 1 highlights. For instance, the majority of NCEE’s budget ($54 million out of a total of $66 million) is used to fund 10 regional education labs around the country. Each of the labs has a governing board that includes representatives from state education agencies, directors of research and evaluation from local school districts, school superintendents, and school board members. However, the labs largely operate outside of the day-to-day workings of state and district agencies. New projects are proposed by the research firms holding the contracts and must be approved in a peer review process. For the most part, the labs are not building the capacity of districts and state agencies to gather evidence and measure impacts but are launching research projects that are disconnected from decisionmaking.

While NCEE is at least trying to serve the research needs of state and local government, less than 13 percent of the NCES and the NCER budgets is allocated to state and local support. NCES does oversee the State Longitudinal Data Systems (SLDS) grant program. With funding from that program, state agencies have been assembling data on students, teachers, and schools, and linking them over time, making it possible to measure growth in achievement. Those data systems—developed only in the past decade—will be vital to any future effort by states and districts to evaluate programs and initiatives. However, the annual budget for the SLDS program is just $35 million of the total NCES budget of $278 million. For the moment, the state longitudinal data systems are underused, serving primarily to populate school report-card data for accountability compliance.

NCER sets aside roughly $24 million per year for the program titled Evaluation of State and Local Programs and Policies, under which researchers can propose to partner with a state or local agency to evaluate an agency initiative. Such efforts are the kind that IES should be supporting more broadly. However, because the program is small and it is scholars who know the NCER application process, such projects tend to be initiated by researchers rather than by the agencies themselves, and it is not clear how much buy-in the projects have from agency leadership.

A New Emphasis on State and Local Partnerships

IES must redirect its efforts away from funding the interests and priorities of the research community and toward building an evidence-based culture within districts and state agencies. To do so, IES needs to create tighter connections between academics and decisionmakers at the state and local levels. The objective should be to make it faster and cheaper (and, therefore, much more common) for state and local leaders to pilot and evaluate their initiatives before rolling them out broadly.

Taking a cue from NCER’s program for partnerships with state and local policymakers, IES should offer grants for researchers to evaluate pilot programs in collaboration with such partners. But to ensure the buy-in of leadership, state and local governments should be asked to shoulder a small portion (say, 15 percent) of the costs. In addition, one of the criteria for evaluating proposals should be the demonstrated commitment of other districts and state agencies to participate in steering committee meetings. Such representation would serve two purposes: it would increase the likelihood that promising programs could be generalized to other districts and states, and it would lower the likelihood that negative results would be buried by the sponsoring agency. As the quality and number of such proposals increased, NCER could reallocate its research funding toward more partnerships of this kind.

Especially now that the federal government is returning power to states under the Every Student Succeeds Act, signed into law by President Obama on December 10, 2015, federal research efforts should be refocused to more effectively help states and districts develop and test their initiatives.

However, if the goal is to reach 50 states and thousands of school districts, our current model of evaluation is too costly and too slow. If it requires six years and $12 million to evaluate an intervention, IES will run out of money long before the field runs out of solutions to test. We need a different model, one that relies less on one-time, customized analyses. For instance, universities and research contractors should be asked to submit proposals for helping state agencies and school districts not just in evaluating a specific program but in building their capacity to pilot and evaluate initiatives on an ongoing basis.

The state longitudinal databases give the education sector a resource that has no counterpart in the medical and pharmaceutical industries. Beginning with the No Child Left Behind Act of 2001, U.S. students in grades 3 through 8 have been tested once per year in math and English, and that requirement will continue under the 2015 reauthorization bill, the Every Student Succeeds Act. Once a set of teachers or students is chosen for an intervention, the state databases could be used to match them with a group of students and teachers who have similar prior achievement and demographic characteristics but do not receive the intervention. By monitoring the subsequent achievement of the two groups, states and districts could gauge program impacts more quickly and at lower cost. The most promising interventions could later be confirmed with randomized field trials. However, recent studies using randomized admission lotteries at charter schools and the random assignment of teachers have suggested that simple, low-cost methods, when they control for students’ prior achievement and characteristics, can yield estimates of teacher and school effects that are similar to what one observes with a randomized field trial. Perhaps nonexperimental methods will yield unbiased estimates for other interventions as well.
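
As a concrete, deliberately simplified illustration of that matching approach, the sketch below (in Python, with invented column names; real state data systems differ) pairs each student in a pilot with a comparison student who has the same demographic markers and the closest prior score:

    import pandas as pd

    def matched_impact(df: pd.DataFrame) -> float:
        """Naive pilot-impact estimate from a one-to-one matched comparison."""
        treated = df[df["in_pilot"] == 1]
        pool = df[df["in_pilot"] == 0]
        gaps = []
        for _, s in treated.iterrows():
            # Exact match on grade and subsidized-lunch status (stand-ins for
            # the demographic characteristics mentioned in the text) ...
            candidates = pool[(pool["grade"] == s["grade"]) &
                              (pool["frpl"] == s["frpl"])]
            if candidates.empty:
                continue
            # ... and nearest neighbor on prior achievement.
            distance = (candidates["prior_score"] - s["prior_score"]).abs()
            match = candidates.loc[distance.idxmin()]
            gaps.append(s["post_score"] - match["post_score"])
        return sum(gaps) / len(gaps) if gaps else float("nan")

A real analysis would need to handle reused comparison students and clustered standard errors, and, as argued above, would treat the result as a quick screen to be confirmed later by a randomized trial.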

In addition, IES should invite universities and research firms to submit proposals for convening state legislators, school board members, and other local stakeholders to learn about existing data on effective and ineffective programs in a particular area, such as preschool education or teacher preparation. IES should experiment with a range of strategies to engage with state and local agencies, and as effective ones are found, more of its budget should be allocated to such efforts.

Although the federal government provides the lion’s share of research funding in education, state and local governments make a crucial contribution. Until recently, the primary costs of many education studies—including the Coleman Report—derived from measuring student outcomes and, in the case of longitudinal studies, hiring survey research firms to follow students and teachers over time. With states now investing $1.7 billion annually in their assessment programs, much of that cost is borne by states and districts.

Reason for Optimism

Fifty years after the Coleman Report, racial gaps in achievement remain shamefully large. Part of the blame rests with the research community for its failure to connect with state and local decisionmakers. Especially now that the federal government is returning power to states under the Every Student Succeeds Act, federal efforts should be refocused to more effectively help states and districts develop and test their initiatives. The stockpiles of data on student achievement accumulating within state agencies and districts offer a new opportunity to engage with decisionmakers. Local leaders are more likely to act based on findings from their own data than on any third-party report they may find in the What Works Clearinghouse. If the research community were to combine IES’s post-2002 emphasis on evaluating interventions with more creative strategies for engaging state and local decisionmakers, U.S. education could begin to make more significant progress.

There is reason for optimism. Indeed, the Coleman Report’s conclusion that schools had little hope of closing the achievement gap has been proven unfounded. In recent years, several studies using randomized admission lotteries have found large and persistent impacts on student achievement, even for middle school and high school students. For instance, students admitted by lottery to a group of charter schools in Boston increased their math achievement on the state’s standardized test by 0.25 standard deviations per year in middle school and high school. Large impacts were also observed on the state’s English test: 0.14 standard deviations per year in middle school and 0.27 standard deviations per year in high school. Similarly, a Chicago study of an intensive math-tutoring intervention with low-income minority students in 9th and 10th grades suggested impacts of 0.19 to 0.31 standard deviations—closing a quarter to a third of the achievement gap in one year. Now that we know that some school-based interventions can shrink the achievement gap, we need scholars to collaborate with school districts around the country to develop, test, and scale up the promising ones. Only then will we succeed in closing the gaps that Coleman documented 50 years ago.
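
The cited effect sizes can be put in perspective with simple arithmetic. Note that the size of the gap itself is an assumption here (roughly 0.9 standard deviations, in line with commonly reported black-white test-score gaps), not a figure from the studies above:

    gap_sd = 0.9  # assumed size of the achievement gap, for illustration
    effects = {
        "Boston charters, math (per year)": 0.25,
        "Chicago tutoring, low estimate":   0.19,
        "Chicago tutoring, high estimate":  0.31,
    }
    for label, e in effects.items():
        print(f"{label}: closes {e / gap_sd:.0%} of the gap")
    # With a smaller assumed gap, the same effects close a larger fraction;
    # the Chicago study's own benchmark put it at a quarter to a third.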

Thomas J. Kane is the Walter H. Gale Professor of Education and faculty director of the Center for Education Policy Research at Harvard University. 




Comment on this article
  • Peter Meyer says:

    “In other fields, research has paved the way for innovation and improvement. In pharmaceuticals and medicine, for instance, it has netted us better health outcomes and increased longevity. Education research has produced no such progress. Why not?” I hate to say it, but the answer is pretty obvious: these other fields are not run by politicians, bureaucrats, and unions. When schools are allowed to put children (and their educations) first, all manner of good research can be — and is — put into practice.

  • Jane Jackson says:

    Pre- and posttests of research-based concept inventories are MUCH better than standardized tests, to determine the effectiveness of an intervention in high school. The best-known example is in physics, where the Force Concept Inventory has measured the effectiveness of teaching methods including Harvard’s Peer Instruction, high school Modeling Instruction developed at Arizona State University, Workshop Physics from Dickinson College, and ISLE from Rutgers.

  • H.D. Horvath says:

    Respectfully, while I agree that a better research model/process makes sense, I don’t think it will make the difference. My 24 years spent working with schools to improve achievement have convinced me the evidence is clear and plentiful. I would argue that it’s manifestly evident in the good work of schools like Success Academy, KIPP, AchievementFirst and a number of others who consistently improve student academic success year after year in the most challenging neighborhoods in schools attended by the country’s most needy students. What’s missing is the “will”. And it’s missing in part because helping students who are living in poverty and often three academic years behind is hard. And hard isn’t for everybody. One needs to look no further than the local Planet Fitness to be reminded that good intentions, or in this case, a little more evidence, don’t and won’t translate into better results. That takes hard work and commitment effectively implementing the proven approaches and models already available. Again, this is clearly not for everybody and that includes students, parents, and the professionals in our school systems nationwide. But I sincerely believe it’s what’s necessary, now more than ever. And to succeed it will take more people like Dr. Kane who’ve made it their life’s work to help students everywhere by attempting to improve on a largely ineffective system and make it better.

  • Peter Meyer says:

    Interesting corollary here: http://www.nytimes.com/2016/02/23/science/mark-willenbring-addiction-substance-abuse-treatment.html

    “When we publish studies in our field, nobody who is running these [addiction] centers reads them. If it counters what they already know, they discount them,” he continued. “In the addiction world, the knee-jerk response is typically, ‘We know what to do.’ And when that doesn’t work, we blame patients if they fail.”

    And I would suggest to HD Horvath that we need to find incentives to will. Will power is a skill; it can be learned and practiced. The answer is somewhere between the rod and cane and a flood of blue ribbons.

  • H.D. Horvath says:

    I agree with Mr. Meyer and would offer a slight change to the recommendation: “When schools are (effectively incentivized) to put children and their educations first, all manner of good research can be — and is — put into practice”. I sincerely long for that day.

  • Anne Clark says:

    I see no appetite for evidence in the math education community. There is only one study that the WWC has found that meets its standards – and with reservations at that – for Everyday Math, one of the most widely used math textbook series. And there are no studies that meet WWC standards for Singapore Math.

    Why? Because our Colleges of Education on the whole preach an alternative reality where each teacher knows best and should have autonomy in their classroom.

    So I ask, please, if the panelists for tomorrow’s discussion know of any state which is breaking this paradigm, please discuss how they are doing it, and how we can get more SEAs to step up. Thanks!

  • Anne Clark says:

    After listening to the podcast, I think I may be able to answer my own question above.

    In NJ, there is a new initiative led by Special Assistant to the Commissioner Bari Erlichson to help the neediest Districts use their PARCC data to identify gaps and change their classroom practices. If she layers on top of her analysis which “interventions” (i.e., textbook series for math in particular) each school is using, then her work may produce exactly the kind of localized “research” that you are calling for.

    Let’s hope this filters back to the Colleges of Education so they can participate or drive more such initiatives, and better prepare teachers to be consumers of data.

  • Sandra Stotsky says:

    Since Massachusetts is widely regarded as the one state that showed increases in academic achievement for all demographic groups in several subject areas lasting for over a decade (on NAEP and on TIMSS), why no studies of what was done there based on information from those who were in a position to know?

    Sandra Stotsky

  • Jeff Valentine says:

    At the crux of this article is concern about the disconnect between education research and practice. As another commenter noted, this is not unique to the education world. Tom sees lots of blame to go around. On the supply side, he questions the relevance and timeliness of academic research. On the demand side, he (less judgmentally) perceives a policy/administrative preference for local evidence over evidence gathered elsewhere. His solution is to invest more in building connections between academic researchers and local decision makers (surely this is a good thing), with the main goal of improving local evaluations. This is where Tom loses me.

    It is really hard for me to see how a school, or even a school district, can use experimentation to help them make decisions, and Tom is silent about the challenges here. Let me articulate a couple of these. First, it is not at all clear to me that most administrators care about evidence, and among those who do, a solid portion prefer to rely on teacher/parent/student reports about what seems to be “working” (i.e., evidence only in the most general sense). The trouble of course is that gathering qualitative evidence is much, much harder than it seems. Of course, the related point is that doing good experimental research is hard. Here I believe Tom greatly understates the scope of the bias problem – yes well measured pretests help a lot, but some residual bias is expected, and this bias may be large relative to the observed effects (which will likely be small). And finally, if inferential statistics are used in the decision making process statistical power will often be a problem.

    In terms of possible solutions, a Bayesian approach to data analysis might help, though it is hard to see that happening anytime soon. It would also help to have better and more micro-level data (e.g., short skill assessments multiple times per week) as opposed to relying almost solely on high-stakes achievement tests to measure progress. Finally, rather than the dominant between-group thinking (i.e., some students receive an intervention, others receive what they would have received in the absence of the intervention), it might help to move to an interrupted time series approach. Doing so would require a lot of well-measured data, and a lot of capacity building (my guess is that only a tiny percentage of individuals with a doctoral degree in education, psychology, and sociology receive anything other than a cursory introduction to these methods in their coursework; at least, this was my experience). But this is just about the only hope I see for using locally generated research to address problems.

    Finally, I would be remiss not to mention that sensibly synthesizing multiple studies is likely a better approach to figuring out what will work in any given context than either (a) using a single study from someplace else or (b) trying to experiment in the local setting. The issue is that every study is idiosyncratic in multiple ways, and as the number of studies on a particular question increases, some of these will cancel each other out. What remains is a more robust, and more generalizable, evidence base than that afforded by any one study in isolation.

  • Brian Flay says:

    I think Tom identified the core issue in this one sentence: “Professional organizations of teachers, principals, and superintendents focus on collective bargaining and advocacy, not on setting evidence-based professional standards for educators.” Professional organizations in medicine are more concerned with keeping up with the latest science (including, but not limited to, that from the FDA) in order to maximize patient health. But there are multiple other reasons for the influence that the FDA has on medical practice. First, medical training is provided in university-based medical schools with rich contexts of ongoing research. Second, licensing exams after the first two years of medical school ensure that all future physicians understand the basics of the science taught therein (see https://www.petersons.com/graduate-schools/synopsis-medical-school-requirements.aspx). Third, medical training takes place in the context of medical practice in the form of rotations. Fourth, medical school graduates still have to obtain more practice in the form of two- to four-year residencies. Finally, they then pass the board exams to get certified in the state in which they want to practice; many physicians renew (recertify) one or more times during their career to ensure that they keep up to date with the latest standards (see http://www.kaptest.com/blog/med-school-pulse/2015/06/02/life-medical-school-career-medicine/). Ongoing training opportunities abound and are required in the form of CE (continuing education) credits.

    Thus, in contrast to graduates of teacher training, graduates of medical training value keeping current with scientific findings, have the knowledge and skills to read and make sense of the science in the medical literature, and value opportunities to learn about new developments and how to put them into practice. What education needs, then, is not only an FDA for education (which I think the WWC could become, but it should be separate from the IES, just as the FDA is separate from the NIH), but also an infrastructure of teacher training similar to medical training, one that would also need to inculcate valuing research and using evidence-based educational strategies, textbooks, and programs – and the continuing education needed to accomplish all of this. Teacher training programs would need to attract high-caliber trainees, and then provide them with a higher level of training akin to that provided to physicians. Teachers then need to be valued and paid on scales similar to physicians, not at the near-poverty levels common today. Education administrators should be the cream of the crop, with at least PhDs in education as well as some years of practice. Obviously, all of this would take a major infusion of funding for major overhauls of the education systems of training and research. The education enterprise is far larger than medicine at the practitioner level, but currently far, far smaller at the level of funding for research and training. Our education system will never improve substantially until these gaps are rectified.

  • James Clinger says:

    There is a huge divide between the research on education done by people with a background in schools of education and the education research done by social scientists. I don’t think the source of funding is a big issue. The ed school people do not generally read the work by social scientists, and the social scientists only off-and-on read the work by the ed school people. Most of the ed school work does not meet WWC research design standards, nor does it meet general social science methodological expectations. The ed school people often don’t have the statistical skill to understand much of the social science research, and many would be disinclined to accept the results if they did, particularly if the findings differed from the conventional wisdom taught in the ed schools. Obviously, the practitioners in the schools and in education agencies have an ed school background, not a social science background. This divide represents a huge institutional barrier to education reform. The very people who need to receive the benefit of the best education research have neither the training to understand it nor the inclination to accept it. I don’t think substantial reform of our education system will be achieved until that barrier is torn down.

  • Anne Rothstein says:

    James Clinger states: “The very people who need to receive the benefit of the best education research have neither the training to understand it nor the inclination to accept it. I don’t think substantial reform of our education system will be achieved until that barrier is torn down.” I have believed for some time that education needs a group of research into practice translators who deeply understand both research and practice, interact regularly with both investigators and teachers and can communicate effectively with both groups. Such Theory/Research into Practice professionals would serve to begin the dialogue that may prove effective in stimulating both groups to talk to each other with greater respect.
