Studying Teacher Moves


A practitioner’s take on what is blocking the research teachers need



WINTER 2012 / VOL. 12, NO. 1

In July 2011, Bill Gates told the Wall Street Journal, “I believe in innovation and that the way you get innovation is you fund research and you learn the basic facts…. I’m enough of a scientist to want to say, ‘What is it about a great teacher?’”

As a “practitioner” of sorts, I’ve wondered the same thing for 15 years. The K–12 school sector generates little empirical research of any sort. And of this small amount, most is targeted to policymakers and superintendents, and concerns such matters as the effects of class size reduction, charter school attendance, or a merit-pay program for teachers. Why is there virtually no empirical education research meant to be consumed by the nation’s 3 million teachers, answering their questions?

Those 3 million teachers generate about 2 billion hour-long classes per year. We do not know empirically which “teacher moves,” actions that are decided by individual teachers in their classrooms, are most effective at getting students to learn. Why doesn’t this kind of research get done?

Mr. Gates has part of the answer. Money. For 2011, the Microsoft R&D budget is $9.6 billion, out of total revenue in the $60 billion range. The U.S. Department of Education’s Institute of Education Sciences (IES) represents only a fraction of total education research, but its budget gives some perspective: IES spends about $200 million on research compared to more than $600 billion of total K–12 spending. So, 15 percent to upgrade Microsoft, 0.03 percent to upgrade our nation’s schools. And while Microsoft’s research is targeted to the bottom line ($8.6 billion is on cloud computing, the profit center of the future), IES spends almost nothing examining the most important aspect of schools: the decisions and actions that individual teachers control or make.
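The budget comparison above is back-of-the-envelope arithmetic; a quick sketch with the article's round figures (the dollar amounts are approximations, so the shares are rough orders of magnitude, not exact percentages):

```python
# Rough R&D-share comparison using the article's round figures.
microsoft_rd = 9.6e9        # Microsoft R&D budget, 2011
microsoft_revenue = 60e9    # total revenue, "in the $60 billion range"

ies_research = 200e6        # IES research spending
k12_spending = 600e9        # total K-12 spending

microsoft_share = microsoft_rd / microsoft_revenue * 100   # on the order of 15 percent
education_share = ies_research / k12_spending * 100        # roughly 0.03 percent

print(f"Microsoft: about {microsoft_share:.0f}% of revenue on R&D")
print(f"K-12 education: about {education_share:.2f}% of spending on IES research")
```

The striking part is not either number alone but the ratio between them: several hundred times more of each revenue dollar goes to research at Microsoft than in K-12 schooling.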

One IES project is the What Works Clearinghouse (WWC), established in 2002 to provide “a central and trusted source of scientific evidence for what works in education.” The WWC web site lists topic areas like beginning reading, adolescent literacy, high school math, and the like. For each topic, WWC researchers summarize and evaluate the rigor of published studies of products and interventions. One might find on the WWC site evidence on the relative effectiveness of middle-school math curricula or of strategies to encourage girls in science, for example.

But there is almost nothing examining the thousands of moves teachers must decide on and execute every school day. Should I ask for raised hands, or cold-call? Should I give a warning or a detention? Do I require this student to attend my afterschool help session, or make it optional? Should I spend 10 minutes grading each five-paragraph essay, 20 minutes, or just not pay attention to time and work on each until it “feels” done?

And the WWC’s few reviews of research on teacher moves aren’t particularly helpful. A 63-page brief on the best teaching techniques identifies precisely two with “strong evidence”: giving lots of quizzes and asking deep questions. An 87-page guide on reducing misbehavior has five areas of general advice that “research supports,” but no concrete moves for teachers to implement. It reads, “[Teachers should] consider parents, school personnel, and behavioral experts as allies who can provide new insights, strategies, and support.” What does not exist are experiments with results like this: “A randomized trial found that a home visit prior to the beginning of a school year, combined with phone calls to parents within 5 hours of an infraction, results in a 15 percent drop in the same misbehavior on the next day.” If that existed, perhaps teachers would be more amenable to proposals like home visits.

By contrast, a fair number of medical journals get delivered to my house. They’re for my wife, an oncologist. They’re practical. In each issue, she learns something along these lines: “When a patient has this type of breast cancer, I currently do X. This study suggests I should do Y.” There is a bit on medical policy, but most of the information is meant for individual doctors in their day-to-day work.

That’s not to say that we shouldn’t conduct research on education policy. My own work has certainly benefited from it. For example, the quasi-experimental study by economists Tom Kane and Josh Angrist on Boston charter schools, which compared the winners and losers of charter admission lotteries, helped change the Massachusetts law that had blocked the creation of new charters. The change enabled me to help launch a new charter school, MATCH Community Day. My point is simply that relative to education policy research, there is very, very little rigorous research on teacher moves. Why? Gates knows it’s more than a lack of raw cash; it’s also about someone taking responsibility for this work. “Who thinks of it [empirical research on teachers] as their business?” he asked. “The 50 states don’t think of it that way, and schools of education are not about [this type of] research.”

I agree, but I contend there are a number of other barriers. The first is a lack of demand.

The Demand Side

Why aren’t teachers clamoring for published research? One reason is that researchers generally examine the wrong dependent variable. Researchers care about next August (when test scores come in, because they can show achievement gains). Teachers care about that, too, but they care more about solving today’s problems (see sidebar, page 26).

A second issue is that researchers don’t worry about teacher time. Education researchers often put forward strategies that make teachers’ lives harder, not easier. Have you ever tried to “differentiate instruction”? When policy experts give a lecture or speak publicly, do they create five different iterations for their varied audience? Probably not.

The return on investment of teacher time, and the opportunity cost of spending it one way rather than another, are rarely taken into account. In what other, valuable ways could teachers be spending the time taken up with building “differentiation” into a lesson plan? They could phone parents, tutor kids after school, grade papers, or analyze data. Much research implies that teachers should spend more time doing X without indicating where they should spend less time.

Teachers don’t trust research, and understandably so. There’s a lot of shoddy research that supports fads. Experienced teachers remember that “this year’s method” directly contradicts the approach from three years ago. So they’d rather go it alone. Newer teachers pick up on the skepticism about research from the veterans.

Unlike medical research, teacher research rarely examines possible side effects, and whether they are short-term aggravations or can be expected to persist. Imagine that a teacher reads an article arguing that students benefit from being asked “higher-order questions.” She begins doing that. Some students, surprised at this new rigor, are frustrated. Some students throw up their hands and give up. Misbehavior ensues.

Student frustration is probably a fairly predictable short-term side effect of asking higher-order questions. A teacher who isn’t properly warned about it might quickly abandon the technique.

For all these reasons, the 3 million teachers aren’t forming picket lines to demand research.

Do We Know What Works?

Neither policy camp, reformers nor traditionalists, cares much about research into teacher moves. Some traditionalists see teaching as an art, one that cannot be subjected to quantitative analysis (“every teacher is different”). Others aren’t averse to research; they simply don’t see it as a priority. They’d prefer that limited resources be used to fight poverty, not to improve students’ day-to-day classroom experiences.

Meanwhile, some reformers argue “we already know what works,” and we just need to scale it.

As part of the “reformer” community, I find this troubling. From charter opponents like Diane Ravitch to supporters like Education Secretary Arne Duncan, there’s agreement that “some charter schools work.” Furthermore, there’s strong evidence that the charters that succeed tend to be “No Excuses” schools. So do we know what works?

I’m the founder of one of those charter schools; our high-school students have the highest value-added gains of all 340 public high schools in Massachusetts. I’m also the founder of a small teacher residency program that supplies teachers to schools like KIPP (Knowledge Is Power Program). Many of us would agree to a very different proposition: We know teacher moves “that work” to some extent, enough to create very large achievement gains, but we don’t know teacher moves well enough to get our college graduation rate near where we’d like it to be. Nor do we know how to help teachers do these moves more efficiently, so that their jobs are sustainable.

Without a massive uptick in our knowledge of teacher moves, we’ll continue on the current reform path. That path is a limited replication of No Excuses schools that rely on a very unusual labor pool (young, often work 60+ hours per week, often from top universities); the creation of many more charters that, on average, aren’t different in performance from district schools; districts adopting “lite” versions of No Excuses models while pruning small numbers of very low performing teachers; and some amount of shift to online learning. Peering into that future, I don’t see how we’ll generate a breakthrough.

Bridging the Divide

The final barrier to research on teacher moves is the divide between practitioners and researchers. My analogy is a 5th-grade dance. Boys stand on one side. Girls stand on the other. There is very little actual dancing. In this case, teachers are off to one side, and quantitatively oriented researchers are on the other.

After a while, the boys go into the hallway and talk about video games. Similarly, quantitative researchers find the transaction costs of setting up experiments are too high and give up on doing research about teacher actions. They take their problem-solving marbles and find other data sets to crunch.

Girls see that the boys aren’t around anymore. So they dance with each other. Teachers and school leaders, if they like to learn, do so through observation of and conversation regarding perceived “best practices.” There aren’t many practitioners who care about rigorous empirical research.

With all these barriers, is there much hope? There’s not going to be a pot of gold in this funding environment. If research on teacher moves matters, we need to be more creative about harvesting the low-hanging fruit. That would mean identifying practitioners who are unusually interested in randomized research, and connecting them with doctoral students who are unusually interested in teachers and teaching.

What does it look like when practitioners and researchers dance together? Here is one example.

In July 2010, I asked Harvard economist Roland Fryer for some help. My research question was fairly simple: Do teacher phone calls to parents “work”?

In our school, teachers proactively phone parents. Typically, the parents have not been heavily involved in their children’s previous schools. We believe that phone calls to parents help teachers generate improved decorum, effort, and ultimately learning from students. (Sometimes the calls to parents are supplemented with teacher calls to students.) These parent relationships seem to be linked to very high parent-satisfaction ratings, and in turn we have thought those were related to our high test-score growth. Truth be told, however, we just don’t know whether this is a productive use of teachers’ time.

Fryer enlisted two doctoral students, Shaun Dougherty and Matt Kraft, from the Quantitative Policy Analysis in Education program at the Harvard Graduate School of Education. These two did an amazing job, operating skillfully within our school to do the randomized study. From their findings:

“On average, teacher-family communication increased homework completion rates by 6 percentage points and decreased instances in which teachers had to redirect students’ attention to the task at hand by 32%.”

This collaboration worked for several reasons. First, we have a teacher residency embedded in our charter school, so I had 24 student teachers who could be fairly easily randomized during the summer school session. Second, a professor I trusted chose the graduate students who would conduct the research. These guys were, in my view, dispassionate. I’ve tried to work before with grad students who have strong preexisting beliefs about what they’ll find (typically with a “progressive” lens), and it was difficult to gain real knowledge. (Researchers often feel the same way about practitioners: that we’re searching for marketing, not truth.) Also, Fryer paid them a stipend; in my experience, graduate students working for free, and only for credit of some sort, don’t always follow through.

The cost of the two graduate students was not the only expense. In our experiment, at any given time, there were 16 classrooms in action. The researchers needed to hire 16 observers to carefully code student behavior for a few weeks. The total bill was around $10,000. Kraft and Dougherty found a Harvard grant of $1,000. The rest I needed to pay.

Once we’d designed the experiment, I needed to explain it to my team: the principals of our high school and middle school, and the student teachers who were involved. These are people I know well, and they generally trust me. Still, this buy-in phase required expending both time and “relationship capital,” a resource that gets spent down and must be built back up over time. Using student teachers was also of benefit. It would have been tough to randomize our regular teachers. Their belief in the efficacy of parent communication is so strong I suspect many would have doubted the value of changing their normal routines.

There were other costs to the experiment. The head of our teacher-prep program spent many hours handling the experiment’s complex logistics, including a permission slip for parent consent. He could have spent those hours coaching these student teachers, which is the main task I was paying him to do.

All of these issues reflect transaction costs: finding the right people and then doing the right study well takes time, effort, and money.

Researching Teacher Moves

Think of the Human Genome Project. When the project started, scientists didn’t know how many genes there were; now they believe the number is 20,000 to 25,000.

We don’t know how many teacher moves there are. The number is certainly high but not infinite: maybe 200, maybe 2,000; nobody knows. Presumably, there are some unusually high-yield teacher moves across all contexts, some moves that are high yield only in specific situations or contexts, and other less powerful moves. There are undoubtedly many interaction effects among moves. Mapping all of this might be called the Teaching Move Genome Project, and at the beginning it would be a scary undertaking.

Absent this work, what do we have? Perceived best practices, often buttressed by observation or nonrandomized studies. In his best-selling book Teach Like a Champion, Doug Lemov describes 49 teaching moves he has observed in the nation’s top charter schools. At the University of Michigan, Deborah Ball and her colleagues are close to unveiling a list of 88 math teacher moves. Lee Canter’s Assertive Discipline and Jon Saphier’s Skillful Teacher discuss scores of moves, like the “10-2” rule (have kids summarize for 2 minutes in small groups after 10 minutes of teacher-led instruction), much of it supported by nonrandomized research. On the basis of its observations of effective teachers, Teach For America (TFA) promotes 6 teacher behaviors and 28 component parts, like “plan purposefully” or “set big goals”; none are specific moves.

What would a series of randomized trials look like? Let’s apply the idea to Lemov’s 49. Imagine a group of trials asking: Do all of the moves work? Are any particularly successful? How does the degree of teacher buy-in interact with effectiveness? What are the “costs” of these moves?

An example from Lemov is “Right Is Right.” The idea is that when a kid gives an answer that is mostly right, the teacher should hold out until it’s 100 percent correct. Lemov describes various tactics the teacher can use to elicit the 100 percent right answer from the student (or first from another student, before having the original student repeat or extend the correct answer).

The obvious cost of implementing this move is time. These back-and-forths add up to lost minutes each period when other topics are not being discussed. A less skillful teacher might be drawn into a protracted discussion, when her next best alternative (simply announcing the 100 percent correct answer and moving on) might work better. We just don’t know.

Back in 2003, education researchers David Cohen, Stephen Raudenbush, and Deborah Ball argued that “one could make accurate causal inference about instructional effects only by reconceiving and then redesigning instruction as a regime, or system, and comparing it with different systems.” That suggests “a narrower role for survey research than has recently been the case in education, and a larger role for experimental and quasi-experimental research. But if such studies offer a better grip on causality, they are more difficult to design, instrument, and carry out, and more costly.”

Still, we need a better grip on causality. So who would undertake this cost?

A Proposal

Once again borrowing some terminology from medicine, I propose a typology of trials, delineating phases in a continuum.

Phase 1 trials would be small, nongeneralizable empirical studies of teacher moves. These could use randomized, single-subject, or regression-discontinuity designs, but the dependent variable would not be year-end test scores. Instead, we’d look for next-day or next-week outcomes: measurable effects on student behavior, effort, or short-term learning.

Who would decide which moves to test? Some would be proposed by established authors and thinkers in the teaching field. Some would come from the nation’s 3 million schoolteachers, possibly with crowdsourcing to identify the most promising ideas. Some would come from academic researchers, particularly those from other fields, like psychology, who may offer unusual insights. But for the next level, testing competing ideas, I’d suggest we draw heavily on teacher opinion, particularly a group of teachers selected for their stated willingness to try new methods (if they are supported by research).

Phase 2 trials would test promising teacher practices from Phase 1 on a larger, more varied teacher pool, probably across different types of schools, to see if the next-day outcomes held up. Again, the dependent variable is short-term student response.

Phase 3 trials would be randomized trials in which teachers combine multiple moves that emerge from Phase 2. In the end, our bottom line is student learning, and Phase 3 trials would measure whether combinations of moves bolster year-end student learning gains.
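The mechanics of a Phase 1 trial are simple enough to sketch. The snippet below is a toy illustration only, not the design of any actual study: the 24 teachers echo our summer-school pool, but the outcome measure and numbers are invented purely to show what randomizing teachers and comparing a short-term outcome looks like:

```python
import random
import statistics

# Toy Phase 1 trial: randomly assign teachers to try a move,
# then compare a short-term outcome (here, a made-up next-day
# homework-completion rate per teacher's class, 0.0 to 1.0).

random.seed(42)  # fixed seed so the sketch is reproducible

teachers = [f"teacher_{i}" for i in range(24)]
random.shuffle(teachers)
treatment = set(teachers[:12])   # these teachers try the move (e.g., daily calls home)
control = set(teachers[12:])     # these continue business as usual

def simulated_outcome(teacher):
    # Invented data; a real trial would use observed classroom measures.
    base = random.uniform(0.5, 0.8)
    bump = 0.06 if teacher in treatment else 0.0
    return min(1.0, base + bump)

outcomes = {t: simulated_outcome(t) for t in teachers}

# Difference in mean outcomes between the two arms.
effect = (statistics.mean(outcomes[t] for t in treatment)
          - statistics.mean(outcomes[t] for t in control))
print(f"Estimated effect on homework completion: {effect:+.3f}")
```

A real study would add observer coding, consent, and significance testing; the point here is only that the unit of randomization is the teacher and the dependent variable is a next-day outcome, not a year-end score.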

Medical researchers have found that treating some illnesses requires a drug “cocktail,” that is, no one medicine by itself works as well as the combination of several. The same approach might work in education: it could be that individual teacher moves by themselves cannot create measurable year-end achievement gains in students, but combining many together can.

My proposal is that each of the nation’s 1,200-plus schools of education and teacher-prep programs conduct one randomized trial on a teacher move each year: Phase 1, Phase 2, or Phase 3. They’d do that by recruiting alumni into a network of experienced teachers willing to participate. The advantage is that once you pay the one-time transaction cost of finding these teachers, the ongoing costs of persuading them to participate and of securing permission from families and principals decline.

Once that network existed, it would function like a laboratory. Various Phase 1 experiments could be run through it, with small numbers of teachers at first, so that many experiments could be run concurrently. Larger numbers of teachers would be included in more promising Phase 2 validation experiments. Of course, there would be selection bias in which teachers are willing to participate in this sort of work, and other imperfections. But in the end, experiments could build on proven results from previous ones. Multiple ed schools would combine their networks for Phase 3 trials.

By itself, no single experiment would be that important. Instead, it would be like cancer research: thousands of people each trying to answer small questions in a very rigorous way…which would add up to promising treatments.

The goal is an affordable system for conducting teacher research that teachers would actually consume, that would address both the implementation challenges and the high transaction costs for researchers and practitioners in creating such research. Until that exists, I’ll see you at the 5th-grade dance.

Michael Goldstein is the founder of MATCH Charter School and MATCH Teacher Residency, in Boston.

Comment on this article
  • Paul Cat says:

    Good article. I kept telling my college professors that the process of education has not changed much in 3,000-plus years. There is an instructor who instructs and a student who learns. The methodology is really the only thing that has changed.

  • Jim Kohlmoos, Knowledge Alliance says:

    Terrific article here! The meager investment in R&D in education is a huge reason why research is not finding its way to practitioners. But establishing dynamic two-way partnerships between practitioners and researchers is also essential. The new regional educational laboratories that will be unveiled in a couple of months will focus big time on research alliances as suggested in the article. Thanks for writing this article.

  • Ben Riley says:

    Great article, Mike, and not much I disagree with. But to play devil’s advocate for a moment, I do wonder if one of your main contentions — that we need to study “teacher moves” — is consistent with other professions, at least outside of medicine. I practiced law for seven years and I’m hard pressed to think of a single time I ever consulted a legal periodical to learn a new “lawyer move.” And before you scream “that’s because lawyers are paid by the hour and therefore inefficiency is rewarded!”, I worked for the government for half the time, so that can’t be the entirety of the reason. And I personally think teaching more closely resembles lawyering than it does doctoring — although all three involve the art of instruction, only the practice of medicine is undergirded by hard science.

  • Connie Elmore says:

    Education has become too complicated. Simplify. 1. Define success. I would think it would have to be connected with serving others and happiness. Of course serving others invariably leads to happiness. 2. Consult retired teachers. There is nothing second to experience. 3. View the student and parents as customers. 4. Inspire. I would say that should be a number one objective in education. 5. Anticipation. Anticipation is now lost for the children regarding reading. The myth that the younger the better to educate is false. Children are no longer given the advantage of waiting for the great day to read and have a math workbook…..Make them wait and make it sound so grand that when they are all ready and all can succeed, about 8 or 9, let them Read!!!!!!! The waiting will motivate them. Of course we read to them daily and make them love great literature. 6. Decide what is important to American education, such as worthy home membership, vocation, and citizenship. 7. Teach the truth of the founding of our nation…google David Barton. 8. Cultivation of personal and social interests. 9. Promote good health. 10. Command of fundamental processes (the 3 Rs). 11. Promote ethical character. INDIVIDUAL DIFFERENCES in students and the varied needs of society must be taken into consideration and honored. I enjoy eating out much more than going to a doctor or lawyer. What if a student chooses food service? Yeah….google successful people that did not go to college. Just because we educators went to college does not mean that not going is a sin. SIMPLIFY….As far as evaluations go…evaluate backwards…parent and student evaluate all. Teachers evaluate the principal; after all, the principal is serving teachers, students, and parents. Principals, teachers, parents, and students evaluate the supt.; the supt. serves each of these. We have things backwards. As far as teacher moves go……love, inspire, pray, care, individualize, guide gently, value students and parents, listen.
    It is a matter of effectiveness when your students are embraced by these things. Seriously, love those we serve. Simplify. Teacher moves: find the gift each child possesses. Testing…..Albert Einstein: we are all a genius, but we must not judge a fish by how well he climbs a tree, or he will think he is stupid his entire life….We teach the fish also. Honor them. Testing for college-ready only kills the fish. Value their talents also. Love to all.

  • Mike Goldstein says:

    Thanks Paul.

    Jim — look fwd to the unveiling.

    Ben — for the gov’t lawyers, inefficiency is required, not rewarded. Ba dum dum. There actually is some interesting empirical research on jury selection. But I’ll trust your judgment on that profession.

    Perhaps one way to call the question is this. Let’s set aside analogies to lawyering. If you were teaching 7th grade math, and there existed research on teacher moves that you felt was empirically strong, would you try those moves?

    And even if not: would you object to colleagues who did covet and use that research (while you continued to simply trust your gut on all decisions)?

  • Cal says:

    In terms of your research, was all the research done at a charter school? Because a call from a charter school about student behavior would presumably be more effective than a call from a public school. Charter schools can give kids the boot (or “counsel out”) more easily.

    Furthermore, homework completion in and of itself isn’t anything unless it improves demonstrated performance (which is a big deal in math). Did it improve test scores?

    Because that, too, is one of the big problems in researching “what works”. There’s lots of supposed proxies for performance, particularly homework and engagement, that don’t necessarily improve performance.

  • Britt Gow says:

    Thanks for this thought provoking article – most of which I agree with. However, I think an important aspect of teaching has been omitted. Knowing your students well and how they learn best is a critical way to improve learning outcomes, but is difficult to research. All students are individuals, as are teachers, and the complex interactions and dynamics in every classroom cannot be quantified easily. Teaching is an art, as well as a science.

  • Mike Goldstein says:

    Hi Cal,

    Thanks for the good questions/observations.

    1. To your first Q, yes. As I mention above, all studies in a single site are inherently limited. (Not just charter v. public. Any School A is diff from School B).

    Hence need for a Phase 1 trial (single site) to be followed by Phase 2 trial (across various types of schools).

    2. Totally agree on dependent variable. Hence need to take promising practices from Phase 1/2 to see if they affect student achievement in Phase 3.

    Which, btw, is what my wife does with cancer research. You try some stuff on cells in a dish, or with mice. Those are proxies for what might work with humans.

    It’s valuable to learn about the proxies, but as you say, that’s not the end goal. Just part of the process.

    3. Just FYI. Phone calls to parents are common in many traditional schools, too. See this Edweek column, for example.

    Also perceived as highly effective. Are they actually effective? I say: Let’s test empirically.

  • Matt Kraft says:

    Great article.

    For those who are interested – the preliminary results of the phone call experiment at MATCH have been slightly updated. I am including the abstract below and a link to the full paper.

    The Effect of Teacher-Family Communication on Student Engagement: Evidence from a Randomized Field Experiment

    In this study, we seek to evaluate the efficacy of teacher communication with parents and students as a means of increasing student engagement. We estimate the causal effect of teacher communication by conducting a randomized field experiment in which 6th and 9th grade students were assigned to receive a daily phone call home and a text/written message during a mandatory summer school program. We find that frequent teacher-family communication immediately increased student engagement as measured by homework completion rates, on-task behavior and class participation. On average, teacher-family communication increased the odds a student completed their homework by 42% and decreased instances in which teachers had to redirect students’ attention to the task at hand by 25%. Class participation rates among 6th grade students increased by 49%, while communication appeared to have a small negative effect on 9th grade students’ willingness to participate. Drawing upon surveys and interviews with participating teachers and students, we identify three primary mechanisms through which communication likely affected engagement: stronger teacher-student relationships, expanded parental involvement, and increased student motivation.

    The entire paper can be downloaded from this site:

  • Frank Scala says:

    I particularly liked the way “teacher moves” were spoken about in this article. It brought me back to the days when I studied the work of Jon Saphier in The Skillful Teacher. For anyone not familiar with this work, you might be surprised by the amount of work completed on this subject that has received minimal attention in education circles.
    Jon and his colleagues primarily work in Massachusetts, and many disciples can be found in the schools. However, again, why this body of knowledge about teaching has not been used and distributed is baffling.

    It may be worth someone’s time to resurrect this work and introduce it to a broader national audience.

  • Cal says:


    I’m a teacher. (You can google an article written by Jay Mathews about a graduate from a top-ranked ed school circa July 2009; that’s me.) I have credentials in math, English, and history, and teach algebra II (no trig) and geometry at a Title I school in the Bay Area. I count homework as a minimal part of the grade and give students the higher of their test average or their combined test/classwork/homework average. Most of my students don’t do homework. I didn’t even assign homework for the last semester, when I taught algebra. I’m seeking a balance between getting kids to think about math for 30 minutes a night or after school, to build more of a memory, without making this behavior a factor in their grade.

    In my experience, phone calls home have zero effect. In public school, they are necessary CYA and little more. I prefer email, and often have good conversations with parents on email about their students. But it’s not a behavior or academic game changer–just a good way to exchange info.

    On the data–I wasn’t criticizing your data per se, as opposed to the entire mindset behind most educational research. We are extremely reluctant (for obvious reasons) to correlate incoming ability with outcomes. But we’re very eager to correlate behavior that we just rilly rilly want to have some sort of impact (calling home, involving parents, and so on).

  • Jenny DeMonte says:

    This is a thoughtful and smart look at the kind of research that may improve the practice of teaching.

  • Tyler S. Thigpen says:

    Yes, yes, yes!

    Michael, thanks for your work in this arena. The coordination you recommend is not just needed but possible! What about staffing at the school level to administer such trials? As school information systems become increasingly sophisticated and cloud-based, I could see the traditional “Registrar” position transforming into a kind of “Data Administrator.”

    Thanks, again. Onward!

  • Ben A. Birdsell says:

    You touch on an issue that must be addressed before true improvement can be achieved within our education system: providing teachers with real-time access to data that affects learning. The technology tools to put meaningful data in the hands of teachers and students, in real time, are becoming more plentiful every day. The technologies behind this potential game changer (e.g., cloud, mobile, and HTML5) are only beginning to make their way into general use, let alone education. I can only hope and trust that the research needed to answer the questions about what constitutes “best practice” will immediately follow access to the data. Then and only then will we have the capacity to sustain improvements made within our education system. This leads me to reflect on the wisdom of Ron Edmonds, a researcher best known for identifying the characteristics of effective schools: “We can whenever and wherever we choose successfully teach all children whose schooling is of interest to us. We already know more than we need in order to do this. Whether we do must finally depend on how we feel about the fact that we haven’t so far.” ~ Ron Edmonds, 1983.
    Given technology tools, real-time access to data, ongoing research, and the longstanding traditions embedded in current education policy and culture, we have a very long way to go! As a concluding thought, I share two questions that continue to shape my career: How do you feel about the fact that we haven’t succeeded at educating all children? And what are you willing to do about it? Your answer will determine our future.

  • Deborah Chang says:

    As a teacher, I would love to be part of and build this network. Best-practice research usually feels like it’s done to teachers instead of by teachers. Also, professional development offerings rarely dive into which teacher moves are most effective in which contexts. This teacher research network could shift both paradigms: teachers would become active participants in improving the knowledge base of their profession, and there would be very granular data about which teacher moves are effective in which contexts (by student population, past behavior of students, student age, learning goal, etc.).

  • Kt says:

    I am absolutely with you on the basic idea here, but I wonder about one thing. The comment update on the results of the study described the experimental group getting daily phone calls home as well as a daily text/written message. I’m wondering about the usefulness of this data, then. I too perceive phone calls home as helpful, and would love to know if that’s true, but I’m thinking about phone calls home at a MUCH lower frequency: I have 70 students and can’t imagine calling for even half of them every single day, no matter how effective that might be. Again, this seems to fall into the trap of ignoring teacher time. In the few cases where, for whatever reason, I have a very close and strong relationship with a student’s family and we talk on anything approaching a daily basis, yes, I see an impact on the student. But this is not the same thing as “Do teacher phone calls have an impact?” — again, unless the point of the study is to look at drastic interventions for extremely high-need students.

  • Mike Goldstein says:

    KT –

    Thanks for the note. Two thoughts. One is about calls. The other is about experiments.

    In our charter school, teachers have about 80 students. So they typically reach a few per day by phone. It adds up.

    If we had the experimental world I described, we’d never expect “too much” info from a single study. With 3 million teachers, it’s not too much to expect researchers to do a few studies per year on each key topic, which would aggregate to 30 to 50 per topic in a decade.

    Examples from what you wrote: What is the effect of heavy phone calls in September (to build the relationship), followed by “maintenance” calls and texts? Can calls increase essay completion, not just improve behavior? What is the return on reaching 10 parents versus, say, spending the same amount of time running a help session after school? Can a teacher’s high-dosage calling “flip” her most disruptive student (i.e., a text each morning and a call each night, for a month)?

  • […] ineloquent as I. There are a few tools that I have found reliable (maybe even worthy to be called teacher moves?) in helping students express themselves, even if it is only in entirely confusing, muddled […]

  • […] Goldstein recently asked the simple question: “Why aren’t teachers clamoring for published […]

  • Dr. James Daniel Turner says:

    May I interject that there are studies on teacher moves. They are done not by universities but by private industry. Groups like Time to Teach, Quantum Teaching, Brain Based Teaching, etc., are not government funded and do not publish papers to be reviewed by colleges and universities. Instead they spend time in classrooms at all levels, work with classroom teachers, quantify data, and present methods that work to teachers who take the time to invest in their training. I have invested a significant sum in my post-college training to increase my classroom moves and create a better learning environment. School systems are not interested in spending money on teacher training that is not funded by local, state, or federal authorities, and those bodies will not look at teaching methodology that does not flow from universities; all the more the loss for American education.

  • John M. Clement, PhD says:

    There is research on teacher behavior, particularly in science and, specifically, in physics. There is now a whole branch of physics called PER (physics education research), but its findings are generally ignored in schools. There are also training programs that change teacher behaviors. One of them is Modeling Instruction at ASU, and it has a huge effect on student learning. Why is Gates ignoring methods that actually work when there is good, rigorous research that can be applied? Another program with large benefits is Thinking Science by Shayer, Adey, and Yates, which also changes teacher behavior and at the same time improves students’ ability to think. Their work was published in the early ’90s but is unknown in the U.S.

    This work changes basic teacher behaviors without burdening teachers with more requirements. The teachers think differently about how to teach, and get much better results. Why nibble at small changes like a 3% improvement from homework when you can get a 30 to 70% gain instead of the usual 10% gain in conventional classes? These figures have been observed in numerous PER studies.

  • Brenda Royce says:

    I did a study last spring, as part of a graduate education course, on the factors behind the success of Finland’s schools. While there are many cultural factors that cannot be exported to this country (we’re not Finnish!), one factor I found intriguing was the move to train teachers to be educational researchers. In our country we have the “5th grade dance” divide between researchers and practitioners. In Finland the educational community does significant research, with the findings informing practice (they also do not use high-stakes testing, but that is another discussion). Japan similarly empowers its teachers as researchers with Lesson Study. It seems that in these countries the ed-research dance works much more like it should.
    One thing in the article I find vague is the term “teacher moves.” What constitutes a teacher move? Can effective teaching be reduced to a set of “moves”? The most important research that has impacted my teaching came out of physics education research (Modeling Instruction), which identified a coherent set of practices (like the medical-cocktail analogy) that makes a measurable difference in student understanding. However, it is different enough from traditional practice that teachers need training and sustained practice to make the shift, rather than just reading about it. Cherry-picking “moves” from the overall practice dilutes its effectiveness. For the research effort to make a difference, the effective practices (a word I prefer over “moves,” since effective teaching is usually a set of synergistic “moves”) need to find their way into teacher preparation and ongoing professional development.

  • Jane Jackson says:

    Focusing on “teacher moves” misses the central issue in teacher effectiveness. Teaching is not a set of moves; rather, teaching METHOD is the most important factor in student learning!

    The sit-and-get method (i.e., teaching is telling and learning is listening) is ineffective. Two examples illustrate this.

    1. The physics education research of David Hestenes at Arizona State University. Modeling Instruction in K–12 science was developed from his research, and student achievement under it is typically double that of conventional instruction. Instead of relying on lectures and textbooks, Modeling Instruction emphasizes active student construction of conceptual and mathematical models in an interactive learning community. Students are engaged with simple scenarios to learn to model the physical world. Models reveal the structure of the science and are sequenced in a coherent story line; they form a foundation for problem-solving and they guide projects.

    2. The TIMSS 1999 Video Study of Stigler and Hiebert, which supports the principles of Modeling Instruction. This was a study of eighth-grade mathematics and science teaching in seven countries, involving videotaping and analyzing teaching practices in more than 1,000 classrooms. They found that high-achieving nations engage students in searching for patterns and relationships and in wrestling with key science and math concepts. Unfortunately, in the U.S. (which scored low on TIMSS), content sometimes plays no role at all; instead, science lessons engage students in a variety of activities, and math focuses on low-level skills (procedures rather than conceptual understanding) in unnecessarily fragmented lessons. In particular, see:

    Hestenes, D. (2000). Findings of the Modeling Workshop Project (1994–2000). From the Final Report submitted to the National Science Foundation, Arlington, VA. Note: the effect size was later calculated as 0.91; high!

    Wells, M., Hestenes, D., and Swackhamer, G. (1995). A Modeling Method for High School Physics Instruction. Am. J. Phys. 63, 606–619.

    Expert Panel Review (2001). Modeling Instruction in High School Physics. Office of Educational Research and Improvement, U.S. Department of Education, Washington, DC.


  • Art Zadrozny says:

    Perhaps the author should check out this resource:

  • Tim McKnight says:

    Frank –
    You know that I admire your work and I visit your blog quite often for advice and ideas. As someone “new” to the modeling resources and the formalized archive of materials, I agree with the vast bulk of your post. I do, however, feel that, as I attempt to use discourse methods in my own classroom, I have a strong set of “modeling” tools and ideas … but really need “the moves” that are referred to in several of the paragraphs.

    I think a lot of the “moves” refer to classroom management skills that would help me better facilitate my modeling classroom. Things like this, from paragraph 7:

    …” Should I give a warning or a detention? Do I require this student to attend my afterschool help session, or make it optional? Should I spend 10 minutes grading each five-paragraph essay, 20 minutes, or just not pay attention to time and work on each until it “feels” done? …

    From a subsequent paragraph that seems to be exactly what modelers try for (higher order thinking) …

    …” … students benefit from being asked “higher-order questions.” She begins doing that. Some students, surprised at this new rigor, are frustrated. Some students throw up their hands and give up. Misbehavior ensues. … ”

    I find that since I’ve not fully developed many of “the moves,” I experience a large amount of difficulty setting and maintaining the environment that is needed for discourse (among 14-year-olds) to flourish.

    So … I think that you are absolutely correct in stating that the METHOD is what is important … but I feel that “the moves” are an integral part of the entire package.

    Tim McKnight
    (new to modeling instruction but not new to teaching. Although now that I’ve found it I realize it is what I’ve been trying to do on my own for a long time)

  • Not Surprised says:

    “The Lack of Demand for Research Is No Surprise”

    An excellent, thought-provoking, and very insightful article. It is hard to refute the value of research (your Tom Kane example, for instance), though I’ve found myself frustrated when educators use it as an excuse to delay making change, i.e., “we’ve got to wait for the research on this to come in.” And while I subscribe to the maxim “measure twice and cut once,” I also believe there has to be a balance: you have to act on faith for any chance to “win” this most important “race.” Dreams, effective interventions, and viable innovations deferred can ruin or possibly extinguish young lives.

    Despite the obvious benefit of having and using more reliable research, I would argue you may have missed a pink-elephant-sized explanation for its relative dearth: it doesn’t matter, practically speaking, whether I (as a teacher) actually win this race. If winning really mattered, good research that identified the most effective practices would be worth its weight in gold and in the very highest demand. To reiterate, from a strictly practical perspective, it doesn’t matter when it comes to things like whether I get recognized, rewarded, reprimanded, promoted, or awarded my 3% cost-of-living raise and 3% step increase.

    To ignore something so fundamental, and so universally applied to shape the culture and behavior of almost every other industry this size ($600B), is to suggest that all its participants are sufficiently motivated by a reliable moral compass rather than by what’s practical, or some combination of the two. It would no doubt be a far better world if that were indeed the case. This is not to disparage our teachers in any way; rather, I’d say it’s more accurately a case of a whole lot of “good people in a bad system,” one designed to give us exactly what we’re getting, given the way the world has changed.

    So while research can and should make a significant difference, I’ve come to the slightly demoralizing conclusion that it pales in comparison to the need for a committed, collective “will” to educate all our children to their potential, especially those in greatest need. I believe this will (combined with some excellent research) is what’s required to change the system and get the kind of results we really need.

  • Mike Monteleone says:

    As a teacher I constantly strive, first, to improve students’ behavior; second, to engage them; and third, to hold them accountable. Because classroom management is most important, I use
    to understand and to respond to student behavior. I look back to my teacher education and remember complaining that what we needed was time on the job first, and then classes to determine better strategies. Most of the instruction we got could have been branded “useful” but not effective. I applaud Mr. Goldstein’s article and hope everyone is listening.

  • Ted Purinton says:

    LOVE this article!! A comment on the legal profession (in response to the idea that law does not teach “lawyer moves”):

    This is, like education, an issue of serious concern to the legal profession… Law professors are often just as oblivious to the practical side of the profession as education professors, and this poses a problem for the practice of law, just as it poses a problem for the practice of teaching (full disclosure: I am an education professor). Here’s a good recent New York Times article on the topic:

  • Kevin Guichard says:

    Great article! I believe you have hit an essential issue for improving education.

    Here are a few things I noticed during my nine years of experience in the teaching profession.

    Teacher credentialing programs talk about the importance of educational research but do not train teachers to correctly evaluate or read it. How can a teacher discern well-executed, meaningful research from poorly executed, inconclusive research without proper training in research practices? I would argue you would not need a PhD, only training, to become a proficient research reader.

    Finding good research articles can be expensive. Say you are a teacher with a question about a teaching methodology. To do a useful search, you would have to filter through abstracts to find research on your topic. Once you have your list of articles, you would most likely have to purchase them, since most research articles are published works owned by journal companies. So either the district would have to fund the literature search or the teacher would pay out of pocket. A good literature review would require multiple articles, and not all of them would necessarily be good research. And this does not even account for the cost of the teacher’s time.

    I have found that many salesmen pitching the next educational program misuse educational research, and the educational professionals listening to the sales pitch do not have enough research expertise to make clear, informed decisions about the “new educational program” they are being sold.

    Many researchers do not do quantitative educational research because it is more difficult and time-consuming. Many educational researchers want to publish quantity, not quality: it is easier to publish a qualitative analysis of a small group than to write a paper on an experiment that accounts for causation.

    Based on my observations, I would recommend that teacher credentialing programs include some sort of research-reading training at the very least. If we are to include the phases you mention in the article, then perhaps teachers should become more proficient researchers.

    I would also suggest that we need to find a way for teachers to have free access to research articles.
