Eighth-Grade Students Learn More Through Direct Instruction


Summer 2011 / Vol. 11, No. 3

Should teachers stand in front of the class and present the material to be learned? Or should learning be more dynamic, with students solving problems, either on their own or under the teacher’s guidance? Which approach yields the most student learning?

Opinion on this question is deeply divided. “The sage on the stage” versus “the guide on the side” is how the debate is often framed. Proponents of the former ruled the education roost throughout the 19th century, but in the 20th century a child-centered doctrine, developed by John Dewey in the gardens surrounding the University of Chicago’s Laboratory School, then refined at Columbia University’s Teachers College, gained the high ground, as “inquiry-based” and “problem-solving” became the pedagogies of choice, certainly as propounded by education-school professors. In recent years, the earlier view has staged something of a comeback, as KIPP and other “No Excuses” charter schools have insisted on devoting hours of class time to direct instruction, even to drill and memorization.

As an instructor myself, I’ve had trouble making up my mind. I can cover a lot of ground in classes where lectures consume about two-thirds of the time. But those classes get less enthusiastic student evaluations than some smaller classes where students are encouraged to solve problems through discussion. I, too, like those problem-solving classes. They require less preparation and are easier to teach.

So I can easily understand why progressive pedagogy has proven popular. It’s more enjoyable for all concerned, even if sometimes you worry that you are not teaching very much.

The question of which approach works best for student learning has seldom been a topic for careful empirical inquiry. So when Guido Schwerdt and Amelie Wuppermann of the University of Munich figured out a way to test empirically the relative value of the two teaching styles (see “Sage on the Stage,” research), the findings were worth trumpeting. These analysts took advantage of the fact that the 2003 Trends in International Mathematics and Science Study (TIMSS) not only tested a nationally representative sample of U.S. 8th graders in math and science, but also asked their teachers what percentage of class time was taken up by students “listening to lecture-style presentations” rather than either “working on problems with the teacher’s guidance” or “working on problems without guidance.” Teachers reported that they spent twice as much time on problem-solving activities as on direct instruction. In other words, U.S. middle-school teachers have drunk deep from the progressive pedagogical well.

To see whether this tilt toward the problem-solving approach helps middle schoolers learn, Schwerdt and Wuppermann identified those 8th graders who had the same classmates in both math and science, but different teachers. Then they estimated the impact on student learning of class time allocated to direct instruction versus problem solving. Under which circumstance did U.S. middle-school students learn more?

Direct instruction won. Students learned 3.6 percent of a standard deviation more if the teacher spent 10 percent more time on direct instruction. That’s one to two months of extra learning during the course of the year.
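The conversion from an effect size to “months of learning” can be sketched with simple arithmetic. This is a back-of-the-envelope illustration, not a calculation from the study itself: the annual-gain range below is an assumption (middle-school math gains of roughly 0.25 to 0.35 standard deviations per school year are a common rule of thumb in the literature).

```python
# Convert an effect size into "months of learning", given an assumed
# typical annual gain for 8th-grade math (assumption, not from the study).

effect_sd = 0.036          # 3.6% of a standard deviation, per the article
school_year_months = 9     # a standard U.S. school year

for annual_gain_sd in (0.25, 0.35):
    # If students normally gain `annual_gain_sd` SD over 9 months,
    # a 0.036 SD boost corresponds to this many months of learning:
    months = effect_sd / annual_gain_sd * school_year_months
    print(f"annual gain {annual_gain_sd} SD -> {months:.1f} months")
```

Under these assumptions the boost works out to roughly 0.9 to 1.3 months; a smaller assumed annual gain pushes the figure toward the two-month end of the article’s range.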

The students who benefited most from direct instruction were those who were already higher-performing at the beginning of the year. But even initial low performers learned more when direct instruction consumed more class time. Sadly, U.S. middle-school pedagogy is weighted heavily toward problem-solving.

— Paul E. Peterson

Comment on this article
  • Ihor Charischak says:

    What this study has shown is that teaching to the tests produces better test scores. No one should question that. But do students better understand the material as a result? That’s the endless debate that’s been going on for years. Would I send my kid to a high-powered “no excuses” school? It depends. If I wanted to brag to my friends about what a great math student I have, I might be tempted. But I would prefer a more liberal education, where the Royal Road to Calculus is not just the one-way (or the highway) approach the KIPP schools are prone to brag about, but also includes detours into some fascinating investigations where students apply their math skills. In fact, they may be so attracted by the detour that they never get back on the Royal Road, and instead become doctors or lawyers, where knowing the intricacies of Calculus may not be required for entry.

  • K says:

    There’s a big difference between lecturing in college and lecturing in middle school. I have to assume that this study was not looking at teachers who literally JUST talked to the 8th graders for a full 50 minute period.

    I think part of the problem is setting up lecture as the opposite of problem-solving, as if they are in competition. I have been in a lot of math and science classes, as a student, as a teacher, and as an observer. I have seen well-planned lectures and well-planned lab experiments and all sorts of other things. If you lecture for 50 minutes a day at a room full of 13 year olds, literally just lecture with no other instructional strategy used, I don’t think you’ll see positive results. And I doubt that’s REALLY what this study measured.

    Most actual teachers I know, especially in middle school, use a variety of methods. Maybe you “lecture” (demo a problem on the board, talk through steps, have students take notes, whatever) for 5-10 minutes, then have students try a few similar problems, then come back and go over the samples to correct misconceptions, then have students try a bunch more. Maybe another day the skill lends itself well to an activity that sets students up to wonder the very thing that you’re then going to explain via a short “lecture-style presentation”. I just have seen very few classes at a middle-school level where even an entire lesson consists of JUST one or the other, nothing else.

    Lecture and problem-solving are not the ONLY two methods for teaching, nor are they competitors where one can “win” over the other.

  • Barry Garelick says:

    Let me see if I understand Ihor’s comment. If you teach students using direct instruction, you’re automatically teaching to the test. What test, he does not specify. The middle-school students in the U.S. who worked on problem-solving activities did more poorly than students in countries in which direct instruction was used. So Ihor concludes that the other countries were learning how to solve questions on the TIMSS exam. The fact that the questions on the TIMSS exam covered the content that should be mastered in math is of no concern to him. Also, the fact that U.S. students didn’t do as well is not evidence that they do not know the material; in fact, they may have a deep understanding even if they can’t solve many problems. He seems to imply that learning math via direct instruction somehow excludes students from obtaining a liberal education and keeps them from exploring fields like law or medicine.

    Something like that anyway.

  • Catherine says:

    Assuming I’m reading correctly, the study uses TIMSS as its measure of achievement.

    No one teaches to TIMSS.

    The teachers in this study are teaching math and science.

    Then, when the students take a math / science test the teacher has never seen, they fare better than students who spent class time solving problems.

  • Catherine says:

    I believe that pre-med students take calculus.

  • Ze'ev Wurman says:

    One of the weaknesses of this study seems to be the lumping of working on problems both with and without teacher guidance into a single “problem solving” category. A reasonable question would be how lecturing stacks up against only one or only the other. Other than that, very nice!

  • Katharine Beals says:

    Mr Charischak’s comment baffles me.

    What’s wrong with teaching to the test if the test is a good one that measures understanding? If the test is good enough, then isn’t teaching to it equivalent to teaching what you should be teaching anyway?

    And why can’t mathematics be part of a liberal arts education?

  • Laurie Rogers says:

    Mr Charischak’s comment baffles me, too. He would be “tempted” to send his kid to a “high-powered no-excuse school” if he “wanted to brag” to his friends about what a great student he has in math?

    I find that to be fairly consistent about people who advocate for reform math and excessive constructivism. Their conversations about math seem to center on what THEY want, on THEIR view of the world, and not on how well their preferred approach works for the students.

    Mr. Charischak said he wants students to be able to “detour” into “investigations” where they apply their math skills. The children would have to learn those math skills in order to apply them, and that’s where reform and excessive constructivism fail completely.

    You don’t have to take my word for it. Just look around you, at how American public schools have generally failed to teach sufficient math to an entire generation of children.

    Laurie H. Rogers
    Author of “Betrayed: How the Education Establishment Has Betrayed America and What You Can Do about it”
    and “Betrayed” – a blog on education

  • John Clement says:

    3.6% of an SD is an effect size of 0.036, which is minuscule. So how do they get a change in learning of several months? With an effect size this small, the effect would be days’ worth, not months.

    The effect size from PER, physics education research, is enormous compared to this. Indeed it is usually greater than 1.0 and may approach 3.0. Interactive engagement works much better, but only a small fraction of the teachers in the study use it.

    So they have labored mightily and produced a mouse. Actually, problem solving is often not really dynamic, so they are looking in the wrong part of the educational space.

  • Todd Pedlar says:

    With all due respect to the authors of the original research, and to the author of this article…a 3.6% of a standard deviation difference is so far below the level of statistical significance as to be a completely meaningless difference. As a result, absolutely NOTHING has been proven through the research done except that the authors don’t seem to understand the principles of statistical inference.

  • Sandrine says:

    In math, one must secure the building blocks of prerequisite skills in order to advance through the skills… “Exploratory” math without consistent interaction by the teacher will leave a lot of students adrift and missing out on mastery of preceding skills, which are needed to grasp subsequent learning. Thus, they get left further and further behind.

    To make it seem like direct instruction occurs as one long uninterrupted talk (teacher) and look/listen (students) fails to pinpoint the correct usage of direct instruction and why, as such, it is extremely necessary.

    Students, no matter how cooperatively and diligently they work without teacher guidance, can never fill in the hole that teachers can, as skilled, committed knowledge-givers and managers of the learning environment. Teachers, even in managing group or peer inquiry, need to constantly identify and pull out for correction those who are on the wrong track… and then, voila, we have direct instruction.

    Since there is limited time (and attention spans), teachers need to use time most effectively, i.e., net the most understanding in the shortest time possible. That necessitates first doing whole-class instruction, or at least instruction to large groups within the class if their abilities have a wide enough gap to necessitate this, and then culling through the outcomes and giving assistance where it’s needed.

    Peer to peer can work here, groups can work here. The same can work in some totally exploratory next step inquiry, and, of course, in activities designed to build reinforcement and generalizing of the skills already attacked. However, the teacher is always on call to intervene, and that intervention, again, is direct instruction…

    Direct instruction is essential to the classroom. Mix, do NOT nix!

  • Michael Paul Goldenberg says:

    Fascinating how true believers in direct instruction who are ostensibly “math” people seem to have missed the obviously trivial effect size and the meaninglessness of a 3.6% of a standard deviation difference. Well, not really fascinating: merely predictable.

    The study in question sounds like so many I’ve read that set out to prove that student-centered learning doesn’t work: you pick tests in advance that are as far as possible from measuring what student-centered learning tries to promote, then claim that it’s an obvious failure when . . . it doesn’t outperform instructional methods that are specifically well suited to the sorts of things your tests are designed to reward.

    I think in gambling this is called a rigged game. Why any intelligent person would give credence to heavy-handed “science” like this can only be attributed to wishful thinking.

    That point aside, I’m astounded by the author’s claim that it takes less preparation to teach problem-solving based classes than it does to lecture. Well, perhaps I shouldn’t be. No doubt from his perspective, that’s true. But that it is speaks volumes about his lack of understanding of what goes into effective student-centered instruction. He’s hardly alone in that regard. Paradigm shifts are rarely easy, and those who think that the heavy lifting in education is “getting one’s lecture down pat” aren’t readily going to understand how much more difficult it is to do the sort of teaching that one sees from the likes of Deborah Ball, Magdalene Lampert, Robert & Ellen Kaplan, or other masters of such instruction. If it were easy, anyone could do it. That it is not, that it’s vastly more challenging, is why so many teachers who think they’d like to do it either fail miserably or quickly retreat to the “tried and true” mode of teacher-centered pedagogy. Then, when students don’t do well, the teacher can sigh, “I gave a brilliant lecture, but those lazy students just didn’t learn.” How comforting that must be, I’m sure.

  • Jerrid Kruse says:

    Thank you, John Clement, for pointing out how serious a flaw it is to treat 3.6 percent of a standard deviation as meaningful when it is so small. Another serious flaw with this interpretation, and perhaps the TIMSS study in general, is that teachers “report” how much time is direct instruction. Different teachers will have different meanings of direct instruction, AND teachers often don’t realize what percent of their teaching is direct instruction, so we cannot reliably believe the percentage of direct instruction reported.

    Additionally, do we really want kids who can absorb lots of info, but not solve problems? Of course, the two are not mutually exclusive, but given the ubiquity of info in today’s world, I’ll take a problem solver over a test taker any day.

  • Bruce Smith says:

    Pace Harvard, but I’m going to suggest readers visit Andrew Gelman’s blog (he’s a prof. of statistics from Columbia, but, hey, it’s still Ivy!) and look at any of a number of his explorations of the weaknesses of using statistical significance to interpret small differences. This seems to be yet another example. Here are two references to get you started that deal with his reflections on stories recently receiving major media coverage: http://prefrontal.org/files/posters/Bennett-Salmon-2009.jpg
    And for the second one, Google this title (the link is many lines long): “Of Beauty, Sex, and Power”
    Then I think you’ll read these kinds of reports more skeptically when you encounter them.

  • […] A clueless Harvard professor talks about how direct instruction works better, and how that’s too bad because “problem-solving classes… require less preparation and are easier to teach.” Happily, many comments on his piece take him on, as does Walt Gardner at Education Week. How can anybody even think about comparing their experiences in teaching a class in Harvard to teaching in a K-12 public school, much less write about it publicly? […]

  • Corey, NBCT says:

    Part of the problem in education is that we are relying on a “this way” or “that way” approach to this debate. It’s not a matter of saying lecture works, or exploratory works. They both do. There is nothing wrong with merging the two. Lecture doesn’t have to be 50 minutes long, and exploratory doesn’t have to be 50 minutes long. A good teacher can combine the two formats and use them together to further their kids’ understanding.

    Some days, lessons, or parts of lessons will be exploratory, some will be lecture.

  • Marc says:

    This is a horrible article that has no merit. Providing formative assessment activities takes a great deal of time and effort from the teacher, particularly if you are providing choice. Peterson needs to read CASL by Stiggins.

  • Karen Leigh says:

    As somebody mentioned in an earlier post, it depends on the “lecturing” technique. If you’re talking about just standing at a podium and talking, I can’t see that ever being the best way to teach kids of any age. BUT if the lecture is rich with passion and movement, and includes great questioning technique along the way, then it is one of the most effective ways to guide students into the discovery of new information!

  • Tom says:

    I recently read an article that supports the fact that direct instruction correlates with better test scores when you look at the effect over a short period of time. Yet, when longitudinal studies are done, they suggest the impact of direct instruction is less effective in truly educating our students. I, for one, am a guide on the side as much as possible, and I believe that this approach is better preparing my students for the world they will be entering at the end of next month.

  • Jessica Piper says:

    It is super patronizing to say that teaching inquiry-based, project learning takes less effort. Really? Um, try it and then get back to me.

    I’ve done both, and it takes WAY more effort to teach through inquiry than to stand and deliver…give me a break.


  • Sally Mascia says:

    Comparing apples to oranges. Math and science are accessible in different ways through different instructional means. To compare the test scores in two different content areas taught by teachers with two different teaching styles creates so many variables that the study is uninterpretable. Another case of poor research methods.

  • sbfpsopranoNBCT says:

    Amen. Wonderful comments! A good teacher is able to find the correct method of delivery for the topic at hand. Some topics require lectures, and some require the discovery method. It requires much more planning to allow the students to be active and engaged than to stand up and deliver a lecture. This is another example of how people who have never been a teacher have absolutely no idea of what is required to do it well.

  • Heather says:

    If all we want is kids to pass tests, then direct instruction is the way to go.

    If we want kids to be able to think, solve problems, and be creative, direct instruction is not the way to go.

    And any classes that I’ve had kids figuring things out have required MORE preparation than direct instruction because I have less experience and skill in successfully setting that stage.

  • wardan says:

    When will educators stop pitting one solution against another as a way to find the only “right” way to teach? Most of us who are in the classroom know that many methods must be used in order to be effective, and that includes lecture, problem solving, and everything else. Problem is, standardized tests don’t test everything else.

  • Richard Bramer says:

    The research quoted is almost ludicrous. I was quite interested in the result at first, because I believe teachers should have a tool box with a variety of tools, ready to go at all times.

    This research, however, groups ‘students working on problems WITH teacher guidance’ with ‘students working on problems WITHOUT teacher guidance’ together, as if, in a problem-solving milieu, the teacher’s guidance was irrelevant.

    This is, of course, ridiculous. Lecturing, or direct instruction, will surely win out over any method which includes students working solo. If students working solo were efficient, schools would be unnecessary.

    Additionally – prep time for discovery learning is not less than for lecture, in my experience. Lecturers prep once and work those notes for decades (at least potentially). Activities need props, scripting and practice. Not harder, necessarily, but not easier.

    I was hoping for more from this article.

  • […] E. Peterson points to new research finding that 8th graders who received more direct instruction scored higher on an international math and science test than those whose teachers’ predominant…. He notes that this is the direction that KIPP and other charter schools have already been moving, […]

  • Maria Cianciolo says:

    In Asian cultures like mine, direct instruction or lecturing is prevalent. It works because of the ‘power distance’ between teacher and student. As an adult learner in the American system, I always thought, “Just give it to me and let’s not waste time.” Needless to say, across the board, Asians fare better in literacy, math, and science. But what works for us Asians may not necessarily work here.

  • Barry Garelick says:

    “I recently read an article that supports the fact that direct instruction correlates with better test scores when you look at the effect over a short period of time. ”

    What article was that? Could you provide a link or the name of the article and journal it was in?

  • John Thompson says:

    Has anyone considered the idea that problem solving requires stronger content understanding and more recognition of student difficulties by the teachers? Did they control for the skill and knowledge of teachers in this study?

  • Allison says:

    Any teacher who says that problem-based learning takes less time and effort than lecturing is not doing justice to PBL. Finding a great way to structure a “messy” problem that leads students to discover what you want them to find is not really easy. I use both lecture and PBL in my social studies classes, and I can assure you, I am far more weary after a PBL unit than a lecture unit!

  • gary B AU says:

    are we talking about guided instruction, which is interactive and building on student prior knowledge?

    sounds like lock step constructivist classes..something Dewey was advocating!!!

    the advantage of problem solving is the student is encouraged to frame then solve the variety of solutions in their own way and in their own time.

    guided instruction is an impatient method of forcing a timeline on all the learners.

    the more able of course are not so worried by being hurried along…the less able well they soon give it up as they do not understand before they must move on

    the distance from the test is the determining factor here

    if ‘only’ a few weeks, then all students will carry enough to score well

    if a semester based examination, with high stakes testing….no prizes for guessing the less capable students will self select out and not expose themselves to the unrewarding experience of being told they cannot work fast enough.

    in the end it’s all about reward for effort

    maths traditionally only rewards those who are fast and accurate, anything else is deemed failure…about two thirds of the population!!

    science understanding maybe a better measure of learning

    though the TIMSS chosen classes are not always random; several countries test only their high-achieving specialist maths/science schools…hardly an apples-with-apples comparison.
    Australia and the US do not screen their schools before the TIMSS…only attack them after, when the students do not score as highly

  • Greg Gero says:

    Jim Stigler, author of several articles based on the TIMSS (not TIMMS as written by Peterson) research, points out that the Japanese teachers actually required students to “struggle” with a math problem (i.e., problem solve). U.S. teachers, on the other hand, spent most of the class time rehearsing math procedures without making connections to underlying concepts.

    Stigler’s findings are based on hundreds of hours of video footage rather than teachers’ self-reports. As he pointed out in one of his articles, many of the U.S. teachers’ descriptions of their teaching do not match the characteristics observed in the video.

  • HPortnoy says:

    I’m amazed that we are forever doing “either/or.” Those of us who have successfully used inquiry and problem based instruction have probably always used some direct instruction and lecturing. It’s both/and.

  • Joyce Griggs says:

    Students must process information before they can retain it. I challenge the teachers using primarily lecture to have their students repeat back to them what they just learned by putting the new content into another format, such as a graphic organizer or a detailed summary, and be able to explain it. A student who has framed the learning for him/herself by asking questions, engaging in hands-on activities, and analyzing the information will have a deeper understanding than the student who sits passively taking notes. Students are not robots, they have different learning styles and need a combination of presentation methods to successfully grasp a concept, determine its relevance, make connections, and place the information in long term memory. Learners must make meaning for themselves. A teacher facilitates that learning using whatever method works best. Adding ten minutes of teacher talk is not necessarily going to accomplish understanding. Adding ten minutes of student processing the information will.

  • Jack Drury says:

    I feel sorry for his students if he thinks problem-based learning takes less preparation and is easier to teach. What does he do regarding formative assessment? To do a good job at what I prefer to call the SPEC (student-centered, problem-based, experiential, & collaborative) approach takes a tremendous amount of preparation and follow up.

    For all approaches to teaching, some do it well and some don’t.

  • BubbaTeach says:

    There’s a difference between ‘covering’ the curriculum and actually teaching it to mastery FOR each student…one viewpoint is instructor driven (top-down) and the other is student driven (student-up)…when I read this I hear a confusion between the two. I will say from experience that, as a result, we’ve created students (talking 4th and 5th grade) who simply don’t know how to struggle and overcome, as one commenter stated earlier.

  • AHalter says:

    I do love the dialog that ensues when an issue like this is cracked open! Great food for thought on both sides of the fence. I do however take issue with Mr. Peterson’s statement that inquiry and project based lessons “require less preparation and are easier to teach.” As an instructional coach, I find the opposite to be true. Effective problem solving and inquiry based lessons are a challenge to design and implement, and require much more calculated assessments (formative and summative) to measure them. When done effectively, they can help push students to rigorous levels of understanding and relevant application that a lecture and a test simply cannot attain.

  • Linda Colloran says:

    I don’t think this study clarifies anything. Read the study–the 2003 data analysing the scores of 10 students per instructor–how does that lead to good results? As education becomes data driven, it is critical that we as teachers learn to evaluate the data used and the correlations derived from the data. Too many of us don’t know how to do studies that truly measure what we want to know; the summary and discussion give this report credibility it doesn’t deserve.

  • Bill McDonald says:

    I, too, agree that lecture (in all its forms) and inquiry-based problem solving are not necessarily at odds in the classroom. Good for some things, not for others.

    I’m intrigued by the idea that the amount of lecture versus problem solving was self-reported. I’ve found that it’s difficult to accurately self-evaluate the actual amount of time spent on each unless it’s done empirically. And self reporting could be biased if the popular trend favors one over the other.

  • Prof. emeritus Conrad P Pritscher says:

    When remembering lower-level concepts is the goal, direct instruction may be better. A prominent psychologist several years ago predicted that by 2020, our fund of knowledge would be doubling every 17 days.

    Open inquiry is learned through openly inquiring. Open inquiry is often not the primary goal of instruction when someone lectures. Freeing students to study what they find to be remarkable, interesting, and important is the key. Carl Rogers’ Freedom to Learn elaborates on this, as does the book Re-Opening Einstein’s Thought: About What Can’t Be Learned from Textbooks (or lectures).

  • James A. Pietrovito says:

    Since when does considering one class of 8th graders for anything become a “study”?! There is no generalizable information to be taken from this activity.

  • Paul Hillyer says:

    Interesting. I thought the countries that scored highest on TIMSS actually had the most problem-based approaches, such as Singapore. Now you are telling me all these former studies are wrong? Researchers on one side or the other certainly need to go back to math class themselves. That is quite a mathematical discrepancy.

  • Susie says:

    Good teachers use a variety of teaching methods depending on the content. And besides that, direct teaching is not just lecture/recitation. It has to do with making the goals clear, giving plenty of time for practice, etc. It’s not an either/or proposition – it’s providing the right instruction for what is needed. Good grief.

  • Darren Mead says:

    This article is the educational equivalent of Creationism: smoke screens, selective evidence, and pseudo measures.

    Get the facts–read Hattie’s meta-study and do as he says–education is more than test scores. We have a responsibility to also prepare students for the real world.

  • James Wohlgemuth says:

    As a middle school social studies teacher, I have been encouraged and more than willing to set up any number of problem-solving, self-teaching, cooperative-learning, jigsaw, and other child-centered activities for my students. I have also realized I cannot always do these, and there must be times of direct instruction. My dilemma comes when I give formal or informal assessments. No matter how much scaffolding, coaching, guidance, and requirements I set for the child-centered activities, the students do better after direct instruction. Is this disappointing? You bet.

    So I was interested to see what these results were hoping that I could see that it was a problem with me. I was hoping to learn that if I only did one more thing to add to the station or just a little more guidance to help the jigsaw, that the kids would take these more exciting and stimulating activities and turn them into learning.

    So the problem remains: how do we make learning fun and stimulating and still achieve the learning we want?

    We must realize at this point that neither students, nor parents, nor teachers will be willing to abandon child-centered learning for direct instruction.

  • Jaye says:

    “I, too, like those problem-solving classes. They require less preparation and are easier to teach.”

    No. If “problem-solving classes,” as Peterson labels them, are well-facilitated, they require at least ten times the preparation. However, the understandings students gain will be life-long, not test-long. I find Frank Smith’s work in this area very revealing, and it makes sense.

  • Ken Jensen says:

    The issue is not so much whether we should or should not use direct instruction, but rather why it’s being used and does it create dependence or independence in the classroom. If an extra 10 minutes of direct instruction helps to unlock the big idea so the students now have something to struggle with then this instructional move has created independence. On the other hand, if it gives students the procedures and answers, then it has removed the struggle, stripped the rigor, and kept the students from learning. Learning requires struggle.

    The problem is that direct instruction usually occurs when a teacher does not believe that a student can do the work without their direct intervention. The belief that students can’t do it without me causes students to become dependent on the teacher, and too many times the teacher becomes the only mathematician in the room. On the other hand, if the direct instruction sets the stage for the struggle where the teacher then takes on a facilitative role of encouraging the effort and sense making on the part of the student, then the students become independent in their thinking and they take on the role of mathematicians in the classroom.

    Belief in students is paramount. The key question is, “does the teacher believe the students can do this math?” Direct instruction usually means the answer is no; facilitative instruction usually means the answer is yes.

  • Mike Anderson says:

    Need it be one or the other? What about ten minutes of direct instruction, to teach concepts and content, followed by a period for exploration, integration, and discovery? A final five to ten minute wrap-up to help pull things together, and you’ve got yourself a well-balanced lesson that will allow students multiple ways in which to learn both content and good process skills.

  • Silvia Carranza says:

    Those of us who have been around for a while remember the value of Madeline Hunter’s 7-step lesson plan, which provided for 1) a motivating hook, 2) direct instruction, 3) guided practice, 4) independent practice, 5) evaluation, etc. We need to keep in mind that teaching is not about using one approach to the exclusion of all others. The best approach is an eclectic one that allows you to differentiate instruction to meet the needs of the students you teach.

  • Amanda says:

    Admittedly, I am not overly familiar with the TIMSS test; however, it sounds a lot like teaching to a type of standardized test. I don’t agree that problem-solving pedagogy alone, whether guided by an instructor or not, is the best way to teach. I believe it takes a variety of methods, because you will never get all students to learn from a single pedagogy; some students simply do not learn well from being given real-world problems and working through them, whether they have help or not. It is my understanding and experience that a single method is simply not realistic or beneficial to students as a whole, and the right mix also depends, at least in part, on the content you are teaching.

    This is especially true in Agriculture Education. While I have lectured and had students do their own research on gardening and the importance of starting some plants indoors, then transplanting them outside to the garden when the weather and soil temperatures are fit for the plant, they still have a “deer in the headlights” look. I firmly believe that this is one of those times where experience and problem solving, as they complete the garden, are the only true way for them to fully comprehend the importance of the information I have lectured on and they have researched.

    This is also true for say family and consumer science classes when looking at preparing meals. You can lecture and give directions as many times as you want but there are students who simply can’t put together what each of those steps would look like or what effect they truly have on a finished product until they actually do it.

    I know some of you may say that these are all career and technical courses, so it is as if I am comparing apples to oranges. But career and technical courses benefit many students, especially students with special needs. In some states, there are now agriculture education instructors who can teach classes for science or math credit if they have the endorsement and can prove that their curriculum for the chosen course covers specific standards or benchmarks for the core area at hand.

    If we continue to teach students to simply know the processes or formulas for math and science but they do not have a real foundational UNDERSTANDING of the concepts and materials then they will not be able to apply them to real world problems when they go to enter a career. We need all types of pedagogy to truly make our students successful not only in their educational careers but also in the lives ahead of them.

  • Mary Feldstein says:

    Since when is lecture the same as Direct Instruction???
    Educators know the difference between the two and DI is a well researched effective teaching model. And kids actually learn – not just pass the test. Though if we assume that what’s on the test is worth knowing then I guess it’s a good idea the kids can pass the test too.

  • Meg says:

    Another methodological flaw in this study is that what teachers reported doing in terms of having students do “problem-solving” could be interpreted in different ways by different teachers completing the survey. While some of those might be using a problem-based inquiry approach that emphasizes students’ collaborative and active engagement in mathematical thinking, surely many others would consider having students do mathematical “drill-and-kill” problems to be “problem-solving.” (I have certainly heard many teachers refer to it as student problem-solving when they are giving independent computation practice, for example!) Thus, one is mixing apples and oranges but trying to make a claim just about apples. This is a remarkably poorly designed study, even aside from the lack of statistical significance.

  • Belinda Stanley says:

    As a biology teacher for the past twenty-seven years, I can testify to the fact that lecturing is a breeze compared to planning, setting up, and guiding students through meaningful lab activities. A combination of both is absolutely necessary – that should be obvious to anyone who has actually taught science. My eighth grade earth science teacher was a woman micropaleontologist who graduated from Cornell in the 1950s. She lectured, we took notes. She taught us to identify minerals and rocks, draw weather station symbols, and read USGS maps. Her secret: She was passionate about her subject and that made me want to learn.

  • David Coffey says:

    Too many thoughts to comment about here.

    Blogged about it instead.


  • Kim Marxhausen says:

    So direct instruction (as you define and measure it), as opposed to inquiry teaching (again, by your definition and measurement), is a tiny bit better at getting good test performances? Two problems: 1. As a teacher you can’t depend on a tiny bit better; you need 100% effectiveness. 2. Once kids leave school, how often will they be taking tests vs. problem solving in a social context? Do we want performance or productiveness?

  • Bob Calder says:

    Previous research on project-based pedagogy compared across different systems demonstrated that US middle school teachers misused project lessons by not prepping and reflecting, compared with teachers in the Netherlands, Singapore, and Australia. US teachers were also typically not degreed in science nearly as often as teachers in those other countries. I therefore doubt the validity of any conclusion based on what pedagogy was used.

  • Trent says:

    I feel as though we are taking the principle of Bush-era foreign policy into the pedagogical debate – you are either with us or against us…..

  • Terry Shaw says:

    A couple of obvious problems here. Self-report data, as mentioned earlier, is notoriously unreliable. It is based on each individual’s perception of the meaning of the question.
    A more serious problem is that the hundreds of videos taken of classroom instruction in eighth grade math classes in the US, Germany, and Japan as part of the TIMSS study clearly showed that the dominant instructional strategy in the US and Germany is direct instruction followed by drill. The dominant instructional method in Japan is very inquiry-oriented, student-centered with the teacher presenting a problem, students working in groups to try different strategies to solve the problem. The teachers in Japan are definitely “guides on the side.” Now let me try to remember, how did the TIMSS results of these three countries compare for 8th grade math? (In case you don’t know, Japan was near the top; the US and Germany were in the middle of the pack.)

  • Christalina Donovan says:

    Au contraire, Pierre. I take issue with the author’s remark that inquiry-based learning lessons do “require less preparation and are easier to teach.” True facilitation, supporting students in their problem-solving and analysis, is MUCH harder than just direct lecturing and letting them copy. I teach Interactive Math Program (IMP) high school math from Key Curriculum Press. On bad days, I give up and lecture!

    As I recall the TIMSS results, higher scores were from countries who explored more abstract math theory, rather than just algorithms for solving math problems.

  • […] Here’s the initial article by Paul Peterson, which basically summarizes the results of a research study in which American students were studied for their learning responses to lecture-style (“sage on the stage”) versus problem-based (“guide on the side”) teaching practices:  “Eighth-Grade Students Learn More Through Direct Instruction” […]

  • Daniel R. Venables says:

    “I, too, like those problem-solving classes. They require less preparation and are easier to teach.”

    Are you kidding me? A problem-based, student-interactive lesson takes more prep. It’s way easier to yap from notes (or not) you’ve used for a zillion years. The execution may be less work in the sense that the students are “doing” rather than the teacher, but the prep is considerably more. Whoever doesn’t think so, doesn’t really do them.

    And 3.6% of a standard deviation? I have a master’s in math and I don’t know what that means, statistically. .036 of a standard deviation? Whose? The lecture group or the problem-solving group? Surely, their standard deviations aren’t the same. Talk about useless statistics….

    What was the p-value for the difference in their means? This study is riddled with gobbledy-gook.
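For readers puzzled by the phrasing: “3.6 percent of a standard deviation” is a standardized effect size, the raw score difference divided by a single reference standard deviation (typically the pooled, student-level SD of the test scores), so the two groups need not share an SD. A minimal sketch, in which every group mean, SD, and size is hypothetical; only the TIMSS scale convention (mean 500, SD 100) is real:

```python
import math

# Minimal sketch of a standardized effect size: the raw score difference
# divided by a pooled reference SD. All group means, SDs, and sizes are
# hypothetical; only the TIMSS scale convention (mean 500, SD 100) is real.
def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

mean_lecture, sd_lecture, n_lecture = 503.6, 99.0, 3000
mean_problem, sd_problem, n_problem = 500.0, 101.0, 3000

sd_ref = pooled_sd(sd_lecture, n_lecture, sd_problem, n_problem)
effect = (mean_lecture - mean_problem) / sd_ref  # effect in SD units

print(round(sd_ref, 1))  # 100.0
print(round(effect, 3))  # 0.036, i.e. "3.6 percent of a standard deviation"
```

So neither group’s own SD needs to equal the other’s; a pooled or overall student-level SD is the usual reference, which answers the “whose standard deviation?” question, and the headline effect corresponds to a difference of only a few scale points.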

  • Barry Garelick says:

    Terry Shaw said: “A more serious problem is that the hundreds of videos taken of classroom instruction in eighth grade math classes in the US, Germany, and Japan as part of the TIMSS study clearly showed that the dominant instructional strategy in the US and Germany is direct instruction followed by drill. The dominant instructional method in Japan is very inquiry-oriented, student-centered with the teacher presenting a problem, students working in groups to try different strategies to solve the problem. The teachers in Japan are definitely “guides on the side.” ”

    Please see this paper which analyzes the videos referenced above: http://www.cs.nyu.edu/faculty/siegel/ST11.pdf

    The videos show that the instructors provided instruction prior to posing the problem, and that in the discussion period that followed, no students succeeded in solving the problem.

    See also my commentary on Jay Greene’s blog at http://jaypgreene.com/2011/01/23/the-educationist-view-of-math-education/

  • Jim Nolan says:

    I am astounded that this result could be so misinterpreted. 3.6% of one standard deviation equals 3 months of learning.

    If 3.6% of a standard deviation really equals 3 months of learning, then moving one full standard deviation would equal about 83 months, or nearly 7 years, of learning.

    You have to be kidding!
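Whatever the source of the “3 months” figure, the extrapolation is easy to check, and a commonly cited benchmark from the effect-size literature (an assumption here, not a number from the article) puts one school year of learning at roughly 0.25 to 0.33 standard deviations. A quick sketch:

```python
# Checking the months-of-learning extrapolation in the comment above.
# Claim under test: "3.6% of a standard deviation equals 3 months of learning."
effect_sd = 0.036        # the study's effect, in SD units
claimed_months = 3.0     # months of learning attributed to that effect

months_per_sd = claimed_months / effect_sd
print(round(months_per_sd, 1))       # 83.3 months for one full SD
print(round(months_per_sd / 12, 1))  # 6.9 years -- implausibly large

# Under a commonly cited benchmark (an assumption, not from the article)
# of 0.25-0.33 SD per 9-month school year, the effect is closer to a month:
months_low = effect_sd / 0.33 * 9
months_high = effect_sd / 0.25 * 9
print(round(months_low, 2), round(months_high, 2))  # 0.98 1.3
```

Either way the arithmetic supports the commenter’s point: 3.6% of an SD cannot plausibly equal 3 months of learning.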

  • jamie says:

    Three words that are prevalent in today’s education: “Assessment drives instruction.” Solely lecturing to middle grade students for a whole period is a bad idea; however, mixing it up with other activities isn’t so bad. Doing all authentic activities is also a bad notion, especially when you have a plethora of other objectives to address within a certain period of time. I try to do a mixture. Knowing that my students will be seeing a teacher next year who is THE SAGE on the Stage, I try to prepare them for that with some entertaining lectures and with some problem-solving/authentic activities interjected.

  • Danaher M. Dempsey, Jr. says:

    Let us examine this controversy using a larger analysis…

    Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement

    [effect sizes from “Visible Learning” by Hattie: the hinge effect value of 0.40 or greater indicates an intervention is likely to bring success]

    a. Inquiry-based teaching (0.31)
    b. Problem-based learning (0.15)
    c. Differentiated instruction (no empirical evidence)

    d. Project Follow Through’s recommendation for Direct Instruction (0.59)
    e. Problem-solving teaching (0.61)
    f. Mastery learning (0.58)
    g. Worked examples (0.57)

    Amazon reviewers of Hattie’s book wrote:

    Do you know that certain types of structured active learning with strong teacher control work miles better than discovery learning or problem-based learning?

    G W PETTY “Liz Singh” (Hay on Wye UK) –
    Few books on education persuade us to see more truthfully and anew, or show us the way to do better for our students. This one does both.

    Hattie has spent decades collecting data and conclusions from over 800 authoritative summaries of research, to compute average “effect sizes” which measure the impact of a host of influences on student/pupil attainment.

    This book is the most objective, wide ranging and authoritative summary of education research we are likely to see this decade. There is little comfort here for governments, or for the educational establishment, but there is illumination for both.

    To ignore this book is to remain willfully blind to what really matters in education.

    This is a detailed contribution to the educator’s library, on the important theme- what affects educational outcomes for our students. Given the size and detail, it is best suited to the educated professional, but is also accessible enough for the educated reader – though having little opportunity to affect any change may prove frustrating.

    There’s a lot already known about what works and what doesn’t in education. This book summarizes nearly all of it. Before you start or commission another study, look here.

    John Hattie leads by saying that nearly everything works. I suppose that’s because humans learn naturally. The question is what works well. Hattie shows that simply being in a classroom for a year has an effect size of 0.4 so the important innovations must have an effect size greater than that.

    One caution: Many of the ideas have very narrow definitions when they are measured in this book. So, before pushing a concept with a high effect size or dismissing something with a low one, be sure to read Hattie’s commentary and really understand what the studies have shown.

    Byron Geoffrey Farrow (Suzhou, China) –

    Evidence, evidence, evidence ….,
    It’s the evidence, stupid. Somewhere near the end of this magnificent and vital book there is a quote relating to the practice of medicine through the ages. To paraphrase it refers to the development of medicine throughout most of recorded history as a bloody progression of trial and error (generally in that order and with those effects), where the opinions of influential thinkers tended to hold sway for millennia, and possibly the least scientific enterprise possible – for most of the last few thousand years, if you want to get better … avoid a doctor! Only with the advent of evidence based medicine and clinical trials did the avowed aim of making people better start to be met.

    Only now is education starting to emerge from this pre-scientific dark age. ….

    But somewhere in the last few decades people started doing real, scientific, evidence-based research on what works in teaching and learning. Individually these studies may sometimes be limited and hard to work through, but taken collectively as a meta-analysis – as John Hattie has done here – certain trends become clear. Oh, and note that the title refers to achievement – that’s what matters, not what makes teachers or government ministers happy.

    However, a teacher’s time in the classroom is limited – so Hattie’s work allows us to select the most effective strategies to spend our time with.

    Medicine went from an immature profession to one based on evidence because the clients demanded it. Only with lots of pressure will it become a mature profession.

  • Danaher M. Dempsey, Jr. says:

    From the beginning: “Should teachers stand in front of the class and present the material to be learned? Or should learning be more dynamic, with students solving problems, either on their own or under the teacher’s guidance? Which approach yields the most student learning?

    Opinion on this question is deeply divided.”

    YUP the opinion is divided …. because educational leadership opinions are rarely based on all the relevant data and facts are often ignored.

    Certain types of structured active learning with strong teacher control work miles better than discovery learning or problem-based learning.

    From NMAP Foundations for Success
    page xxiii paragraph 27…

    27) Explicit instruction with students who have mathematical difficulties has shown consistently positive effects on performance with word problems and computation. Results are consistent for students with learning disabilities, as well as other students who perform in the lowest third of a typical class. By the term explicit instruction, the Panel means that teachers provide clear models for solving a problem type using an array of examples, that students receive extensive practice in use of newly learned strategies and skills, that students are provided with opportunities to think aloud (i.e., talk through the decisions they make and the steps they take), and that students are provided with extensive feedback.

    This finding does not mean that all of a student’s mathematics instruction should be delivered in an explicit fashion. However, the Panel recommends that struggling students receive some explicit mathematics instruction regularly. Some of this time should be dedicated to ensuring that these students possess the foundational skills and conceptual knowledge necessary for understanding the mathematics they are learning at their grade level.

  • x says:

    I have issues with the statistics in this article. In valid research, for a result to be statistically significant (that is, most likely caused by the “treatment” and not by random chance), the result must be at least 2 full standard deviations above or below the mean of the data. According to the author, “Students learned 3.6 percent of a standard deviation more if the teacher spent 10 percent more time on direct instruction.” 3.6% of a standard deviation is not statistically significant and the differences in learning between the two groups cannot be attributed to the “treatment”. More (valid) research is needed to answer the author’s questions.
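One caveat on the “2 full standard deviations” criterion: in conventional inference the roughly-2 threshold applies to the estimate divided by its standard error (the test statistic), not to the effect expressed in SDs of the score distribution, and the standard error shrinks as the sample grows. A sketch with hypothetical group sizes (the study’s actual sample and model are more involved):

```python
import math

# Significance depends on the effect divided by its *standard error*, not on
# the effect spanning 2 SDs of the score distribution. Group sizes below are
# hypothetical; the study's actual sample and model are more involved.
def two_sample_z(effect_sd, n_per_group):
    """z statistic for a mean difference expressed in SD units, assuming two
    independent, equal-size groups with unit standard deviation."""
    se = math.sqrt(2.0 / n_per_group)  # SE of the difference, in SD units
    return effect_sd / se

print(round(two_sample_z(0.036, 100), 2))    # 0.25 -- not significant
print(round(two_sample_z(0.036, 10000), 2))  # 2.55 -- clears the ~1.96 bar
```

So a 0.036-SD effect can be statistically significant with a large enough sample; whether it is *practically* significant is the separate question several commenters raise.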

  • Erika Burton says:

    I cannot disagree with the result that teaching to a test yields higher test scores. What is still debatable is the students’ application of the concepts learned to new situations down the road. We know that direct instruction is effective at helping people understand material in the short term, but memory does not hold onto information it deems irrelevant to one’s life.

    Do we want thinkers or regurgitation? That is the question.

    Erika Burton, Ph.D.
    Stepping Stones Together, Founder
    Empowering parental involvement in early literacy program

  • Dennis Paquette says:

    This article perpetuates a common misconception about inquiry. It implies that inquiry is simply a method for delivering content to students. The primary reason to teach with inquiry is to teach students the skill of inquiry. Problem solving is a skill students desperately need. Students will not learn that skill if we are simply filling them with content. They will only learn it through guided practice. This article and the study it was based upon compare how much content students learn and discount the fact that, when learning through inquiry, students gain a skill that I would argue is more important than any content we are teaching.

  • Donna Nicholson says:

    Is anyone recognizing that, for strong student learning, a portion of the class period (10-15%) is direct instruction, with the remainder based on active student participation: small-group work with teacher monitoring or differentiated instruction? A balanced approach to instruction and learning.
    In my past practice as a classroom teacher, and now as a literacy consultant, this is what works; or, in other words, gradual release of responsibility.

  • Thomas Andre says:

    John Clement’s comment, as well as others, captures much of what I was going to say. An effect size of .036 is minuscule. It could only be statistically significant with an incredibly small variance in the test scores (unlikely) or a very large sample size.

    But the more important concern is the predictive validity of the standardized assessment instrument for the ability to use knowledge productively in the future to carry out real achievements. While commonly called achievement tests, most standardized tests would better be labeled partial indicators or partial predictors of achievement tests (PPATs). Real achievements are what people can do in real settings: design planes, construct computer software, write logically reasoned letters and blog comments, balance a checkbook, write novels, build furniture, critically analyze news reports and opinions, and a myriad of other ways in which knowledge can be used. Achievement tests typically test only a small sample of the behaviors that constitute real achievements and at best predict such real achievements modestly. Even if the .036 effect size occurred in every instance of the use of these methods (an impossibility), were the correlation between the TIMSS PPAT and real achievement in the .3-.5 range, the impact on real achievement would be much smaller.

  • Elaine Pietka says:

    Anyone who has taught 8th graders knows that you need to use a variety of instructional methods to be an effective teacher of 8th graders. Direct instruction gives them the background knowledge they need to then problem solve or complete an inquiry-based lab. College and graduate students also need similar strategies, as do AP Biology and high school students. Each instructional strategy has value; it’s the relationship and interactions of students and teacher that improve “learning.” There are many variables involved that haven’t been mentioned.

  • Marcus Sundgren says:

    An effect size that small makes me suspicious of the validity of the statistical method. With two-stage sampling, as is the case with TIMSS and PISA, observations within a school or class cannot be considered independent. In short, students within the same school tend to be more similar than students from different schools. If the intraclass correlation isn’t dealt with, the effective sample size will be overestimated and the standard error underestimated, leading to statistical significance where there should be none.

    In the full report the authors state:
    “All estimation results take the probability weights into account and allow for correlation between error terms within schools. In addition, the two step procedure of sampling could be incorporated in the estimation of standard errors. For simplicity, we ignore the latter in the following analysis which then gives us conservative estimates of the standard errors.”

    I’m not confident enough in statistical methods to determine whether the authors have made a serious mistake in their analysis or not, but the quote above makes me rather suspicious about the results.
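The clustering concern above can be made concrete with the textbook design-effect formula for two-stage samples, deff = 1 + (m - 1) * rho, where m is the average cluster (classroom) size and rho is the intraclass correlation; naive standard errors that ignore clustering are too small by a factor of sqrt(deff). All numbers below are illustrative, not TIMSS values:

```python
import math

# Textbook design effect for two-stage (clustered) samples:
#   deff = 1 + (m - 1) * rho
# m = average cluster size, rho = intraclass correlation. Ignoring clustering
# overstates the effective sample size and understates standard errors.
# All numbers are illustrative, not TIMSS values.
def design_effect(m, rho):
    return 1 + (m - 1) * rho

m, rho = 25, 0.25            # e.g. 25 students per sampled class
deff = design_effect(m, rho)
print(deff)                  # 7.0

n_nominal = 8000             # nominal student sample
n_effective = n_nominal / deff
se_inflation = math.sqrt(deff)
print(round(n_effective, 1), round(se_inflation, 2))  # 1142.9 2.65
# A naive SE is too small by a factor of ~2.65 here, which is exactly how
# spurious "significance" can arise when clustering is ignored.
```

This is only a back-of-the-envelope device; the authors’ within-school error correlation adjustment, quoted above, is the standard remedy, and the question is whether it goes far enough.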

  • Jim Kopchains says:

    I feel that the debate about the two styles of teaching will never be resolved. There is just too much personality involved in each teacher’s decisions in presenting material. I believe that, with no clear answer one way or the other, teachers should have confidence and find a way of teaching that works for them.

  • kripik says:

    Substantial, engaging technical commentary on this board, but I’m left with a certain self-doubt. As I read the article, Peterson seems concerned with emphasis and time allotment in an instructional program employing varied methods for a range of objectives. He does seem to follow results based on suspect inferential procedures, vitiating the “research-based” aspect of his authority claim. Still, his concern would appear to me, at least, to be more a matter of methodological balance–a search for the right tools for a wide range of pedagogic “jobs”–than a quest for some sort of authoritarian “magic bullet.” In matters of research design (as a phase preceding and distinguished from data analysis), there is so much slack in observational standards that I’d regard anything short of jaw-dropping results as negligible noise in the system, useful primarily for purposes of the authors’ tenure and promotion at institutes of teacher education. (I think I agree with Meg’s remarks on this issue, and Barry Garelick’s comment above on “dominant pedagogic methods” in different countries goes to much the same point from a more practical and genteel perspective than my own.)

    James Garfield is not memorable for much, but his image of the university as “Mark Hopkins on one end of a log and a student on the other” stands up well for me. Regardless of pedagogic methodology, I want my ten-year-old daughter to pass her schooldays in the presence of liberally educated men and women who have mastered the subjects they care about and who respect the minds of young learners. In her formal schooling to date, nearly all of her best learning has arisen from informal interaction; the elegantly designed and executed lesson plans of her “highly qualified” teachers in one of the nation’s “lighthouse” school districts have produced listlessness at best, engendering ennui and alienation from school in general at worst. (They have also featured unrealistically ambitious, fabulously unsupported “independent and team-learning projects” thrown out into the livingrooms of the kids’ families, where well educated parents have assumed the Mark Hopkins role and made something valuable out of indefensibly inappropriate assignments.) One grows weary of an administrative system apparently blind to the pendulum swings of mystical faith in pedagogic methods, a faith based less on “evidence from research”–numbers are not “data”–than on the technological imperative that informs social policy in formally schooled societies generally. 
I have in mind here the concept defined by Lewis Mumford in these terms: “Western society has accepted as unquestionable a technological imperative that is quite as arbitrary as the most primitive taboo: not merely the duty to foster invention and constantly to create technological novelties, but equally the duty to surrender to these novelties unconditionally, just because they are offered, without respect to their human consequences.” Silvia Carranza aptly recalls one of those swings, when Madeline Hunter’s sound inferences based on extensive and disciplined observations were seized upon by administrators from sea to shining sea and distorted beyond recognition. Hunter’s remorse over her own role in that degrading “adoption” of her work is among the most poignant memories many of us carry of her neurological collapse and painful death. I trust that if she can look out on the current wave of asinine, reductive teacher evaluation programs–“We know what works!” as Michelle Rhee energetically reminds us–she must be shaking her head at the irrepressibility of human folly.

    I’d thought Peterson was looking beyond technomagical administrative taboos to find some sort of purposeful balance in classroom instruction, not trying to grind an ideological axe in the pedagogic theater of the school wars. As I say, his evidence seems slender to me, and his notion that inquiry-based instruction is less work than conventional lecture/discussion designs is disconcerting, but the quest for purposeful balance looked laudable. And I suppose that if I had to commit my child to the ministrations of technically disciplined but poorly educated instructors, then assuming they’re going to insist on getting in the kids’ way so that they can lay claim to the credit for whatever “measurable achievement” the students may “produce,” maybe I’d rather have them lecturing than dissipating the kids’ efforts at intellectual integration with rubric-oriented social games. I thought Peterson was expressing some such preference, too, but perhaps I misread him. Maybe I should be reading with a different purpose: I’d certainly hate to overlook the pedagogic magic bullet when it turned up plainly in the latest research-based PD kit from ASCD . . . .

  • Ingrid Salim says:

    For me, there are three major problems with this ‘research.’ First, the data reflect ONLY U.S. students’ performance using TIMSS criteria, not global outcomes — by most accounts of the highest performing countries in math and science, investigative problem solving is at the core of instruction, while U.S. curricula reflect many ‘working problems’ (in math) and ‘activities’ in science. Which leads to my second issue, which is that what is being referred to here as ‘problem-solving’ activities (and indicated as being easier for the instructor) is not that at all. Throughout the U.S., science instruction is a hodgepodge of activities that don’t necessarily lead up to anything. True problem-solving involves investigation into why things work, and the working out of ‘models’ that explain observed phenomena. Such an approach is exhausting for every teacher who works to engage students in productive inquiry and levels of questioning without providing answers outright. Direct teaching, or summarizing of data and explanations of models, absolutely plays a role, but is MUCH less exhausting than taking an active role in leading students to form ideas. Finally, as others have pointed out, these ‘data’ are self-supplied estimations of how much time was spent doing very vaguely defined activities, and the 3.6% difference is hardly a mandate to change practice. Better, it seems to me, would be to identify what KINDS of direct instruction are effective (and how would one measure that?) and what KINDS of reasoning and problem-solving activities yield deeper understanding (and how can those results be measured). It seems to me that this entire ‘research’ was undertaken to prove the efficacy of a particular, private approach. I’m not convinced.

  • John Myers says:

    In addition to not distinguishing between structured (with direct teacher support) and unstructured problem solving, the original study lumps everything under “lecture-style.” Direct instruction, in which presentation is a part but not the whole, IS powerful for basic skills, but it is NOT the same as a lecture.

    We might want to look at the syntheses of research by such folks as John Hattie and Robert Marzano, who quite rightly are more precise in their use of terms and about the power and limits they see in each approach.

  • Audra Harriman-Gomez says:

    The “research” in this article leaves much to be desired.
    As a middle school teacher, I was offended by the claim that problem-solving classes “require less preparation and are easier to teach.” Really? Standing at the lectern spouting information without engaging students’ minds is a much easier approach than actually guiding students through the process of investigating and synthesizing information, not to mention preparing them for the world of adulthood. What kind of society would we be if we readily swallowed what was fed to us rather than questioned our learning, sought solutions to quandaries, and exercised that part of our mind that actually makes deep, memorable connections that serve as lifelong lessons? Being a “guide on the side” does not equate to taking a break from teaching while students perform guideless tasks. It is a way of instilling skills that are geared not toward a specific test but toward the essential ability to REASON and be a successful adult. After all, isn’t that the ultimate test? Sadly, I cannot think of a statement less true regarding teaching than the last sentence of Peterson’s article; it shows the results of looking at subjective rather than empirical evidence.

  • Ron Wilder says:

    Well, two aspects of this article stand out to me without even looking closely at the study. The first is the claim that there is a lack of empirical data on the effectiveness of various teaching styles, which simply is not true (see Marzano, Hattie, Stiggins, etc.). The second is the notion that teachers were self-reporting their direct instruction vs. problem solving. In my many years as a teacher, principal, and teacher supervisor, I have found that teachers frequently misunderstand the nature of their instruction and often mistake direct instruction for student-directed or teacher-facilitated learning. This single study does nothing to contradict the overwhelming data in support of a differentiated and largely teacher-facilitated approach. For me, it only underscores the danger of articles like this, which less-informed educators will then quote to justify ineffective practices. It makes a great byline for a magazine or newspaper, but it does very little to further understanding about best practices.

  • Education Next says:

    Guido Schwerdt, one of the authors of “Sage on the Stage,” responds to the above comments in “Defending the ‘Sage on the Stage’” – http://educationnext.org/defending-the-sage-on-the-stage/

  • Fredricka Reisman says:

    Where do creative pedagogy and a focus on student and instructor creativity come in?
    The country is in the midst of a global emphasis on creativity, with evidence that creativity can be taught and that instructors who teach to their students’ creative strengths produce active learners, more effective problem solvers and communicators, and students who are more engaged in learning. Creativity (the generation of original ideas) and innovation (the implementation of those creative ideas) underlie creative pedagogy.
    What is the relationship between creativity and academic achievement? Getzels & Jackson (1962) found that creative strengths were better predictors of academic achievement than were IQ scores. From the elementary level to the graduate level, creativity scores were either more effective than or equivalent to IQ in predicting academic achievement.
    How can the theoretical foundation of creative thinking serve as a heuristic for creative mathematics pedagogy? What perceptions do instructors have of creative students? Is there a disconnect between i) instructor perception, ii) creativity research on characteristics of creative students and iii) mathematics pedagogy?
    Torrance (1965), from a study of over 1,000 teachers in five countries, concluded that instructors may unduly punish students who are good at guessing and estimating, courageous in their convictions, emotionally sensitive, intuitive thinkers, or unwilling to accept assertions without evidence. On the other hand, instructors may unduly reward students for being courteous, doing work on time, and being obedient, popular, and willing to accept the judgment of authorities.
    Thus, it is suggested that we look beyond the heart of Professor Peterson’s article.

  • Kevin McCormack says:

    I am very surprised that an educator like Paul Peterson would say: “I, too, like those problem-solving classes. They require less preparation and are easier to teach.”

    They should be as demanding as, if not more demanding than, a “chalk & talk” presentation. In a problem-solving class, the teacher needs to have their “spider sense” up and be involved in every aspect of the classroom dynamic.

  • Jerry says:

    I have been teaching for 31 years, first at a middle school and now at a high school full-time, in conjunction with part-time college teaching. I believe the best lecture/“hands-on” split is probably 80/20 for college, 70/30 for high school, and 60/40 for middle school.

  • Jack R says:

    Lots of comments, and opinions.
    However, if you really want to read some good statistical studies on different instructional methods, look at the reports from “Project Follow Through” — which involved thousands of students in districts across the country over an extended period.
    An initial reference is http://pages.uoregon.edu/adiep/ft/grossen.htm or http://darkwing.uoregon.edu/~adiep/ft/adams.htm
    You can read more at http://www.projectpro.com/ICR/Research/DI/Summary.htm
    This study started in 1967 and ended in 1995, though it is not particularly well known in academia. “Constructivism” was not a term in use then, as this was prior to the widespread use of Cole’s translation of Vygotsky’s work (and a poor translation, from what I’ve heard). “FT” (Follow Through) compared 9 models of learning across different content areas and levels of learning.
    Before dismissing a current study for a ‘small effect size,’ one should read others with consistent findings. When a series of studies shows effects all in the same direction, even a small effect size contributes to the overall pattern.

  • Andrea Lynn says:

    Anyone with a basic understanding of their content can stand up and lecture for 50 minutes, and of course they will cover more of the curriculum. Peterson: “They [inquiry-based lessons] require less preparation and are easier to teach.” Wrong. To prepare for inquiry-based instruction, a teacher has to be much more skilled and put forth much greater effort in order to select or craft appropriate questions, gather resources, provide a structure of support within the materials used to ensure student success, anticipate possible student misconceptions and the best ways to guide students during the investigation, and decide how best to lead them to conclusions of greater significance beyond the lesson. Great teachers who know their subjects and their students well use this approach, not because it is easy, but because it engages students as they master the curriculum and develops in them the thinking skills they will need once the test is over. It is surprising and disappointing that a Harvard educational policy expert is so out of touch.

  • Doug says:

    The Harvard educational policy expert is citing a study that is part of a growing body of research showing that direct instruction works, and works well. A lot of the comments in this discussion thread have used the word “lecture.” A lecture is not the same as direct instruction. Direct instruction is usually modeled on Gagne’s nine events of instruction, which are:
    1. Gain attention
    2. Inform learners of objectives
    3. Stimulate recall of prior learning
    4. Present the content
    5. Provide “learning guidance”
    6. Elicit performance (practice)
    7. Provide feedback
    8. Assess performance
    9. Enhance retention and transfer

    As you can see, direct instruction is not lecture.

    Additionally, we in education do “drink the Kool-Aid” from time to time. We buy into one philosophy as “good” and characterize others as “bad,” almost as if a religious war were happening. Whole language versus phonics. Direct instruction versus a variety of “student-centered” constructivist methods of instruction. Direct instruction is “bad,” the others are “good.” This is a stupid thing we do to ourselves. There is a time, a place, and a use for everything, with different types of learners and different types of content.

    Educators need to stop believing in these false dichotomies of what is good and what is bad. Further, we need to stop preaching (word chosen purposefully) the superiority of one way over the other. This only perpetuates falsehoods and hurts students and teachers of all ages and at all levels.

  • Thomas Runes says:

    As someone whose job requires me to do more than a thousand classroom walkthroughs a year, I can tell you these middle school teachers are liars. That’s the danger in self-reported data. The norm in middle school is to spend 90+% of the time on lecture, per Doug Reeves’ research and my real-world experience. The reason there was such a small difference is that the study compared lecturers to lecturers who lied about doing PBL. They couldn’t possibly have found enough PBL teachers to run this study, because there aren’t enough of them out there.

    Dr. Runes

  • Ian C says:

    I believe that a balance of techniques is best. The issue at hand is that standardized tests are being used to judge our effectiveness as educators. No one is interviewing our students to assess their problem-solving ability. Therefore, many teachers who would put forth the time and effort to become effective at student-centered approaches succumb to the pressure of maximizing the already insufficient amount of time given to us to prepare students for these high-stakes tests. Until the emphasis on testing is removed by the state, school systems, and individual schools, and AYP can be measured in other ways, most teachers will rely heavily on direct instruction to save their jobs.

  • Larry Sandomir says:

    As an educator for 40 years, I can accept the debate between progressive education and up-front learning. Serious educators can be on either side and believe deeply in what they are doing. However, when Paul Peterson, both editor-in-chief and a member of Harvard’s world, says, “I, too, like problem solving classes. They require less preparation and are easier to teach … So I can easily understand why progressive pedagogy has proven popular. It’s more enjoyable for all concerned, even if sometimes you worry that you are not teaching very much,” I am outraged!

    First, if he has attempted progressive education in this manner, he is doing it incorrectly. Yes, of course it is more enjoyable, because teachers are interacting in deeply meaningful ways with students, which is one of the greatest blessings about being a teacher. However, it in no way requires less preparation and it is certainly not easier to teach. Further, I have never worried about teaching too little, because my students are constantly learning, growing, challenging themselves to go farther, venturing both vertically and horizontally, asking questions that allow for more profound discourse, and, in a far more permanent way than up-front lecturing allows, owning their learning moments.

    Second, planning and executing this way is far more involved, because a good teacher in this philosophy must keep in mind where each individual student is, what he/she needs at the moment, when to remediate and when to challenge, how best to attend to everyone in the group, and then, when that particular lesson ends, where to go next. It’s one lesson or unit times the number of students in the group.

    I can see how a mix of up front teaching and group work can be successful, for sure. However, in order to understand my students most effectively and for them to gain knowledge, skills, and a love of education that works best with each of their individual styles of learning, “problem-solving classes,” as Mr. Peterson puts it, are irreplaceable.

    Finally, Mr. Peterson’s condescending tone is offensive, since he states his points as if they were simply the way things are. While I have come to be less and less surprised by those kinds of comments, especially from those who support current educational reform in the guise of the charter movement that depends on drowning students in data and punishment (a la the KIPP philosophy), they still represent a dangerous and absolutely false assumption/stereotype about progressive education. I have worked in very successful, high-profile charters, and while they regulate and control, they do not truly educate. Mr. Peterson does a true disservice in his Summer 2011 blog.

