Data-Driven and Off Course


An English teacher’s view



Winter 2011 / Vol. 11, No. 1

While reviewing a practice passage called “The Night Hunters” for last year’s 9th-grade Florida Comprehensive Assessment Test (FCAT), I had to peek at the teachers’ guide to check my answer to this question: Which of the owls’ names is the most misleading?

I was stuck between (F) the screech owl, because its call rarely approximates a screech, and (I) the long-eared owl, because its real ears are behind its eyes and covered by feathers. The passage explains that owls hear through holes behind their eyes, so the term long-eared owl seemed misleading. Then again, a screech owl that rarely screeches? That is pretty misleading, too.

According to the FCAT creators, each question on the practice tests corresponds to a specific reading skill or benchmark. Teachers are supposed to discuss test results in afterschool “data chats” and then review weak skills in class.

Here is a sample conversation from a data chat, as imagined by promoters of this idea:

First Teacher: Well, it looks like my students need some extra work on benchmark LA.910.6.2.2: The student will organize, synthesize, analyze, and evaluate the validity and reliability of information from multiple sources (including primary and secondary sources) to draw conclusions using a variety of techniques, and correctly use standardized citations.

Second Teacher: Mine, too! Now let’s work as a team to help students better understand this benchmark in time for next month’s assessment.

Third Teacher: I am glad we are having this “chat.”

Here is a conversation from an actual data chat:

First Teacher: My students’ lowest area was supposedly synthesizing information, but that benchmark was only tested by two questions. One was the last question on the test, and a lot of my students didn’t have time to finish. The other question was that one about the screech owl having the misleading name, and I thought it was kind of confusing.

Second Teacher: We read that question in class and most of my students didn’t know what approximates meant, so it really became more of a vocabulary question.

Third Teacher: Wait … I thought the long-eared owl was the one with the misleading name.

At this point, data chats often turn into non-data-related gripe sessions.

When I interviewed teachers for See Me After Class, the unintended consequences of high-stakes tests came up most often among language arts teachers. They know that answering comprehension questions correctly does not rest on just one benchmark. Separating complex skills into individual benchmarks may well work in math class. Symmetry and place value, for example, can be taught independently of one another, and benchmark-based data may indicate which of these skills needs work.

Reading is different. After students have mastered basics like decoding, reading cannot be taught through repeated practice of isolated skills. Students must understand enough of a passage to utilize all the intricately linked skills that together comprise comprehension. The owl question, for example, tests skills not learned from isolated reading practice but from processing information on the varying characteristics of animal species. (The correct answer, by the way, is the screech owl.)

Unfortunately, strict adherence to data-driven instruction can lead schools to push aside science and social studies to drill students on isolated reading benchmarks. Compare and contrast, for example, is covered year after year in creative lessons using Venn diagrams. The result is students who can produce Venn diagrams comparing cans of soda, and act out Venn diagrams with Hula-Hoops, but are still lost a few paragraphs into a passage about owls. When they do poorly on reading assessments, we again pull them from the subjects that build content knowledge for more review of Venn diagrams. Many students learn to associate reading with failure and boredom.

It is difficult to teach kids to read well if they don’t learn to enjoy reading. It is impossible to teach kids to read well while denying them the knowledge they need to make sense of complex material. Following the data often forces teachers to do just that.

Roxanna Elden is the author of See Me After Class: Advice for Teachers by Teachers. She teaches high school English in Miami, Florida, and is a National Board Certified Teacher.

Comment on this article
  • Jennie Smith says:

    Roxanna is right on point in this article. “Reading” is indeed not made up of discrete skills that can be practiced in isolation; furthermore, as she points out, so much of reading comprehension relies on having background knowledge that it doesn’t even make sense to talk about “reading” as if answering questions on a text about astrophysics and getting information from an ad in the newspaper were the same thing. Once students can decode, their biggest problems in reading comprehension are a) lack of background knowledge on the subjects they are reading about, and b) a short attention span. Practicing “reading skills” in isolation will not build background knowledge (which, as Roxanna says, comes from subjects like science and social studies, which the test passages usually cover), nor will it expand their attention spans.

  • T. Lewis says:

    Excellent article! I wish parents realized that we are shortchanging their children when we focus on 10- to 14-paragraph “stories” to prepare them for standardized tests.

  • Ana Cristina says:

    Great article. I think a student’s reading ability has a lot to do with the way his/her family approaches the act of reading. It’s harder to help students with such skills as decoding if they don’t have much reading practice at home. And when teachers are forced to “teach the test,” little is done to address students’ specific needs.

  • David Henry Sterry says:

    Roxanna rocks! I have long been confused and dismayed by this test mania, and I feel it has absolutely set American education back decades. Thanks, Roxanna, for being a voice of sanity in a sea of madness.

  • George says:

    Reading has sadly become a utility-driven exercise in this information age. Students today have been conditioned to digest short, text-message-length passages, so that when confronted with full-length passages, many simply fall “asleep” mentally. To read full-length passages, such as those “boring” types presented on FCAT tests, students must be practiced in a variety of prerequisite mental skills involving sustained attention, imagery, and critical reasoning, to name but a few. Sadly, those skills are rarely exercised by the average teen when texting, “IMing,” chatting, or surfing the net. As teachers, I think we’d better start focusing on what these kids are going to need in their day-to-day lives and less on the microanalysis of everything in the classroom, which only serves to waste their time even more, especially at the high school level.

  • Tara says:

    I completely agree that teachers are being forced to focus on data-driven instruction and “test prep” curriculum. Every year, the majority of my students come into my class unable to read at grade level. Students struggle to comprehend passages that are too hard, and high-level vocabulary is frustrating and defeating. The focus and stress on these state tests take away from the basic foundation students need for a well-rounded education.

  • Maite says:

    Couldn’t have said it better myself, Ms. Elden … and have been saying it for years!

  • Melinda Ehrlich says:

    What I find amusing is all the technical rhetoric. Show me a kid who understands that he is “synthesizing, analyzing, and evaluating the validity and reliability of information from multiple sources,” and I’ll show you a wise old owl, a.k.a. a curriculum writer. Labeling these skills and asking students to be able to identify them is like visiting an Alzheimer’s patient and expecting him to remember that you visited. It’s for the teacher or, in the case of the Alzheimer’s patient, for the concerned family member.

  • Diana Senechal says:

    Thank you for this important piece. This should make anyone wary about programs like the “School of One,” which uses software to determine students’ skill needs. The software gathers “data” during the day (from students’ answers to multiple-choice questions) and then chooses lesson plans for teachers to deliver the following day. Students spend class time working alone, working in groups, interacting with virtual tutors, playing “educational” computer games, and participating in short lessons. Teachers find out by 8 p.m. which lessons they are supposed to give the following day and to whom. They may modify the lesson plans, but when you’re supposed to give five different lessons to five different groups, there’s only so much you are going to modify. So far, it is being used for math only, but I can’t see how it would be good for complex math topics. And it teaches students to rely on constant stimulation: the computer animation, the changing activities, and so on.

    Why are we giving up human judgment and complex topics, all in the name of data?

  • Anita Mattos says:

    The problem is not with using data to drive instruction; the problem is with the state test they are being told to use for this purpose. If this same set of teachers were to create a test together (or a performance task with a rubric, etc.) to assess what they know to be essential skills, then the data they gathered from that assessment would be meaningful and would help them plan and improve their next lessons. To use a cliché, “Don’t throw the baby out with the bath water.” Data should help inform instruction, but it needs to come from a reliable and respected source, namely, the teachers who know students best.

  • Data is Good says:

    I don’t agree with the premise of the criticism. The complaints are based on the test questions themselves, not on the information the data-collecting tool provides. If the questions are confusing (first teacher), that’s a bigger issue than data-based versus holistic grading. If high school students don’t understand the word “approximates” (second teacher), that is the issue we should be worried about. Data should be used as a tool to help identify, correct, and address these issues. Once the evaluation process is cleaned of these errors, you can begin using the data to analyze performance. The issue is not as black and white as everyone here is claiming. Without this formalized approach, every student will be graded on the “opinion” of a single teacher. Let’s be honest: there are good teachers and bad teachers. Standardized grades should not be the final say on the matter, but dismissing the importance of data is a way of handing our children’s education to Neo-Luddites. The world is competitive, and we can’t afford to do that.

  • Ric Seager says:

    Roxanna should relax …

    She clearly gets ‘it’ about data-driven instruction, and her article – through her actions and analysis – proves the importance of data-driven instructional improvement, even as she fixates on the wrong conclusion.

    The ‘it’ is that the data is irrelevant. Data has been gathered and housed by schools decade-upon-decade. Most schools have needed to convert their vast repositories of data to microfiche, then to digital format, and now have to store it in the cloud, as they cannot afford the costs associated with warehousing even DVDs of all their archived data.

    Data is not going to change anything in schools.

    Analysis is the key!! Analysis is what is important!!

    That the conversations in her ‘Data Chats’ degenerate into gripe sessions only reveals the need to use protocols and norms at their meetings. It does not invalidate the revelations that were part and parcel of this ‘chat’, and it most definitely does not invalidate data study as a crucial component of instructional improvement. Roxanna’s team got the ‘it’, but then proceeded to draw the wrong conclusion.

    Indeed, state assessments are NEVER going to give you the information you need to measure instruction; they only measure programming – and weakly at that. To measure instruction, you MUST use different assessments that are more closely tied to instruction and expectations in the classroom where instruction took place.

    If the conclusion is that the assessment tool is insufficient, given the complexity of the task, then the next question should be, ‘How can we better measure this skill, to assure our kids actually have mastered it?’ Needless griping about the tool will fix nothing, and they still don’t know whether their students have mastered the skill.

    The answer: Build a better assessment! Which, by the way, is a perfectly valid conclusion from such ‘Data Chats’. When the state comes to say you are not achieving, you will have the evidence to prove the absurdity of their assessment item AND you will know your students have mastered the skill. Of course, you may find that students still have not mastered the skill after using the better assessment, in which case…..

    Better yet, build an assessment profile that shows student achievement across the state test, a nationally norm-referenced assessment and a district-level, curriculum-based performance assessment of the skill. Then analyze achievement across this profile, rather than a single multiple-choice test.

    Drawing and fixating on the wrong conclusion does not invalidate the process, especially given that all the information you need to draw the right conclusion is right in front of you. Instead, ask your principal for QUALITY training in Professional Learning Communities – then you will build the skills necessary to draw better conclusions every time.

  • Cameron Evans (@educto) says:

    I like Roxanna’s observation. I disagree with the point that the data is irrelevant; I would assert that the data is merely incomplete, and the data chats revealed as much.

    In the scenario, the assessment didn’t capture the metadata around the question. It only looked for an absolute answer on a couple of measures. The assessment data did not indicate whether the required skills needed to solve the problem were observed elsewhere in the assessment, only that the ability to synthesize information was not present for all students. This data should also be compared with classroom performance in a composite view to see whether classroom learning is at the same rigor as the state benchmark assessment.

    Building a better assessment would require better linkages to all of the skills and prior knowledge needed to achieve a positive outcome. Until we do so, teachers may continue to do a lot of off-roading with the data views they have available.

  • Catherine Caldwell says:

    Ms. Elden’s piece was the most thought-provoking in the entire Winter issue, but sadly, it was at the very back of the magazine. Is that because she’s a teacher and not a policy wonk? Ms. Elden deftly explained important problems with data-driven reform efforts. Policy-makers and analysts would do well to analyze her comments carefully. Ms. Elden’s critics seemed to miss one of her key points: that math is better suited to data analysis and to the teaching of discrete skills than reading. After teaching 7th grade language arts for ten years, I know this to be true.

    I have also experienced firsthand the withering feeling that can overcome a teacher during a “data chat.” Neither meeting norms nor professional learning training can change the fact that this meeting is keeping you from preparing for your next class, answering parent emails, or copying the assessment you and your colleagues just created (not to mention the time spent scanning the assessment you just gave into the computer!). If you’re feeling exhausted and harried, griping might be all you have the energy for. Yes, we created our own assessments, too, but tired teachers can create bad questions themselves. Reading skills are particularly difficult to assess, as Ms. Elden so effectively demonstrated.

    Reformers and data-lovers would do well to keep the realities of the public school classroom firmly in mind. I went into teaching after 17 years of lawyering, believing that I could play a useful role in education policy and reform. As it turned out, the public school classroom had a lot to teach me! For starters, I had to make my own copies for every test and handout, and I was still working 12-hour days to prepare meaningful, engaging lessons and to edit and assess my students’ writing. Oh, but wait, there was more: find quality passages to use in assessing my students, create quality questions, copy the test, grade the test (a slow process if you use anything other than a multiple-choice format), scan the results into the school district’s expensive new data-crunching software, prepare my analysis for the data meeting (which the principal would attend and watch closely), and try to squeeze in a moment to rush down to the guidance counselor’s office with a student journal entry that sounds suicidal. These are the realities that the most dedicated teachers face daily.

    When a thoughtful and obviously dedicated teacher like Ms. Elden speaks up about inherent difficulties in data analysis of reading skills, policy experts should humbly and carefully attend.

