Monday, January 16, 2012
Educational Moneyball: Why Value-Added Analysis Will Fail
My students are all on the far side of screwing up in school and if you ask them what went wrong in their education, their answer is almost always a variation on a theme. Somewhere along the line, they each ran into the teacher who wrecked education for them. The teacher who called them an animal, or the teacher who was never there, or the teacher who told them that they were stupid, or (more common than you would think, even in South Los Angeles) the teacher who told them that they weren't going to succeed anyway because they were poor and black or poor and brown or poor and from a Spanish-speaking home.
A bad teacher most likely won't destroy the life of a student who isn't already riding wildly along the rim of disaster, but a shocking number of kids are doing just that and a bad teacher can push them right over the edge.
And all of us in the profession know that a single good teacher is rarely enough to pull them back.
Everybody agrees that teachers matter. Teachers' unions say that teachers matter. Experts say that teachers matter. Students say that teachers matter. Parents say that teachers matter. Left, right, and center all acknowledge that bad teachers do damage, and all agree that good teachers can change lives.
But the question remains, how do we legitimately judge teacher efficacy?
A few days ago, a wide-ranging longitudinal study on the long-term effects of teacher quality was released by the thinking people of Harvard University. This study claims that a single effective teacher in 4th grade can increase a student's lifetime earnings by 1.5%, and that a single ineffective teacher can do damage equivalent to a student missing 40% of a school year. The study used the value-added metrics first popularized by the Los Angeles Times and then partially adopted by the Los Angeles Unified School District to determine teacher quality.
Essentially, value-added is a statistical model that incorporates year-by-year gains for individual students on standardized tests and then controls for factors such as race, economic station, transiency, English-language fluency, previous schools, and gender. It is only useful for longitudinally tested subjects, which currently are only math and English. It is an effective measure of testing improvement for students in 3rd through 11th grade if one believes that the only valuable measure of success is student improvement in math and English testing.
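For readers who want the mechanics, here is a deliberately simplified sketch of the idea in Python. The data, the variable names, and the two-step model are all hypothetical illustrations, not the actual LAUSD or Harvard methodology: regress each student's current score on last year's score and a demographic control, then attribute the average leftover (the residual) of each teacher's students to that teacher.

```python
# Toy sketch of a value-added calculation. All data is simulated and the
# model is deliberately simplified; real implementations differ.
import numpy as np

rng = np.random.default_rng(0)

n = 300
prior_score = rng.normal(50, 10, n)       # last year's test score
english_learner = rng.integers(0, 2, n)   # one example demographic control
teacher = rng.integers(0, 3, n)           # students assigned to 3 teachers

# Simulate this year's scores: growth from last year, a demographic
# effect, a per-teacher effect, and plain noise.
true_teacher_effect = np.array([-2.0, 0.0, 2.0])
score = (0.9 * prior_score + 5.0
         - 3.0 * english_learner
         + true_teacher_effect[teacher]
         + rng.normal(0, 4, n))

# Step 1: regress current score on prior score and the control.
X = np.column_stack([np.ones(n), prior_score, english_learner])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Step 2: a teacher's "value added" is the mean residual of her students --
# how far above or below the model's prediction they landed, on average.
residual = score - X @ beta
value_added = np.array([residual[teacher == t].mean() for t in range(3)])
print(np.round(value_added, 2))
```

Even this toy version makes the limits visible: everything the regression does not control for, including the other adults teaching those same students, ends up in the residual and gets credited to one teacher.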
And the truth is that it does, indeed, help show that some teachers are more effective than others. It is the educational equivalent of sabermetrics, and now we have a generation of scholastic Billy Beanes who think that they've found a solution.
But much like Beane's Oakland A's, value-added is not the tool that is going to win the last game of the season.
Math and English scores may be the on-base percentage of the teaching world, but there is much more to both baseball and teaching than simple numbers, because in both cases the numbers are overwhelmed by the simply unquantifiable human element.
A value-added analysis will not show which teachers were able to get their students to think and a value-added analysis will not reveal which students were inspired. A value-added analysis is not applicable for those who choose to work with untestable children in special education or alternative settings and a value-added analysis cannot touch subjects such as social studies, art, business, or the sciences.
Oh, and value-added analysis relies on standardized tests that most people agree are deeply flawed.
A reliance solely on value-added analysis, as is the case with the Harvard study, ignores the fact that education is a group effort. Just as sabermetrics cannot assess team chemistry, value-added cannot parse the influence of other teachers who have helped improve the literacy and numeracy of the tested teacher's students. Value-added analysis assumes that education happens in a vacuum and that no other person is meddling in the minds of students. But my students are learning from a half-dozen other teachers at the same time they are learning from me and my efficacy is as much a reflection of their efforts as it is of my own.
This is why value-added cannot be the winning formula in education.
But we still need to be able to determine which teachers are effective. We all agree. I have my own thoughts on this (please read -- I think my "1% for Teachers" plan is a real solution), but right now I am much more concerned that some of our greatest thinkers are already satisfied that they've found an answer in the value-added analysis.