Fueled in part by the U.S. Department of Education’s Race to the Top program, a massive effort to overhaul teacher evaluation is underway in states and districts across the country. The aim is to ensure that evaluations provide a better indication of “teaching effectiveness,” or the extent to which teachers can and do contribute to students’ learning, and then to act on that information to enhance teaching and learning.
In October the National Council on Teacher Quality reported that nearly two-thirds of the states made changes to teacher-evaluation policies over the past three years, a stunning amount of policy activity in an area that had remained nearly stagnant for decades. Today 25 states require an annual evaluation of teachers—up from 15 two years ago—and 23 states now require evaluations to at least consider “objective evidence of student learning in the form of student growth and/or value-added test data.”
So far, most of the public debate about such reforms has focused on the technical reliability of the techniques being used to measure effectiveness, especially value-added estimates of teachers’ impact on student learning. Value-added measures rely on statistical models that examine the difference between the actual and predicted achievement of a teacher’s students given their prior test scores, demographic characteristics, and other measures in the model.
But as states and districts actually begin to adopt policies to measure teaching effectiveness, another kind of debate is now raging: How exactly should school systems use the results of their new teacher-evaluation systems? More broadly, once states and districts begin to measure effectiveness, what kinds of strategies should they adopt to increase the amount of measured effectiveness in the teacher workforce over time?
In November the Education Writers Association held a seminar on teacher-evaluation reforms for nearly 50 education journalists. The following day Julie Mack of the Kalamazoo Gazette blogged about the top “take-away messages” from the event, which featured leading reformers as well as officials from teachers unions. “A point stressed repeatedly,” wrote Mack, was that “the real point of this reform is not punitive, i.e., firing bad teachers.” Instead, she had heard, “It’s about providing teachers with better feedback, as well as the tools and support systems to help them improve.”
If so, that point seems to have been lost on state legislators. Among 17 states that the National Council on Teacher Quality examined closely for its report, 12 had adopted policies for using evaluation results to inform decisions about teacher dismissal, layoffs, or tenure. At the same time, “Many states are only explicit about tying professional development plans to evaluation results if the evaluation results are bad.”
Experts observe a similar trend at the school-district level. According to Education Resource Strategies, a nonprofit organization that works with urban districts to improve use of resources for teaching and learning:
Even when districts and schools have good evaluation information, they usually use it narrowly, focusing primarily on remediation and dismissal. These districts are missing an opportunity to ... help leverage their highest performers and help teachers with strong potential grow into solid contributors.
Underneath the confusion about what the reforms are really about lie two very different types of strategies for boosting teaching effectiveness in the workforce. The first strategy can be called “movin’ it” because it treats a teacher’s effectiveness as fixed at any given point in time, then uses selective recruitment, retention, and “deselection” to attract and keep teachers with higher effectiveness while removing teachers with lower effectiveness. The resulting “churn” in the workforce raises the average level of effectiveness over time. State policies that base decisions about tenure, layoffs, and dismissal on results of the new evaluations are all “movin’ it” strategies, as are any financial or other incentives to attract or retain highly effective teachers.
In contrast, “improvin’ it” policies treat teachers’ effectiveness as a mutable trait that can be improved over time. When reformers talk about providing all teachers with useful feedback following classroom observations, or about using evaluation results to individualize professional development, they are referring to “improvin’ it” strategies. If enough teachers improved their effectiveness, the accumulated gains would raise average effectiveness across the workforce.
In reality, nothing about either strategy precludes the other. Instead of treating them as “either/or” choices, smart school systems would combine “movin’ it” and “improvin’ it” policies to maximize gains in teaching effectiveness. In fact, evidence suggests that high-improving and high-performing schools manage to do just that.
Yet some of the nation’s most influential “movin’ it” proponents repeatedly argue that investing in “improvin’ it” strategies would be a waste. They cite research showing that professional development does not significantly improve teaching effectiveness and student learning, and they argue that even if there were good approaches, school districts would not know how to implement them reliably at scale.
Those skeptics have a point. There are very few convincing studies showing that professional development works, and two federally sponsored experimental studies of well-designed programs yielded disappointing results. Yet over the past two years, respected researchers have begun to publish a new crop of well-designed studies that do show substantial improvements in teaching and learning from some forms of professional development.
Policymakers at all levels should seize the opportunity to move beyond the false choice at the heart of this debate and encourage school systems to maximize gains in teaching effectiveness by leveraging a combination of “movin’ it” and “improvin’ it” policies. But that will require leaders at all levels of education to finally confront the long-known fact that the nation’s school systems spend billions of dollars annually on wasteful and ineffective professional development.
Federal and state policymakers should incentivize school systems to eradicate ineffectual and unproven professional development and invest in proven models. And because even good models can run into implementation hurdles, they should ask school systems to describe how they will anticipate and prevent hurdles while supporting, overseeing, and monitoring professional development to ensure that it gets the results it should.
Districts should conduct comprehensive audits of all of their investments in professional development to determine whether each investment, and all investments taken together, provides real opportunities for teachers to improve—no matter what their current level of effectiveness. Finally, states and districts implementing new evaluation systems should take every step possible to ensure that the feedback teachers receive from evaluations is as valuable as teachers have been promised. If reformers and education leaders fail to deliver on even that very basic pledge, the current “big bang” of teaching-effectiveness reforms could very well collapse in a “big crunch.”