Over the years in which we’ve been covering the evolution of performance management, it’s always been abundantly clear to us that great care needs to be taken in determining the right things to measure. We’ve long suggested that states and local governments need to consider the set of performance measures they use as a work in progress—and be prepared to refine and alter them on a regular basis.
With that in mind, we were struck by a working paper, published by the Annenberg Institute at Brown University (to which we were alerted by an article in Education Week), that questions the value of linking school principals’ success to test scores.
As the paper concluded: “We find that using contemporaneous student outcomes to assess principal performance is flawed. Value-added models misattribute to principals changes in student performance caused by factors that principals minimally control. Further, little to none of the variation in average student test scores or attendance is explained by persistent effectiveness differences between principals.”
That’s pretty strong language.
The Education Week article puts this finding into context with a quote from Brendan Bartanen, lead author of the study and an assistant professor of education policy at the University of Virginia. “This is not a study that says that principals are not important. Principals are absolutely important,” he said, adding that “we need to be very careful about trying to infer the performance of a principal on the basis of the [test-score] outcomes of students.”
One of the paper’s major findings was that principals – while they hold a great deal of power over the functioning of a school – are not the primary drivers of test scores. As a result, they shouldn’t be penalized if test scores drop or rewarded if they rise, as is sometimes the case.
As the paper said, “Specifically, while we find meaningful within-school variation in student test score performance when comparing across principals . . . this variation is driven by transient school factors that are likely to have occurred regardless of who was leading the school. Because these school factors exhibit some persistence across years, they create the illusion of principal effects. . . “
We believe it’s inevitable that measures like this one are frequently used, without necessarily being proven the best ones, simply because the data is so easy to come by. There’s a powerful incentive for the people tasked with coming up with measurements to grasp at metrics that seem meaningful and are easy to find, rather than go through a more complicated process like seeking input. That’s understandable, but it’s a flawed approach, and one reminiscent of this old joke:
A man came across a friend of his searching the street for something he had evidently lost. “What are you looking for?” he asked.
“My keys,” the friend replied.
“Oh, and did you drop them on this block?”
“No, I think I lost them in the alley, but the light is better here.”
#PerformanceManagement #CityandCountyManagement #PerformanceMeasurement #AnnenbergInstitute #EducationWeek #SchoolPrincipals #TestScoreValidityinPerformanceMeasurement #MeasuringPrincipalSuccess #BrendanBartanen #TestScoresasaPerformanceMeasure #SchoolPerformanceMeasurement #PerformanceMeasurementPenaltiesandRewards #PerformanceMeasurementSelection #StateandLocalGovernment #StudentOutcomes #SchoolPerformance #SchoolPerformanceMetrics #BrownUniversity #PerformanceMeasurementPitfalls #StateandLocalGovernmentPerformanceMeasurement #StateandLocalGovernmentPerformanceManagement