Dr Eric Crampton | Head of Research | eric.crampton@nzinitiative.org.nz
This week’s debate around the Initiative’s report on building successful schools reminded me of watching people trying to drive cars on those old dirt tracks.

Martine Udahemuka travelled to the UK, Boston, New York, Washington DC and Houston to see how different places turned around failing schools. In DC, she learned about the IMPACT system. Principals evaluated teacher performance on a range of measures, including in-class observations of practice and measures of how well teachers assisted each other. But they also relied on student performance data.

It would be stupid to base a teacher’s performance assessment solely on how well students performed on end-of-year tests. Every classroom has a different starting point, so you would wind up unduly rewarding or punishing teachers for winning or losing that year’s classroom lottery. DC’s system instead measured what students knew at the start and end of the year. The students’ improvement was then part of the measure of teacher performance.

But it was even more clever than that. Students’ home situations can affect not only their starting points but also their ability to pick up new knowledge over the course of the year. So the DC system also took a lot of these differences into account.

New Zealand’s debates around teacher performance are stuck in tired old ruts that do not fit more modern and better ways of assessing performance. The groove on the right side of the track insists that performance measurement is vital in every other kind of employment and that teaching is no different. The groove on the left points out that basing teacher pay solely on end-of-year test scores yields perverse outcomes. Both sides of that debate are entirely correct. Old performance appraisal debates based on old performance measurement systems fit those old ruts. But they do not fit when talking about systems like DC’s.

The Initiative’s coming third report will show how New Zealand could do even better than Washington DC. Stay tuned.
We’re going to pave that old dirt track.
Amy Thomasson | Mannkal Scholar
But one philosophy we at the Initiative hope remains uncontroversial is that improving access to quality education will create a better New Zealand.

The main obstacle? New Zealand is in denial about the pervasiveness of school failure. Failure has become the status quo in some schools, and our ignorance of this is in turn failing New Zealand’s students. That is why The New Zealand Initiative’s latest report, Fair and Frank: Global insights for managing school performance, looks to the UK and the US to inform the debate about how New Zealand could identify and respond to failing schools.

Though we can all agree that providing more children with a better education is a respectable goal, there are many differing opinions about how to achieve it. This was abundantly clear in the response to Fair and Frank. Too many involved in education policy discussions have forgotten why they are having them in the first place – the students. Much of the debate after our launch focused on how a system that rewards good teachers would disadvantage teachers. Still others have taken issue with importing ideas from places that have imperfect education systems.

The Initiative did not choose the countries included in the report because they have the best test results, but because they have implemented innovative ways of identifying and reforming failing schools – challenges New Zealand knows very well. And here is at least one inconvenient truth: we are by no means perfect either. New Zealand’s average maths performance in the Trends in International Mathematics and Science Study is the lowest in the English-speaking world.

New Zealand looks to other countries for guidance in a range of other areas. Take rugby, for example – once an alien concept from the faraway shores of the UK, we now have the best team in the world. So why should education be any different?
We all agree that a high quality education is the key to unlocking the potential of young students – our future doctors, engineers and administrators. Now let’s have a debate about how we can get better at delivering it.
Jason Krupp | Research Fellow | jason.krupp@nzinitiative.org.nz
Undercutting his piece was the message that if experts just used smaller words, or spoke more slowly, disasters like the Global Financial Crisis and Brexit could have been avoided. With all due respect to Mr Eaqub, whom I know on a first-name basis, a great dollop of humility might be a better tonic. That’s because experts get forecasts wrong far more often than they get them right.

Let’s take the examples used in his column, which focuses on economists. Nouriel Roubini is famed for spotting the US housing crisis and the resulting financial collapse in 2008. But Mr Roubini has made a myriad of predictions, the vast majority of which have landed on the wrong side of the fence. His one successful prediction seems to have been heavily aided by hindsight.

Then there is Brexit. Ahead of the referendum it seemed as if every financial institution and their dog predicted Britain would be doomed if the public voted to leave the single market. The consensus was that as soon as the trigger was pulled, the economy would head the way of Greece and Argentina. How awkward for the gargle of letters (IMF, ECB, WEF, etc.) that Britain’s economy not only failed to tank immediately, but showed a marked improvement in the wake of the vote. The Wall Street Journal’s dartboard has also consistently outperformed the best economic forecasters and stock prognosticators for decades.

Which brings me back to Mr Eaqub’s point. Of course experts play an important role in public discourse. Indeed, by definition, they have greater insight into technical topics than the layperson. Undoubtedly clearer communication and broader social awareness on their part would help their cause too. But if laypeople and politicians are to trust these voices more, experts need to admit when they get things wrong. They also need to be candid about economic predictions, which for all intents and purposes are equal parts computer modelling and entrail reading.

I’d suggest taking a leaf out of Mark Carney’s book.
The Governor of the Bank of England recently admitted in public that he got his Brexit forecasts badly wrong. But then he added that it was his job to be a dour, glass-half-empty shepherd of the British economy. Now there is an expert I’ve got time for.
On The Record
All Things Considered