Teaching Kids How To Take Tests

By New Matilda

February 01, 2010

If an evil genius were considering ways to do significant damage to the quality of educational experiences that our children receive at school, then a public website that ranked and scrutinised individual schools on the basis of annual standardised tests would be a pretty effective (if expensive) way to go about it.

The Federal Government’s My School website allows the public to compare a school’s NAPLAN results — which are derived from an annual standardised test for students in years three, five, seven and nine — with national averages. It also makes the extraordinary claim that one can make fair "apples-to-apples" comparisons with what it refers to as "statistically similar" schools. This is accomplished by assigning each school a value that reflects factors such as income, level of education, unemployment and remoteness. These factors, drawn from ABS data, "are highly correlated with student performance", according to the Federal Government.

As a curriculum support teacher I have the opportunity to work with many teachers across several schools. Normally, when teachers sit down to plan for the upcoming term, they focus on how to engage their young students in learning: connecting with their interests, and encouraging investigation, creativity and critical thinking across a range of key learning areas. Of course, these planning sessions always include lesson plans that explicitly address literacy and numeracy.

Worryingly, the priority for teachers of students in years three, five, seven and nine has increasingly become how to improve or sustain the performance of their students in the NAPLAN tests. Thus many teachers, for a significant part of the academic year, are less focused on quality teaching and learning and more focused on teaching to a test. Students practise taking tests, learn test-taking strategies and rote-learn material that has little relevance outside the context of the NAPLAN tests. They learn how to jump through the hoops that will lead to improved test results. The consequence of all this fuss is a potential improvement in test scores, but almost certainly a decline in the quality of the learning experiences of students in our schools.

Who could blame our teachers for teaching to the test? They are certainly under increasing pressure from parents, the media and governments to lift test scores — even more so now that those results are published online. Even a scratchy performance from a single student can draw scrutiny from parents, and under this kind of pressure a one-off bad performance is very possible. Indeed, I have seen year three students too terrified to enter the classroom on the morning of the first assessment. Also, with increased scrutiny comes the possibility of teachers bending the rules when delivering the tests — with little supervision this wouldn’t be hard to do — and this brings the validity of the test data into question.

And yet the My School site encourages parents to treat NAPLAN data as the most important measure of a school’s worth, and it encourages schools to compete against each other on a single, flawed measure. This is extremely unhealthy. It promotes a kind of NAPLAN arms race in which schools will be tempted to abandon or diminish some of the great things they do in favour of whatever it takes to get better test results. This will not lead to good pedagogy — or a better learning environment. Many schools do extraordinary work under extremely difficult circumstances. They become the heart and soul of struggling communities and a haven for traumatised children who need safety and stability. They promote resilience, emotional intelligence and healthy eating. A school’s capacity to work these minor miracles is not reflected on the My School website.

Furthermore, the site divides schools into winners and losers. Because the comparisons are relative, even if every school improved its test scores, a significant number would still be labelled as having scores that are "substantially lower" than others. This is a disastrously unethical approach to improving teaching and learning.

Relying solely on NAPLAN data to measure a school’s worth is problematic for other reasons, too. The scheme makes comparisons between year levels, not between students of the same age. Queensland students, for example, are younger than students in the corresponding year levels in almost all other states and territories, and are often, on average, a good six months younger than students in the high-achieving states of Victoria, the ACT and NSW.

In fact, even as a measure of literacy and numeracy, NAPLAN is insufficient — yet in all the fuss around the website’s launch, few journalists have asked critical questions about the efficacy of the national test. How reliable is the data derived from a national test when there is no national curriculum? What are the limitations of standardised testing, particularly delivered as a one-off high-stakes assessment? And, most importantly: how does a high profile, national testing regime help students, particularly when this test is delivered in an increasingly competitive environment that pits states against states, and schools against schools? The failure to critically reflect on this last question has the potential to significantly diminish the educational experiences of students.

When introducing incentives and punishments into a system in an attempt to improve identified outcomes, it is wise to reflect on possible unintended consequences. A fellow educator told me the story of a council that attempted to improve its bus service by deducting money from the pay of drivers who returned late from their routes. The incentive had immediate success: the drivers started returning on time and were rarely late for the start of the next run.

A little digging, however, revealed that the drivers were skipping stops to stay on schedule: passengers were left stranded, senior citizens and those with disabilities were neglected, and morale was very low as no drivers wanted to take on the more difficult routes. The unintended consequence of the council’s initiative was thus an overall decline in the quality of the public transport system.

There are lessons in this for the politicians who want to publicly scrutinise and compare schools on the basis of a single assessment instrument. It may be popular with the electorate, but it will harm education.