NAPLAN, MySchool, PISA: What’s wrong with standardised testing?

By Buffy Moon

22 August 2017

Coming from a family in which academic test scores were a compulsory topic for discussion at every Christmas and birthday gathering, with an older brother who was a finely-tuned, high-stakes test-taking machine, I already had a few feelings about standardised testing coming into this workshop.

The words ‘standardised test’ instantly conjure vivid childhood memories of opening A4 envelopes, heart racing in anticipation of seeing my mathematical, writing or reading ability represented as a dot on a scale or bell curve.

It took me years after leaving school to discover and extract the deeply felt belief that my primary (if not only) worth was my ability to achieve high academic results.

Having had such a fraught experience of school, I never imagined myself as a teacher. Yet my passion for learning, history, politics, languages, literature and the joy of working with children gradually pulled me in that direction.

Teaching my first class of newly-arrived English as an Additional Language (EAL) students last year, I desperately wanted to shield them from the experience of seeing themselves placed below whatever position on the ‘achievement scale’ they had been taught to see as acceptable. Yet to my dismay, they all wanted to do NAPLAN!

‘You don’t have to do NAPLAN,’ I explained. ‘It is not compulsory for students who have been in Australia for less than a year.’

‘But we can do it, Beth!’ they insisted. The whole school was buzzing with talk of NAPLAN. Everyone was preparing and they wanted to be like all the other students.

I was impressed by their determination, and the last thing I wanted was for them to think that I had no faith in them. So I decided to offer them a trial Language Conventions test to give them a better idea of whether they wanted to sit the real thing.

A large part of this test involves identifying and correcting misspelt words. I assumed that the tedium of this experience alone would be enough to turn them off NAPLAN. For some, I was right. For others, it wasn’t until they were marking cross after cross on their practice test papers as we corrected them together in class that they began to rethink their initial enthusiasm for NAPLAN.

I was pleased to have regained time with my students to do work with more real-world relevance than NAPLAN prep, but I felt sad to have dashed their hopes of being able to present themselves to their families as high-achieving dots on a NAPLAN results report. At a deeper level, I was ashamed to belong to a society and profession that taught children to see learning and achievement in such dehumanising terms.

There are many lenses through which a standardised test such as NAPLAN can be critiqued, and the workshop organised by Fiona Taylor and Lucy Hunan offered a range of interesting viewpoints. The first presenter, Honorary Fellow at Victoria University, Neil Hooley, asked us to consider the purpose of schooling in assessing the value of standardised testing. Is the role of the school teacher simply to present students with pre-given knowledge and assess their ability to repeat it? Or should teachers provide students with opportunities to construct new understandings based on their own experiences (as progressive educational theorists like Dewey, Vygotsky and Freire affirmed)?

If students are genuinely constructing new understandings, of what value is a predetermined set of criteria with which to assess them? Even if we accept that part of school should be about acquiring knowledge constructed by people in the past, isn’t the most important thing how students adapt, apply and build on that knowledge? And what can standardised tests really tell us about their ability to do that?

The second presenter, teacher educator David Hornsby, offered a more grounded critique of NAPLAN. Drawing on the fascinating, yet easy-to-read research of education academic Margaret Wu, David explained that NAPLAN tests are unreliable measures of even the limited areas of learning they claim to assess: literacy and numeracy.

Based on calculations by ACARA (the Australian Curriculum, Assessment and Reporting Authority) obtained by Wu under the Freedom of Information Act, the random fluctuation in test scores for an individual student is estimated to be ±7 score points, which is more than one year’s average growth. Since each of the two tests carries its own ±7 points of error, the measured difference between two tests administered one year apart can be out by up to 14 points in either direction. That means a year of genuine progress could show up as no growth at all, or as three years’ growth, purely through random fluctuation of test scores.
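To make the arithmetic concrete, here is a small Python sketch (my own illustration, not Wu’s or ACARA’s calculation) that simulates two tests taken a year apart, each carrying ±7 points of random error, for a student whose true growth is one year, taken here as 7 score points:

```python
import random

random.seed(1)

TRUE_GROWTH = 7   # assume one year's average growth is about 7 score points
ERROR = 7         # ±7 points of random fluctuation on each test

def observed_growth():
    # Each test's score carries its own independent random error,
    # so the *difference* between the two tests can be off by up to ±14 points.
    error_test1 = random.uniform(-ERROR, ERROR)
    error_test2 = random.uniform(-ERROR, ERROR)
    return TRUE_GROWTH + (error_test2 - error_test1)

results = [observed_growth() for _ in range(10_000)]
print(f"min apparent growth: {min(results):.1f}")  # can look like going backwards
print(f"max apparent growth: {max(results):.1f}")  # can look like ~3 years' growth
```

On a run of ten thousand simulated students, the apparent growth ranges from roughly −7 points (going backwards) to roughly +21 points (three years’ worth), even though every simulated student made exactly one year of real progress.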

This random fluctuation is produced because of the limited number of questions that can be asked in a single test under the sweeping banners of “numeracy” or “literacy”. It is simply not possible to test all the skills students are expected to have acquired at a particular year level, and students will inevitably perform better on one set of questions than another.

This is quite a different explanation of NAPLAN test score variation from the one offered in the brochure ACARA provides to parents, which suggests that unexpectedly low scores may be caused by “illness or other distractions”. The brochure never mentions that random fluctuation of scores is inherent to the test itself (as that could undermine the government’s decision to spend millions of dollars developing and administering these tests).

Another issue David raised, which applies to any test administered within a limited timeframe, is that such tests reward quick, shallow thinking and penalise slow, careful thinking: students who take the time to think deeply about the questions simply do not manage to finish.

Bringing in questions of teachers’ rights, Deputy President of the AEU Victoria, Justin Mullaly, argued that standardised tests like NAPLAN marginalise the role of teachers in determining what we assess and how. Justin then highlighted the links between the standardisation of curriculum and commercial interests in Australia and internationally. Companies generate huge profits through the testing industry, as schools and parents scramble to give their students and children a competitive advantage over their peers.

Justin also argued that NAPLAN is a diversionary tactic of the government, as it provides a means to blame individual schools for poor student outcomes, rather than recognising disadvantage as a product of funding disparities between schools and socio-economic inequalities between students.

Finally, Justin explained that by creating the appearance of a crisis in school performance, governments can provide justification for further privatisation of the public education system, as has occurred in the US.

In the discussion that followed, participants offered a range of insights based on their experiences as teachers, student teachers and parents. We heard how schools are finding the surest way to improve their NAPLAN and VCE scores is by excluding “underperforming” students from the tests, or from the school altogether.

For me, one of the most interesting questions to be generated was around how we, as teachers, can answer calls for schools to be more accountable to their students, parents and the public, while rejecting the practice of standardised testing. Teacher and education researcher Sophie Rudolf affirmed the value of engaging students in discussions about how they know they are making progress in their learning.

Many participants agreed that students should be supported to monitor their own learning and present their achievements to their parents, peers, and broader community. This kind of assessment is empowering for students, parents and teachers alike, and helps everyone to see academic achievement as more than just a test score.