By Daisy Christodoulou, Head of Assessment at ARK Schools
From the earliest days of national curriculum levels, we’ve known that levels conceal more than they reveal. Pupils categorised as being a level 2 at age 7, for example, have been shown to have reading ages ranging from 5.7 to 12.9. Similarly, at age 11, approximately 25% of all pupils get a 5c on the KS2 reading test, meaning there is bound to be significant variation between pupils with the same level.
Lumping pupils into broad, vague categories like levels (or the popular ‘emerging, expected, exceeding’ model) practically guarantees that some pupils will be miscategorised. Pupils who genuinely struggle or excel at a subject will be placed in the same category as pupils who are much closer to average.
Take the example of a quiet, hard-working and well-behaved child with an average level. She may still have great difficulties with reading, but have developed coping mechanisms that hide her struggles. Because she is so diligent and her level looks average, a teacher may not pick up on them.
Standardised tests can help to identify such pupils. First, they report a pupil’s performance on a much finer and more nuanced scale than levels: scaled scores run from approximately 70 to 140, so individual differences can be seen with much greater clarity. Second, the structure of tests like GL Assessment’s New Group Reading Test allows you to diagnose where a pupil’s difficulties might lie – for example, when she reads, is she struggling with vocabulary, or with decoding words?
Not only that, but a wealth of research shows that teacher assessment is often biased against disadvantaged pupils and those from ethnic minorities. This research can be difficult to accept, and the researchers carrying out such studies are at pains to point out that such bias is unconscious, and that it is not particular to teachers but a feature of all human judgment. Yet those very flaws in human judgment make it all the more important to cross-reference it with external checks like standardised tests.
In practice, I’ve found that teacher and test judgments do agree in the majority of cases. But then there are an interesting handful of cases where there is a discrepancy. Some of the time, this is because the student just had a bad day when they took the test. But in other cases, the test really has picked up on something interesting that the teacher might not have noticed. Often, the test has identified those two or three ‘hard to spot’ pupils in a year group – pupils whose strengths and weaknesses, for whatever reason, had proved difficult to identify.
Even if it’s only a small number of pupils, it’s surely worth the effort to make sure we stop them falling through the cracks.
Follow Daisy on Twitter @DaisyChristo