Following the publication of Databusting for Schools in July, I've continued to travel the country raising awareness and understanding of data in education. I held a Masterclass at the EduTech show at Olympia in London, titled 'An Insider's Guide to the Numbers in School', which focused on the use of standardised scores and the mathematics behind these incredibly useful statistics. I also spoke at the School Data Conference, leading a workshop on effectively analysing and interpreting data within a primary school, and led a session for the North Lincolnshire Primary Heads Consortium.
My sessions exploring standardised scores, which I covered in depth in Databusting for Schools, have been well received at conferences; the presentation I gave at researchEd's national conference ('Assessment 101 – Ten things everyone should know about assessing children') will be repeated at researchEd Durham later this month (details here), and I've written a blog for CEM which was published this week.
The piece for CEM highlights the insight which standardised scores can give you over and above raw test scores:
"If a student scores 65% on a test, what does this tell you? Is this mark good? Bad? Average? If it is deemed to be a good/bad/average mark, against whom is this judgement being made – the other children in a class, in a school, or similar children across the country?
These fairly obvious questions are what led to the development of Standardised Scores; numbers which not only tell you how a child performed in a test, but also give you some information as to where their score sits within the range of scores recorded by other children who have taken the same test.
So, if a child scored 65% on a test in which the average child scored 70%, their score might be reported as a standardised score of ‘95’; if the average child scored 60%, their score might be reported as ‘105’.
If you know that standardised scores are created such that the mean score is allocated a score of 100, that two in three standardised scores are between 85 and 115, and that 95% of scores are between 70 and 130, you can make much more sense of a child’s test score reported as a standardised score than you can from a test result reported as a percentage or a raw score."
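The scaling described in the quote — mean mapped to 100, roughly two-thirds of scores between 85 and 115, 95% between 70 and 130 — is the familiar mean-100, standard-deviation-15 scale. As a rough illustration (not CEM's actual standardisation method, which is derived from a large national sample rather than a single class), here is a minimal sketch of how raw marks could be rescaled onto that scale. The function name and the example marks are hypothetical:

```python
from statistics import mean, stdev

def standardised_scores(raw_scores, target_mean=100, target_sd=15):
    """Rescale raw test scores so the cohort mean maps to 100
    and one standard deviation maps to 15 points."""
    m = mean(raw_scores)
    sd = stdev(raw_scores)
    return [round(target_mean + target_sd * (x - m) / sd) for x in raw_scores]

# A hypothetical set of percentage marks for one class:
marks = [50, 60, 70, 80, 90]
print(standardised_scores(marks))  # the mean mark (70) maps to 100
```

In practice a score of 65% would land below 100 when the cohort average is 70%, and above 100 when it is 60% — exactly the pattern in the example above, where the same raw mark can become 95 or 105 depending on how everyone else performed.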
Interest in understanding the benefits and limitations of data in education continues to grow, helped by extremely useful insights into the national picture from Ofsted's Towards the education inspection framework 2019 and the DfE Teacher Workload Advisory Group's Making Data Work report. Against that backdrop, I'm continuing to run sessions on the current pupil performance data landscape, looking at the recent history and future direction of the use of numerical data in schools.
Please get in touch if you have any comments, feedback or requests for further information.