This is the time of the year when both good practice and legislative requirements have schools focusing on their achievement and other student outcomes data. We are looking to see where we have made the biggest impacts so we can celebrate them. We are also looking to see if there are groups of students, areas of our programmes, or parts of our school that are not making the progress we had hoped for. School leadership teams often spend a lot of time crunching numbers, turning the huge mass of collated information into something meaningful, and enabling colleagues, staff, teachers, Boards, and sometimes even students to make sense of it. The question is: are we focusing on the right things?
When we are looking at data, we need to consider some important factors
Student Management Systems (SMSs) are making the analysis process a bit simpler if they are used effectively, and if the information being stored and collated is numerical. Often, though, SMSs are simply repositories for demographic information and a place to keep test scores, NCEA data, and Overall Teacher Judgments (OTJs). A basic understanding of spreadsheets certainly helps make sense of data at scale, and means different theories can be explored and different views of the data used to try to understand what it really means.
Being “data literate” also means being able to choose appropriate presentation formats for the kinds of data being shared. A table may show everything, but can be confusing and significant things can be lost. A graph is good for showing differences, but attention should be paid to using an appropriate scale, for instance.
Lack of experience with data analysis can lead to incorrect assumptions, for example, the assumption that a numerical difference is a significant one. By this I mean checking that the differences in the numbers could not simply be attributed to chance. If the numbers are small, then the so-called margin of error can be quite large. Think about political polls, for example, which often quote a margin of error of plus or minus something like 3.5%. This means the actual results could be expected to be up to 3.5 percentage points higher or lower, and that is with surveys of over 1000 people. In the school context, we may well be talking about samples of less than 10% of this size. With a sample of 50 or fewer, a variation of 20-30% in scores could, in fact, be expected simply by chance. This is particularly true if our confidence in the accuracy of the measure being used is not high.
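To see why smaller samples carry bigger margins of error, here is a rough sketch using the standard 95%-confidence formula for a proportion. The function name and the sample sizes chosen are mine, not from any school dataset; real assessment scores are not simple proportions, so treat this as an illustration of scale, not a method.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    measured on a random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1000 sits near the familiar +/- 3%; a class-sized
# sample of 25-50 can swing by 14-20 points purely by chance.
for n in (1000, 100, 50, 25):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
```

With two groups of 50 being compared, each side can wander by roughly 14 points, which is why a 20-30% gap between them can still be chance rather than a real difference.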
In the school context, we also know that comparing one year group with another is not comparing like with like. Different year groups can have completely different compositions and the students can vary wildly in their engagement, confidence, and ability in different components of the curriculum we may be assessing and tracking.
Positioning assessment data in the decision-making process
‘Data-driven practice’ and ‘data-informed decision-making’ have become real buzzwords in recent times. Both these things require consideration of the factors outlined above. They also require that we position assessment data in a way where it is not the sole determining factor in what we do.
In the same way that good assessment practice means a single test score is not the only indicator for an OTJ, analysis of OTJ data is not the only indicator of schools achieving successful outcomes for their students, or, indeed, of teachers being successful in their settings. Any assessment should be a point-in-time litmus test of the outcomes being aimed for, not the only criterion. Effective schools and individual educators know a lot more about their students, collectively and individually, than can ever be captured in a single number or set of numbers. Student outcomes over time are not always well represented on a graph.
I like to think of the things we can put a number on, and therefore ultimately turn into some sort of graph or table, as the proverbial tip of the iceberg. There are so many other things that make up student achievement, outcomes, and success that are ‘below the surface’, but are nonetheless hugely significant:
These factors below the waterline are things that the ‘tip of the iceberg’ factors can point to, but often the link is not a very strong one. They may also be things that the whānau or culture your students (or a group of students) come from values more highly than those above the surface.
As a parent, I am way more proud of my own kids being good people than I am of any of their academic outcomes. I would think many families take a similar perspective.
So, I guess my challenge in this blog post is to consider several different things as we bring our focused attention to the year’s data and information and begin making decisions about where we need to focus our programmes and improvement efforts next year:
- Are we examining data in an appropriate way?
- Are we reading too much into the numbers?
- Do the numbers show what we are claiming they do?
- Are the important things captured in the numbers, or, are there other key things that cannot be shown by numbers alone?
- Are we using the best data and information that we can in our decision-making processes?
- Are the conclusions we are drawing true for all students and groups of students?
If you would like support thinking about these things more deeply, and/or planning your PLD response to what you have found, do contact us at CORE Education.
Greg Carroll, 22 November 2016