I love a challenge. Anybody who knows me knows that I thrive on a challenge. Ever since I started my teacher training, I have always had it in my head that I wanted to be better. Teaching is a skill, Lord knows it’s a challenge, and I wanted to be better so that I in turn could help all of my students become better.
Over the past couple of months I’ve become increasingly obsessed with reading different educational blogs and following various teachers and bloggers on Twitter (more info to follow on these!). It’s been a fascinating process to read the views and experiences of others, to share best practice and to get a deeper understanding of the workings of other schools. One of the most striking (and kind of heart-warming) things I’ve encountered is the willingness of teachers to help each other at any cost. That’s one of the motivations behind starting this blog: to keep sharing best practice and helping other teachers out with the lessons I learn throughout my career.
Something that has been on my mind, particularly since delving into edu-Twitter, is how we as a profession approach data collection from students and what we then do with that data. In my school, just as I’m sure happens in most schools, summative assessments are carried out at regular intervals on all year groups throughout the academic year. Whether we love summative assessments or hate them, they do give us a set of data that we can do something with. But just how reliable is that data? How useful is it? In an education system with exam boards that pore over every single word in the paper, is the data actually helping us to help the students improve?
One of the things that the science department in my school has in place at the moment is a data analysis system for Year 11 mock exams. In December, and again in March, the Year 11s sit a mock paper in each of the sciences. Before the students sit the paper, the subject specialists go through it, pick out all of the individual topics that the paper covers, and put them on a spreadsheet. Then, as we mark each paper, we input into the spreadsheet the number of marks that students gained for each topic. The mark they got for each topic is compared against a ‘secure mark’ – basically what we would’ve liked them to get on the topic – and this identifies areas of weakness that we can use to implement targeted revision sessions.
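The logic behind that spreadsheet comparison can be sketched in a few lines of code. This is just an illustration, not our actual system; the topic names, marks and secure marks here are all invented for the example.

```python
# A minimal sketch of the mock-exam analysis: compare each topic's mark
# against a 'secure mark' and flag topics that fall short.
# All topics and numbers below are invented for illustration.

# Secure mark per topic: the score we'd like each student to reach.
secure_marks = {
    "electromagnetism": 5,
    "internal energy": 4,
    "forces and motion": 6,
}

# Marks one hypothetical student gained per topic on the mock paper.
student_marks = {
    "electromagnetism": 1,
    "internal energy": 4,
    "forces and motion": 6,
}

def weak_topics(student, secure):
    """Return the topics where the student fell below the secure mark."""
    return [topic for topic, mark in student.items()
            if mark < secure.get(topic, 0)]

print(weak_topics(student_marks, secure_marks))  # → ['electromagnetism']
```

In practice the same comparison happens across a whole class at once, so the output becomes a revision priority list per student rather than a single flag.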
I can fully see the benefits in this, and we’ve had a shed load of comments from parents about how amazing the process is because we can explicitly tell students where to focus their revision first, but I can’t help but wonder if we’re thinking about the process too simplistically. YES, they might’ve only scored 1 mark out of 7 on electromagnetism and magnetic flux density, but is it as simple as saying they didn’t know the topic content very well? Is focussing revision on those topics really going to help them in the long run? Or should they have focussed more on their algebra and being able to manipulate equations to get the 5-mark calculation correct?
My worry is that if we think too much about what our own data is potentially telling us, we’re going to miss the point with what exam boards actually want us to do. Summative data doesn’t necessarily identify weaknesses. It could be that a particular student scored 4 out of 6 marks (secure) on the questions about internal energy and the effect of temperature on gas pressure, but only because they got lucky with some of the wording that they put in. If I were to ask them different questions at a later date about the same topic, they wouldn’t necessarily do as well.
In short, I just don’t know. I’ll be the first to say that I love data and numbers, and love a good spreadsheet (who doesn’t?!), but there’s a niggling feeling in the back of my mind that my department is becoming too focussed on it. Data should be an aid, not a solution.