QSA system encourages teacher cheating

I have been teaching science in Brisbane for 15 years, and I have worked in the state and independent sectors. A very important issue that I think needs to be raised about the current QSA assessment system is that it allows, and in fact fosters, teacher cheating.

There is tremendous pressure on teachers to ensure student work passes the moderation process without being lowered. Parents love good grades, and nothing looks worse for the school than when A students are lowered to A- or B level. Most teachers are very honest, but some I have observed manipulate the system to make sure their work gets through the lottery of the moderation process without being savaged. These teachers are motivated not by self-interest but by self-preservation and fear of a moderation process that is byzantine and arbitrary.

There are several strategies I have seen colleagues use:

1. Coaching
In Queensland, teachers write and grade the tests. A common criticism fed back from review panels is that the questions are not hard enough, which is then used as a reason to move students down. So teachers put in very hard questions, but coach the students in how to answer them. I have even seen teachers give students the test questions in advance to study. Obviously no mention of this is made to the panel. The panel sees really hard questions answered very well: what a great job that teacher must be doing! Unfortunately, coaching students through the answers to these tough questions does not necessarily build broad understanding of the important concepts.

2. Bait and switch
Another way teachers make their submissions look better is to manipulate the test conditions. This can involve giving the students more time, or setting the test as “open book” without mentioning this in the submission to panel. I once worked with a teacher who would let the students work on the test, with the help of their textbooks and notebooks, for as long as they needed to finish it. I even overheard him telling students how to answer the questions. That teacher was the review panel chair for our region. At first I was amazed at the results he was getting out of his students, until I realised how he got them. When I challenged him over the issue he just laughed it off, telling me that everybody did it and that I was disadvantaging my students by not doing the same.

3. Practice makes perfect
In Queensland, assignments (ERTs and EEIs) play a very important part in determining a student's grade. It is impossible to determine how much help students have received from teachers and parents in completing these tasks. I know many teachers heavily edit student drafts in order to improve their standard. This is usually done quite openly, and in fact is usually encouraged by the school. So how much of the final draft is the student's own work?

4. Panel magic
For years I could never understand why my submissions would be moved at panel. The advice from year to year would often be contradictory, and I would spend hours trying to figure out how to do it properly. Finally I joined the panel, and suddenly my submissions sailed through without problems! The review process is not anonymous, and when the other panelists know you they are reluctant to move your students. The review process is very subjective, and schools often get judged more on their reputation than on the student work.

5. The Trojan
A panel submission does not contain a random sample of students. In fact, the samples sent off are selected by the teacher, with the intention that they represent the other students on the same level. What teachers will do is send off a really good example of a VHA, while giving much weaker students the same grade. These less deserving VHAs are never seen by the panel. Teachers, especially at private schools, are under pressure from parents for good grades. This way, borderline students are secretly given “the benefit of the doubt”.

For the first 10 years of my career I worked in a fancy private school. The pressure for good grades was intense, and teachers and the school used every trick in the book to get the best for their students. Five years ago I made the switch to the state system, and I am often amazed at how honest (to the point of being naive) the teachers are. There is still assessment fraud, but it is much more subtle and ad hoc, primarily because there is less pressure from parents for good grades.

Assessment fraud is real, and there is very little the QSA assessment system does to prevent it. In fact, the combination of fear, pressure, confusion, and lack of oversight means the current system encourages teacher cheating. 

There is a very simple, in fact blindingly obvious, solution to the problem: external assessment.

QSA Issues

As a result of my experience of last year's verification, which I describe below, I would like to suggest that the only certain way to ensure fair and accurate assessment is to have external assessment, either in whole or in part. I say this coming from a UK background, where I taught A level physics for ten years with entirely external assessment and enjoyed the freedom to concentrate on teaching to a high standard, without the constant stress and demands on my time of incessantly setting and marking assessments.

Let me share the nightmare I experienced over verification in 2011. I awarded my top two physics students VHA 8 and VHA 7 respectively. These were not inflated grades; I had considered placing them higher. At verification both students were moved down to HA 3! Yes, ‘HA’ 3, a drop of 14 rungs! I was in total shock, stunned; I could not believe it. As you can imagine, I then spent considerable time thoroughly reviewing the panel's comments, my assessment instruments, my marking, everything. How could I have got it so wrong? My confidence as a competent teacher was severely shaken, even with thirty years' experience. Without going into all the details, after my review of the material I was confident (as confident as one can be in this vague, confused, contradictory system) that I was right in my original placement of these students. After a lengthy and detailed discussion of the instruments and the students' scripts, the panel chair agreed to reinstate them at VHA 6 and VHA 5, back up 13 rungs. At this point I decided discretion was the better part of valour, accepted my gains, and did not point out that QSA instructs panel members not to move students by fewer than three rungs if they are to be moved at all. So there we were: from VHA 8 to HA 3 and back to VHA 6. Incredible!

However, this was not the end of the nightmare! I teach the same two students in Maths B and awarded them VHA 8 and VHA 6. At verification the panel moved them down to HA 10 and HA 9; on appeal they were reinstated at VHA 7 and VHA 5! It was the same dreadful, emotionally draining mess all over again.

[Section removed to protect privacy. The author states that these were brilliant students who received many awards and scholarships.]

The point of bringing all this to your attention is to illustrate the gross failings of this cumbersome, time-wasting system. Judging by what I hear from colleagues, these two examples are not isolated. What would have happened to these two students had I not been able to successfully contest the panel's decisions? How could the system have got it so monumentally wrong? Every year one waits with apprehension for what the lottery of verification will return.

It would appear that the system of internal assessment, panel moderation and verification, much vaunted by QSA, is at best muddled, poorly managed, variable from region to region, and open to subjective interpretation. At worst, it is a huge waste of teachers' valuable time, prone to gross inaccuracies, and leads to a lowering of standards.

So, are there solutions? The notion of using criteria has some merit and is not the problem in and of itself. It is a good thing to assess criteria that are central to being able ‘to do’ physics, chemistry, Maths B, etc. As I see it, there are two central problems and a number of peripheral issues:

The first central problem is that of writing assessment instruments. The difficulty, and the inordinate amount of time needed, to write quality instruments that adequately assess the breadth and depth of the required criteria to the satisfaction of QSA make the job daunting, to say the least.

The second problem is that of marking: interpreting whether or not a student's response meets such and such a criterion to such and such a standard. This is primarily because, by their very nature, criteria statements are, at best, imprecise and open to subjective interpretation by teachers and panels alike. At worst they are unclear, confusing and very difficult to apply accurately. Again, a huge amount of time is taken in trying to do the job as accurately as possible.

Peripheral issues include situations such as:

If a student fails to attempt a question on a certain criterion altogether, what grade should he be awarded? It cannot be an E, as he has not met even that standard.

Or what if a student has a set of marks for, let's say, the Knowledge and conceptual understanding criterion, such as:

Out of four C-grade questions he got one completely correct; the other three were nonsense or not attempted and were awarded no grade.

Out of three B-grade questions two were completely correct; the other was not attempted.

Out of two A-grade questions one was done poorly and awarded a C; the other was not attempted.

How do you award an overall grade for that combination? In the old days the marks would simply be totalled, but what does one do with that variety of grades?
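To make the contrast concrete, here is a minimal sketch in Python. The point values and the way the questions are scored below are hypothetical, invented purely for illustration; they are not QSA's actual scheme. Summing numeric marks always yields one defined number, whereas there is no agreed rule for collapsing that mix of letter grades into a single criterion grade.

    # Hypothetical illustration only: the mark values below are invented,
    # not QSA's actual scheme.

    # Old system: every question carries numeric marks, so combining is trivial.
    old_marks = [4, 0, 0, 0,   # four C-grade questions: one correct, three zero
                 6, 6, 0,      # three B-grade questions: two correct, one blank
                 3, 0]         # two A-grade questions: one partial credit, one blank
    total = sum(old_marks)     # one unambiguous number

    # Criteria system: each question yields a letter (or nothing), and there is
    # no defined rule for collapsing this mix into an overall criterion grade.
    letter_results = ["C", None, None, None,   # the four C-grade questions
                      "B", "B", None,          # the three B-grade questions
                      "C", None]               # the two A-grade questions

    # Any rule one might invent (majority? best? worst? weighted?) gives a
    # different answer, which is exactly the subjectivity complained of above.
    answered = [g for g in letter_results if g is not None]
    print(f"Old system total: {total}")
    print(f"Letter grades to combine: {answered} -> overall grade: ?")

Whatever rule a teacher privately adopts, another teacher or panel may adopt a different one, so the same set of responses can legitimately yield different overall grades.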

Another issue with criteria/letter grading is how the final verification grade is determined.

If criteria grades are awarded on an A, B, C basis, i.e. the student either has or has not met the standard for an A-grade criterion, how can a student be judged a VHA 6 as opposed to, say, a VHA 3? Again we are back to vague, subjective, uncertain decisions.

In the light of all this, I propose that the only way to overcome all these uncertainties is to use external assessment, where instruments are written by people who have the necessary time, and who have had adequate training and experience in writing instruments that properly assess the required criteria. Similarly, marking should be done by people who have had adequate training in understanding and interpreting the criteria, and who will bring consistency and fairness to all students' results.

In this way we could hope to avoid the huge range of interpretations of criteria and marking procedures that one hears of regularly from colleagues in different regions. The constant reinventing of the wheel from school to school, with its associated waste of time, would be avoided; and, most of all, teachers would be freed up to do what they do best: teach.