By Lisa Godfrees
We’re writers. At some point in our careers, we entered a writing contest.
Some of us found encouragement there. Maybe we received kind remarks from a judge that gave us courage to continue. Maybe we semi-finaled, finaled, or won. Maybe an agent contacted us because they were impressed with our entry.
Some of us came away discouraged. Maybe a judge was particularly harsh. Maybe our scores were low. Maybe we weren’t ready to receive criticism. Maybe we decided to give up on writing…or on contests.
But I think we all share one thing in common: confusion over what to make of our scores. We enter contests because we want feedback on our writing. Sometimes we receive written comments, sometimes only scores.
For the first writing contest I entered, one judge gave me 99.9, another 88, and the third 59.5. I equate that to an A+, a B+, and an F. That's a pretty large spread. What did it mean? Was one judge too easy? Was one judge having a bad day? What can we glean from disparate results such as these?
Since then, I’ve had the opportunity to coordinate two contests for our local ACFW chapter. Being on the other side of a writing contest provides new perspective…and data. Last year, we hosted a contest where experienced judges (published authors, editors, publishers) graded each entry from 1 to 10 in fourteen separate categories. Because I love-love-love data, I analyzed the variability in scoring per story and per judge. Here are my general impressions:
• Stories with the highest average score showed the least variability (5-10%) between scores. As the average score dropped, the variability between scores increased (up to 40%).
What this means: Judges have an easier job agreeing on good writing. The better the submission, the closer the consensus between judges.
• Seven judges (#1, 2, 17, 30, 31, 34, 35 in the chart below) awarded different entries very similar scores. The remaining judges showed a much wider variation in their story-to-story scoring.
What this means: How you place in a contest depends to some extent on your judges. Some judges score everything consistently. In our contest, two judges (#1 and #2) scored consistently low and two (judges #34 and #35) scored consistently high.
• Judges are unlikely to score any category below 4 on a 10-point scale. In the chart below, I counted each time a score was given for any entry in any category. A score of 1 was given once, while a score of 10 was awarded almost 350 times.
What this means: Much like with employee evaluations, no one wants to use the bottom end of the scale. For this reason, a 10-point scale is better than a 5-point scale for contests because it allows judges to more easily differentiate between entries when scoring.
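For fellow data junkies, the spread comparisons above can be sketched in a few lines of code. The idea is to measure each story's score spread relative to its average (the coefficient of variation) and to tally how often each score value was awarded. The scores below are made-up illustrations, not the actual contest data:

```python
from statistics import mean, pstdev
from collections import Counter

# Hypothetical (story, judge, score) rows -- invented for illustration.
scores = [
    ("A", 1, 9), ("A", 2, 10), ("A", 3, 9),   # strong entry: tight spread
    ("B", 1, 5), ("B", 2, 8),  ("B", 3, 4),   # weaker entry: wide spread
]

def variability(values):
    """Coefficient of variation as a percentage: spread relative to the mean."""
    return 100 * pstdev(values) / mean(values)

# Group scores by story and report each story's variability.
by_story = {}
for story, judge, score in scores:
    by_story.setdefault(story, []).append(score)

for story, vals in by_story.items():
    print(story, round(variability(vals), 1))  # A ~5%, B ~30%

# Count how often each score value was given (the score-frequency chart idea).
tally = Counter(s for _, _, s in scores)
```

With these made-up numbers, the strong entry lands around 5% variability and the weak one around 30%, echoing the 5–10% versus up-to-40% pattern described above.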
For contest entrants: Judging is subjective and therefore prone to variability. If you receive scores spread across a broad range (like mine), you are not alone. What it means is that you’re headed in the right direction but you’re not quite there yet. Be encouraged by the high scores and motivated to work harder by the low ones.
For contest judges: Take advantage of the full point scale and make sure to stick as closely to the grading rubric as possible. If you have to score someone low, explain why to take the sting out of it. Instead of giving high scores as a means of encouragement, grade fairly and use your comments to encourage.
For contest coordinators: The 10-point scale is your friend and so is the contest data. Look for judges who score consistently too high or too low and consider not using them in the future. Give clear instructions on your grading rubric to make judging less subjective.
If you’re a data junkie and want to see more charts and graphs, contact me. I’m happy to share. 😉
LISA GODFREES, self-proclaimed data junkie, worked for over a decade in a crime lab as both a DNA analyst and a compliance manager. Tired of technical writing, she hung up her lab coat to pen speculative fiction while taking classes at Dallas Theological Seminary. Her first manuscript was a 2013 Genesis finalist.