
Robo-Graders

By Sdlong

I was wrong about the mechanization of student writing. I had assumed another year or two would pass before MOOCs began using essay-grading software. Turns out it’s happening now. EdX, founded by Harvard and MIT and probably the most prestigious online course program, has announced that it will implement its own assessment software to grade student writing.

Marc Bousquet’s essay successfully mines the reasons why humanities profs are anxious about algorithmic scoring. The reality is, across many disciplines, the writing we ask our students to do is “already mechanized.” The five-paragraph essay, the research paper, the literature review . . . these are all written genres with well-defined parameters and expectations. And if you have parameters and expectations for a text, it’s quite easy to write algorithms to check whether the parameters were followed and the expectations met.
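
To make that concrete, here is a minimal sketch in Python of what “checking the parameters” of a five-paragraph essay might look like. The thresholds and patterns are mine, invented for illustration; no real grader is this naive, but the principle is the same.

    import re

    # Toy parameters for a five-paragraph essay -- illustrative thresholds only.
    MIN_PARAGRAPHS = 5
    TRANSITIONS = ("however", "moreover", "for example", "in conclusion", "therefore")

    def check_parameters(essay: str) -> dict:
        """Run a few surface-level checks on an essay (a sketch, not a real grader)."""
        paragraphs = [p for p in essay.split("\n\n") if p.strip()]
        sentences = re.split(r"[.!?]+\s+", essay.strip())
        words = essay.split()
        return {
            "has_five_paragraphs": len(paragraphs) >= MIN_PARAGRAPHS,
            "word_count": len(words),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "uses_transitions": any(t in essay.lower() for t in TRANSITIONS),
            "cites_a_source": bool(re.search(r"\(\w+,?\s*\d{4}\)", essay)),  # e.g. (Smith, 2012)
        }

None of these checks knows whether an argument is any good; they only confirm that the expected surface features are present. That is exactly what makes formulaic genres machine-checkable.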

The only way to ensure that a written product cannot be machine graded is to ensure that it has ill-defined parameters and vague or subjective expectations. For example, the expectations for fiction and poetry are highly subjective—dependent, ultimately, on individual authors and the myriad reasons why people enjoy those authors. It might be possible to machine grade a Stephen King novel on its Stephen-King-ness (based on the expected style and form of a Stephen King novel), but otherwise, it will remain forever impossible to quantitatively ‘score’ novels qua novels or poems qua poems, and there’s no market for doing that anyway. Publishers will never replace their front-line readers and agents with robots who can differentiate good fiction from bad fiction.

However, when we talk about student writing in an academic context, we’re not talking about fiction or poetry. We’re talking about texts that are highly formulaic and designed to follow certain patterns, templates, and standardized rhetorical moves. This description might sound like fingernails on a chalkboard to some, but look, in the academic world, written standards and expectations are necessary to optimize for the clearest possible communication of ideas. The purpose of lower division writing requirements is to enculturate students into the various modes of written communication they are expected to follow as psychologists, historians, literary critics, or whatever.

Each discourse community, each discipline, has its own way of writing, but the differences aren’t anywhere near incommensurable (the major differences exist across the supra-disciplines: hard sciences, soft sciences, social sciences, humanities). No matter the discipline, however, there is a standard way that members of that discipline are expected to write and communicate—in other words, texts in academia will always need to conform to well-defined parameters and expectations. Don’t believe it? One of the most popular handbooks for student writers, They Say/I Say, is a hundred pages of templates. And they work.

So what’s my point? My point is that it’s very possible to machine-grade academic writing in a fair and useful way because academic writing by definition will have surface markers that can be checked with algorithms. Clearly, the one-size-fits-all software programs, like the ones ETS uses, are problematic and too general. Well, all that means is that any day now, a company will start offering essay-grading software tailor-made for your own university’s writing program, or psychology department, or history department, or Writing Across the Curriculum program, or whatever—software designed to score the kind of writing expected in those programs. Never bet against technology and free enterprise.
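
The tailoring itself wouldn’t be exotic, either. Much of it could amount to a per-program configuration that swaps in whichever expectations a department cares about. A hypothetical sketch (the section names, citation styles, and thresholds here are my own invention, not any vendor’s product):

    # Hypothetical per-department rubrics: one checking engine, different expectations.
    RUBRICS = {
        "psychology": {
            "required_sections": ["abstract", "method", "results", "discussion"],
            "citation_style": "APA",
            "min_sources": 5,
        },
        "history": {
            "required_sections": ["introduction", "conclusion"],
            "citation_style": "Chicago",
            "min_sources": 8,
        },
        "first_year_writing": {
            "required_sections": ["introduction", "conclusion"],
            "citation_style": "MLA",
            "min_sources": 2,
        },
    }

    def missing_sections(essay: str, department: str) -> list[str]:
        """List required section headings the essay never mentions (toy check)."""
        text = essay.lower()
        return [s for s in RUBRICS[department]["required_sections"] if s not in text]

The discipline-specific part is mostly data, not algorithm, which is why tailoring a grader to a department is more a business decision than a technical one.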

And that’s another major point—there’s not a market for robot readers at publishing firms, but there certainly is a market for software that can grade student writing. And wherever there’s a need or a want or some other exigence, technology will fill the void. The exigence in academia is that there are more students than ever and less money to pay for full-time faculty to teach these students. Of course, this state of affairs isn’t an exigence for the Ivy League, major state flagships, or other elite institutions—these campuses are not designed for the masses. The undergraduate population at Yale hasn’t changed since 1978. A few years ago, a generous alumnus announced his plans to fund an increase in MIT’s undergraduate body—by a whopping 250 students. Such institutions will continue to be what they are: boutique experiences for the future elite. I imagine that Human-Graded Writing will continue to be a mainstay at these boutique campuses, kind of like Grown Local stickers are a mainstay of Whole Foods.

For the vast majority of undergraduates—those at smaller state colleges, online universities, or those trying to graduate in four years by taking courses through EdX—machine-grading will be an inevitable reality. Why? It answers both parts of the exigence I mentioned above. It allows colleges to cut costs while simultaneously making it easier to get more students in and out the door. Instead of employing ten adjuncts or teaching associates to grade papers, you just need a single tenure-track professor who posts lectures and uploads essays with a few clicks.

So, the question for teachers of writing (the question for any professors who value writing in their courses) is not “How can we stop machine-grading from infiltrating the university?” It’s here. It’s available. Rather, the question should be, “How can we best use it?”

Off the top of my head . . .

Grammar, mechanics, and formatting. Unless we’re teaching ESL writing or remedial English, these aspects tend to get downplayed. I know I rarely talk about participial clauses or the accusative case. I overlook errors all the time, focusing instead on higher-order concerns—say, whether or not a secondary source was really put to use or just quoted to fill a requirement. However, I don’t think it’s a good thing that we overlook these errors. We do so because there are only so many minutes in a class or a meeting. With essay-grading software, we can bring sentence-level issues to students’ attention without taking time away from higher-order concerns.
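
Here is the kind of sentence-level pass I have in mind, as a sketch; the patterns are toy heuristics I made up, not a claim about how any commercial product works:

    import re

    # Toy mechanics checks -- illustrative patterns only.
    CHECKS = [
        (r"\b(\w+)\s+\1\b", "repeated word"),
        (r" {2,}", "doubled space"),
        (r"\b(is|are|was|were|been|being)\s+\w+ed\b", "possible passive construction"),
        (r"[^.!?]{250,}[.!?]", "very long sentence"),
    ]

    def flag_mechanics(essay: str) -> list[tuple[str, str]]:
        """Return (label, excerpt) pairs for every toy pattern that matches."""
        issues = []
        for pattern, label in CHECKS:
            for match in re.finditer(pattern, essay, re.IGNORECASE):
                excerpt = essay[max(match.start() - 20, 0):match.end() + 20]
                issues.append((label, excerpt.strip()))
        return issues

Students get the flags immediately; the instructor’s margin comments stay focused on argument and evidence.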

Quicker response times for ESL students, and, perhaps, more detailed responses than a single instructor could provide, especially if she’s teaching half a dozen courses. Anyone who has tried to learn a second language knows that waiting a week or two for teacher feedback on your writing is a drag. In my German courses, I always wished I could get quick feedback on a certain turn of phrase or sentence construction, lest something wrong or awkward get imprinted in my developing grammar.

So, I guess my final point is that there are valid uses for essay-grading software, even for those of us teaching at institutions that won’t ever demand its use en masse. Rather than condemn it wholesale, we (and by we, I mean every college, program, professor, and lecturer) should figure out how to adapt to it and use it to our advantage.

