ChatGPT can certainly spit out an essay in time for a deadline—but, reassuringly for those of us who write for a living, the results lack a crucial human quality. Researchers at the University of East Anglia compared 145 essays by second-year British university students with 145 churned out by the AI, and found the students’ writing “significantly richer”.
The students’ essay topics were all over the show: fox hunting, beef eating, the parliamentary system, Britain’s split from the European Union, transport, computers, the lottery. The researchers prompted ChatGPT to produce similar pieces, then scrutinised both sets for quirks such as personal asides, first-person words like “we” or “our”, and direct questions—all devices that help hook readers in. This line from a student, for example, was deemed powerfully engaging: “We ought to ask ourselves ‘What happens when the computer-orientated world collapses?’ We would then have to use our brains.”
Overall, ChatGPT was a flop on engagement. “The program was unable to mirror the engaging tone of the student texts,” the researchers reported—not, one suspects, without a note of relief.
