
OpenAI’s AI-generated text detector is never technically wrong, but it’s still easy to trick


NelsonG


The words ChatGPT selected on a computer screen.

The world’s most famous chatbot, ChatGPT, was released in late November of last year. The immediate response was astonishment, followed almost immediately by terror about its ramifications, most notably that it might generate school essays for dishonest kids. Yesterday, almost exactly two months later, OpenAI, ChatGPT’s parent company, released what many users hope will be the antidote to the poison.

OpenAI’s "classifier for indicating AI-written text" is the company’s latest invention, and it’s as easy-to-use as one could want: Copy-paste text into the box, click "Submit," and get your result. But if you’re expecting a straight answer, you’re going to be disappointed. Instead, it assigns the text one of a range of classifications, from "very unlikely" to be AI-generated, to "unlikely," "unclear," "possibly," or "likely AI-generated."

In other words, it’s like one of those frustrating conversations with your doctor; you will never get a straight answer, so your doctor will never be technically wrong. 
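For the curious, that banded output reads like a thresholded probability score under the hood. Here is a minimal Python sketch of how such a mapping could work; the cutoff values are our own illustrative assumptions, not OpenAI's published numbers.

    # Minimal sketch of a five-band verdict derived from a hypothetical
    # "probability this text is AI-written" score. The thresholds below
    # are illustrative assumptions, not OpenAI's actual cutoffs.
    def verdict(p_ai: float) -> str:
        if p_ai < 0.10:
            return "very unlikely AI-generated"
        if p_ai < 0.45:
            return "unlikely AI-generated"
        if p_ai < 0.90:
            return "unclear if it is AI-generated"
        if p_ai < 0.98:
            return "possibly AI-generated"
        return "likely AI-generated"

    print(verdict(0.55))  # -> unclear if it is AI-generated

However you slice the bands, the effect is the same: the classifier never commits to a yes or a no.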

Thankfully, OpenAI is not hiding the classifier's unreliability. "Our classifier is not fully reliable," the intro page for the tool says. On what OpenAI calls a "challenge set" of texts, we're told, it gave false positives 9 percent of the time.
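To put that 9 percent in classroom terms, here is some back-of-the-envelope arithmetic of our own (the class size and assignment count are hypothetical):

    # Back-of-the-envelope math using the 9% false positive rate OpenAI
    # reports. Class size and assignment count are hypothetical.
    students, assignments, fp_rate = 30, 5, 0.09
    human_essays = students * assignments   # 150 honest submissions
    print(round(human_essays * fp_rate))    # ~14 wrongly flagged as AI

Put another way, a teacher who screened every essay could wrongly flag a dozen or more honest submissions in a single semester.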

In Mashable’s tests, however, it was even less reliable than that. In deliberately challenging scenarios, it produced false results in both directions, saying that AI-written text was unlikely to be AI-written and that human-written text was likely AI-written, almost as often as not. The difference, perhaps, is that we were trying to trick it.

Here’s how things shook out:

It doesn’t mistake the classics for ChatGPT AI outputs

First, the good news: this tool does the bare minimum pretty well by not mislabeling the masterworks of the English language as AI outputs. When we put in snippets of Shakespeare, the King James Bible, Ulysses, and others, it was 100 percent effective, labeling them all "very unlikely" to be AI-written.

This snippet from Tennyson’s "The Lady of Shalott," for instance, was classified appropriately.

A snippet from Tennyson, correctly labeled
Credit: OpenAI / Screengrab

The lesson: This machine might not notice that it has been fed a masterpiece of surpassing beauty and depth, but at least it doesn’t say, "I think a robot could have written this."

None of its other results are very impressive

Immediately after that test, we asked ChatGPT to create a Tennyson-style poem about King Arthur at Comic Con. Here's a sample of what it generated:

King Arthur and his knights, so brave and bold,
At Comic Con, their tales of glory told.
Walking 'round the floor, in cosplay gear,
Their swords and shields, no longer mere.

The crowds all cheered, as they walked by,
With Merlin close behind, his magic nigh.
Sir Lancelot, Sir Gawain, Bors the Brave,
Their quest for glory, they'll never waive.

Not bad for a robot, but still extremely bad ("no longer mere"???). When we fed this lyrical ballad into the classifier, we expected it to easily outsmart us, forcing us to dive a little deeper into our bag of tricks. Nope:

An AI poem, incorrectly labeled
Credit: OpenAI / Screengrab

For what it's worth, it didn't classify this doggerel as "very unlikely," just "unlikely." Still, it left us a little uneasy. After all, we hadn't tried very hard to trick it, and the trick worked anyway.

Our tests suggest it might bust innocent kids for cheating

School essays are where the rubber meets the road with today's malicious uses of AI-generated text. So we created our best attempt at a no-frills five-paragraph essay with dull-as-dishwater prose and content (thesis: "Dogs are better than cats."). We figured no actual kid could possibly be this dull, but the classifier correctly pegged it as human-written anyway:

A human-written essay, correctly labeled
Sorry but yes, a human wrote this. Credit: OpenAI / Screengrab

And when ChatGPT tackled the same prompt, the classifier was — at first — still on target:

An AI-generated essay, correctly labeled
Credit: OpenAI / Screengrab

And this is what the system looks like when it truly works as advertised. This is a school-style essay, written by a machine, and OpenAI's tool for catching such "AI plagiarism" caught it successfully. Unfortunately, it immediately failed when we gave it a more ambiguous text.

For our next test, we manually wrote another five-paragraph essay, but we included some of ChatGPT's writing crutches, like starting the body paragraphs with simple words like "first" and "second," and using the admittedly robotic phrase "in conclusion." The rest, though, was a freshly written essay about the virtues of toaster ovens.

This time, the classification was inaccurate:

A human-written essay, incorrectly labeled
Credit: OpenAI / Screengrab

It's admittedly one of the dullest essays of all time, but a human wrote the whole thing, and OpenAI's classifier suspects otherwise. This is the most troubling result of all, since one can easily imagine a high school student getting busted by a teacher despite not breaking any rules.

Our tests were unscientific, our sample size was minuscule, and we were absolutely trying to trick the computer. Still, getting it to spit out a perversely wrong result was way too easy. We learned enough from our time using this tool to say confidently that teachers absolutely should not use OpenAI’s "classifier for indicating AI-written text" as a system for finding cheaters.

In conclusion, we ran this very article through the classifier. That result was perfectly accurate:

An article, correctly classified
Credit: OpenAI / Screengrab

...Or was it????
