It’s official: No tool can reliably spot AI-written text, says OpenAI after abandoning its detection project

Could you confidently spot a text written by AI versus a human? Well, as of now, not even OpenAI—the creator of ChatGPT—can do it reliably. Their latest announcement brings a reality check for all who hoped technology might police itself, rather than rely on our own common sense (or, possibly, our panic button).

AI That Fooled the World—and Its Creators

When ChatGPT burst onto the scene in November 2022, it didn’t just shake up how we work, learn, or compose desperate emails at midnight. It sent tremors through governments, regulators, schools, universities, and pretty much anywhere someone might want to pretend they read a book. Naturally, worries surfaced: Would AI-generated text invade classrooms, governments, or the airwaves, masquerading as earnest, painstaking human work?

But here’s the twist: despite the mounting concerns and attempts to regulate, society is still struggling to put the genie back in the bottle. Mastering, or even just effectively policing, the use of large language models like ChatGPT has proven to be a challenge with no easy fix. For those hoping for a technological leash, the future remains uncertain.

OpenAI’s ‘AI Text Classifier’: Ambition Meets Reality

In late January 2023, OpenAI took its own shot at solving the issue. Enter the “OpenAI AI Text Classifier”: a tool designed to differentiate between text written by humans and that conjured up by artificial intelligence. At its launch, OpenAI’s spokesperson promised this was only the first step, inviting users to provide feedback and expressing hopes to deliver improved methods down the line.

To say the results were humbling would be polite. By OpenAI’s own launch figures, the classifier correctly flagged only about 26% of AI-written text as “likely AI-written,” while wrongly labeling human-written text as AI about 9% of the time. In other words, don’t wager your academic integrity, your job, or your TikTok reputation on its judgments. OpenAI suggested that, paired with other detection methods, it might still prove useful. But no miracle arrived.
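
For the stat-minded, those two figures measure different things: 26% is a true positive rate (AI text correctly caught) and 9% is a false positive rate (human text wrongly accused). Here is a minimal Python sketch of that arithmetic, using invented sample counts purely for illustration:

    # Hypothetical evaluation counts, invented purely for illustration.
    ai_flagged = 26        # AI-written samples correctly labeled "likely AI-written"
    ai_total = 100
    human_flagged = 9      # human-written samples wrongly labeled as AI
    human_total = 100

    true_positive_rate = ai_flagged / ai_total           # 0.26
    false_positive_rate = human_flagged / human_total    # 0.09
    print(f"Catches {true_positive_rate:.0%} of AI text; "
          f"falsely accuses {false_positive_rate:.0%} of human text")

Note that a tool could score 0% on both measures at once by simply never flagging anything, which is why the two rates have to be read together.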

Rather than offering a firm yes or no, the classifier placed each passage on a five-step likelihood scale (a rough sketch of how such a mapping might work follows the list):

  • Very unlikely
  • Unlikely
  • Uncertain
  • Possible
  • Likely
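
In code terms, one can imagine a detector producing a probability that a passage is AI-written, then translating that number into a label with fixed cutoffs. A minimal Python sketch, assuming invented thresholds; OpenAI never published its internals in this simple a form:

    # Hypothetical sketch: the cutoffs below are assumptions for
    # illustration, not OpenAI's actual published values.
    def label_from_score(ai_probability: float) -> str:
        """Map a detector's 0-to-1 'AI-written' probability to a verdict."""
        if ai_probability < 0.10:
            return "Very unlikely"
        if ai_probability < 0.45:
            return "Unlikely"
        if ai_probability < 0.90:
            return "Uncertain"
        if ai_probability < 0.98:
            return "Possible"
        return "Likely"

    print(label_from_score(0.72))  # prints "Uncertain"

The wide "Uncertain" band in the middle is the point: most real-world text lands in the mushy region where the detector simply shrugs.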

Not exactly Sherlock Holmes—and, more critically, a system that not only struggled to spot AI-generated writing, but at times accused honest humans of not being human enough.

Why the Project Was Shelved: When Accuracy Fails

The main pitfall from day one? An unsettling tendency to mislabel human-authored content as an AI creation, a digital false positive that no writer wants to see (except maybe your rival in a poetry contest). The fatal flaw, though, the final nail in the classifier’s coffin, was that its reliability never improved. With accuracy flatlined far too low to inspire confidence, OpenAI pulled the plug: the project was officially abandoned on July 20, 2023.

Is this an admission that today’s technology simply can’t do the job? Or was it a matter of insufficient willpower or misallocated resources, even at the very company that unleashed this digital powerhouse on the world? That question lingers, with all the philosophical ambiguity of a late-night college debate.

The Next Steps—and Why They Matter

OpenAI says it isn’t stopping entirely. According to its own statement, it is incorporating user feedback and researching more effective provenance techniques for text. The company is also working on mechanisms that let users discern whether audio or visual content was AI-generated. For now, though, these promises remain in the research pipeline.

Meanwhile, the world keeps wondering: How do we regulate, or even regulate away, the challenges of AI? New contenders like Google’s Gemini join the fray, and, rather ironically, even the organization that triggered this AI revolution has yet to provide the ultimate controls to tame it.

If you’re looking for a sure-fire way to catch AI imposters in the act, brace yourself for disappointment—at least, for now. Stay tuned, stay curious, and maybe, until the tech catches up, trust your own detective instincts just a little bit more than the machines!

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.