AI detection: useful or unreliable? The facts at a glance
As the founder of the first AI-supported copywriting agency in the Netherlands, I experience daily how much AI-generated content sparks discussion. Clients often ask: Can tools like Originality.ai or ZeroGPT detect whether AI has written a text? If so, how reliable is it?
There is also concern that Google might penalise AI-generated content in search rankings. Questions arise about the quality and authenticity of AI-generated texts. In this article, I will explore these questions and concerns based on my experience at CopyRobin.
Through research, practical examples, and expert opinions, I will demonstrate why AI is not the enemy of good copy and why we should be sceptical of the hype around AI detection tools.
The Promise and Reality of AI Detection Tools
The rise of popular AI detection tools such as Originality.ai and ZeroGPT gives the impression that determining whether a text was written by a human or AI is straightforward. These tools often claim impressive accuracy (sometimes as high as 98%+ according to their own claims). In practice, however, this promise often falls short.
Take ZeroGPT, a free online detector used by many. In an independent test conducted by the University of Kansas, ZeroGPT correctly identified only 35% to 65% of AI-generated texts, a hit rate hovering around that of a coin flip.
OpenAI’s AI Text Classifier, designed to recognise ChatGPT-generated texts, was swiftly discontinued due to a 'low accuracy rate'. In other words, even the developers of the most sophisticated AI admitted that their detector was unreliable…
Originality.ai—a paid tool many marketers and copywriting agencies use—claims to achieve very high accuracy. However, independent reviews paint a different picture. Tech writer Yash Chawla tested Originality.ai and found that it classified his own (100% human-written) text as 65-71% AI-generated.
This is a classic example of a false positive: original human content mistakenly identified as AI-generated. Chawla concludes that Originality.ai “identifies content as AI even when it is not.” Such errors are not isolated incidents; the internet is full of anecdotes from writers falsely accused because a detector flagged their work as AI-generated.
Weaknesses and Misconceptions
Why do these detection tools fail so often? Firstly, there is no magic involved: they rely on recognising statistical patterns in language, typically by measuring how ‘predictable’ a text is.
AI models like ChatGPT produce grammatically correct and sometimes highly formulaic text—well-structured and predictable. But let’s be honest: many human-written texts (such as formal reports or student essays) are also predictable in tone and structure.
A detector can incorrectly classify such well-written text as AI-generated. Even the developers of Originality.ai acknowledge that short or highly streamlined texts and those covering widely discussed topics are more likely to be flagged as false positives.
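As a rough intuition for what ‘measuring predictability’ means, here is a toy sketch in Python. Real detectors estimate perplexity under a large language model; this stand-in uses character-bigram entropy, which is only an illustration of the idea, not how any named tool actually works:

```python
import math
from collections import Counter

def predictability_score(text: str) -> float:
    """Toy proxy for how 'predictable' a text is.

    Lower bigram entropy = more repetitive, 'safer' phrasing,
    which is the kind of signal detectors latch on to.
    """
    bigrams = Counter(zip(text.lower(), text.lower()[1:]))
    total = sum(bigrams.values())
    return -sum((c / total) * math.log2(c / total) for c in bigrams.values())

formulaic = "the cat sat. " * 5
varied = "Quixotic zephyrs vex jumbled gnomes; brisk fjords thaw."
# The repetitive text scores lower (more predictable) than the varied one.
assert predictability_score(formulaic) < predictability_score(varied)
```

The catch, as the paragraph above notes, is that plenty of genuinely human writing (formal reports, student essays) is also highly predictable by this kind of measure.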
A second major weakness is that these tools are relatively easy to bypass. Minor modifications to AI-generated text—using synonyms, changing sentence order, or applying human editing—can blur AI’s ‘fingerprint’.
There are even ‘AI-humanisers’ specifically designed to make AI text less detectable. It’s a cat-and-mouse game: as detection algorithms improve, so do AI models and countermeasures. This means that an AI detector can never be foolproof.
Nonetheless, there are persistent misconceptions. One is that a detector functions like a plagiarism checker. Clients see a percentage like “85% AI-generated” and sometimes interpret it as the proportion of copied text—but that’s not how it works.
That percentage is an uncertainty estimate from the tool, not legal proof. Another misconception is that these tools have some secret access to GPT’s ‘memory’ or dataset. In reality, they only analyse the provided text for probability and complexity. Understanding this is crucial to avoid treating such scan results as definitive.
At CopyRobin, we have encountered cases where a client ran our delivered text through a detection tool and became concerned about the AI percentage in the results. We explain how that score is determined and that human copywriters review and refine all our texts.
We have tested extensively ourselves and concluded that the same text can receive vastly different scores across different detectors—a clear sign that this is not an exact science.
Google’s Stance on AI-Generated Content
Perhaps the biggest concern among clients is: What does Google think about this? In the past, there was a belief that AI-generated content was ‘forbidden’ by Google. This originated from older guidelines against automatically generated, spammy content. However, Google’s stance has become clearer and more nuanced in recent times.
In February 2023, Google published an update stating: “Appropriate use of AI or automation is not against our guidelines.”
In other words, using AI is not against the rules as long as it is implemented appropriately. The key lies in the word ‘appropriate’. Google further clarified that it has been combating low-quality content, whether human- or AI-generated, for years. Their primary focus is on quality and intent, not the production method.
In fact, Google adjusted its search guidelines: where it previously said “content written by people”, it now says “content created for people”. This is telling—it’s about who the content serves (the reader), not who or what writes it.
Google’s Search Liaison, Danny Sullivan, and other representatives have emphasised repeatedly that high-quality, useful content is the determining factor. To quote a Google spokesperson: “Our focus on the quality of content, rather than how content is produced, has helped us deliver reliable, high-quality results to users for years.”
In plain English: If your content is valuable to the reader, Google will reward it—regardless of whether a human, AI, or a combination of both wrote it.
Is AI Use Detrimental to SEO?
In short, no, not inherently. AI-generated content is not harmful to SEO as long as it meets the criteria of relevance, originality, and user value. Google only penalises AI-generated text if it is purely designed to manipulate search rankings (such as bulk-producing meaningless, keyword-stuffed content—something already against Google’s guidelines before AI).
John Mueller of Google once called nonsensical auto-generated content ‘spam’, but Google’s current guidelines make it clear that useful AI-assisted content remains acceptable. Google wants to promote high-quality content, regardless of how it is created, as SEO expert Taylor Scher explains succinctly. Google essentially says: ‘We don’t care whether it’s human or AI, as long as it helps users.’
Does this mean AI always makes SEO easier? Not necessarily. It’s entirely possible to produce bad content using AI—and poor content performs poorly in SEO, regardless of whether it’s written by an intern or ChatGPT.
For example, AI tends to generate ‘safe’, generic-sounding text. If everyone in your industry starts using AI without adding human creativity, you’ll end up with generic, uninspiring website content. Such mass-produced mediocrity may be flagged by Google’s Helpful Content Update, which assesses whether content is genuinely useful and distinctive or simply automated and generic.
It all comes down to how you use AI. At CopyRobin, we use AI as a tool for our copywriters—not as a replacement. AI can help with initial drafts or routine text sections, after which an experienced writer refines it with their expertise and E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness).
That human experience (“Have I experienced this first-hand or researched it thoroughly?”) is something AI cannot yet replicate. SEO experts point out that the ‘Experience’ factor—the extra E in Google’s E-E-A-T guidelines—is something AI-generated text often lacks. So, ensure your content includes personal experiences, examples, or insights that a generative model cannot provide. That is human work.
SEO conclusion: AI is not a shortcut, but it’s also not an obstacle. It is one of many tools in a good SEO copywriter’s toolbox. Those who use it wisely can create quality content faster, and that content ranks just as well as (or even better than) purely human-written content.
However, those who misuse AI as a cheap content factory will quickly learn that low quality leads to low results.
Fears and Assumptions About AI-Generated Texts
Despite Google’s reassurances, I still notice that many business clients have certain fears and assumptions about AI-generated texts. Below are some of the most common ones I hear—and why they are largely unfounded:
- “Google will penalise us if we publish AI-generated content.” This is perhaps the biggest fear. Fortunately, this myth has now been debunked by Google itself. As mentioned earlier, Google only penalises content that violates its quality guidelines (spam, deception). AI itself is not a red flag.
Google explicitly states that AI-generated content is allowed as long as it is written for people and provides value. In short: an informative blog post that was (partially) created with AI assistance but offers valuable insights is no more likely to be penalised than a fully human-written piece.
At CopyRobin, we have published countless AI-supported texts that rank well and have never been penalised by Google.
- “AI-generated texts are lower in quality, and my readers will notice immediately.” This is a common assumption, but in reality, well-utilised AI is often indistinguishable from a human writer. Modern AI language models can construct naturally flowing sentences.
Quality content is about structure, clarity, factual accuracy, and relevance—all of which can be achieved using AI when given the right input. Our experience is that clients are often pleasantly surprised at how well an AI assistant can generate an initial draft, after which our copywriters refine it further.
A real-world example: A client required weekly blog articles. By using AI for research and an initial outline, we were able to maintain a consistent tone of voice and always deliver on time—with quality the client described as “indistinguishable from human writing.”
The jury of the Dutch Interactive Awards confirmed this when they praised CopyRobin for combining AI technology with human creativity to deliver high-quality content.
- “AI-generated texts lack authenticity and creativity—the human touch.” This concern is understandable: a robot has no emotions or unique perspective. However, AI is only as creative as the prompt it receives. The direction and guidance come from a human.
A skilled copywriter can use AI to explore different angles and gain inspiration before adding a unique insight or engaging anecdote from their own experience to bring the text to life. Authenticity ultimately comes from brand voice and experience. AI accelerates the writing process, giving writers more time to add that personal touch.
At CopyRobin, we see AI not as a replacement for creativity but as a tool that frees up time for creative input. Our copywriters can include personal examples, humour, and an original spin that maintains the client’s brand voice.
- “AI produces inaccurate information, which is risky.” It’s true that AI models sometimes make errors or ‘hallucinate’—in other words, they confidently present false information. We mitigate this risk through human fact-checking. A professional content agency using AI would never publish unverified AI output directly.
At CopyRobin, we fact-check all data, links, and figures in AI-generated texts. AI can, for instance, produce a solid first draft of a product description, but a human must verify that all product specifications are correct. When AI-generated texts are carefully edited and checked, the risk of such blunders is minimal.
On the contrary, AI allows us to quickly process and structure vast amounts of information, after which an experienced writer fine-tunes the details.
- “AI detection tools will expose our AI-generated text, and that will be a problem.” This is a new concern that has emerged since tools like Originality.ai became available. A client once told me he was worried that competitors would run our content through such a tool and then claim, “See? They use AI!”
My response: So what? First, as discussed, these tools are unreliable. They can even incorrectly label 100% human-written texts as AI, so why should we take their judgement as absolute? Second, even if a text is (partially) AI-generated, that’s not inherently a problem. We are doing nothing illegal or against the guidelines.
Ultimately, the only thing that matters is that the audience finds the content valuable. No end reader is going to use Originality.ai to check whether a sentence was typed by a human or a machine—they are there to get answers to their questions or learn something new. And if Google is satisfied (see above) and your readers are satisfied, then a score from ZeroGPT is irrelevant.
For peace of mind, we sometimes run texts through these detectors before sending them to clients, just to see what happens. In cases where a detector flags a carefully human-edited text as ‘highly AI-generated’, that simply proves that the tool is flawed, not the content.
- “Using AI for content writing is cheating—I’m paying for a human copywriter!” This sentiment may not always be vocalised, but it lingers beneath the surface. The idea that AI does the work while the client still pays full price can feel uncomfortable. I believe in transparency.
At CopyRobin, we offer different service levels. Clients can choose between fully human copywriting or AI-assisted copywriting (at a lower cost or with faster turnaround times). In all cases, a human plays a key role in the final product.
Think of AI as a personal assistant for the copywriter: it can rapidly organise information and draft a preliminary version, but the copywriter is the director who ensures the text aligns with the client’s brand identity and meets the desired quality standards.
Thanks to AI, we can deliver faster and offer better value for money without compromising quality—so the client ultimately benefits. It would only be ‘cheating’ if an agency presented AI-generated text as purely human craftsmanship when no human effort had been involved.
We believe in openness: AI is an integral part of the future of copywriting, and by using it smartly alongside human expertise, we deliver the best possible value.
Trends and Expert Opinions on AI and Text Quality
The discussion around AI in content creation is dynamic and ongoing. Here are some recent trends and insights:
1. From scepticism to adoption in the industry: While content professionals were once wary of AI, we now see a clear shift. More and more marketing and copywriting agencies are integrating AI tools into their workflow. CopyRobin was an early adopter of this trend and won the AI Supplier Award in 2024 for our AI-driven approach.
AI assistants like Jasper, ChatGPT, and Copy.ai are becoming commonplace among writers. The general sentiment is shifting, too: AI is here to stay—use it wisely. Companies like BuzzFeed, Reuters, and various blog platforms openly experiment with AI-generated content. This mainstreaming of AI forces us to rethink quality standards and best practices.
2. Experts emphasise quality & balance: Many SEO and content experts conclude that balance is key. Roger Montti (Search Engine Journal) states that AI-generated text alone cannot meet all of Google’s quality requirements (especially the ‘experience’ element). Still, he also acknowledges that AI can be useful as part of the process.
Others, like the creators of SurferSEO, argue: “AI is not the devil, nor the end of SEO—it’s just a tool, nothing more, nothing less.” Essentially, both perspectives agree on using AI consciously. Add human insights where possible and let AI handle the heavy lifting (like data processing or first drafts).
3. Google’s continuous updates: Google itself is constantly evolving. After the Helpful Content Update (which aimed to identify content created for search engines rather than people), Google introduced multiple core updates in 2023 and 2024. Some in the SEO community suspect that sites with unfiltered AI mass-produced content were affected. However, proving a direct cause-and-effect relationship is difficult since Google’s algorithms evaluate hundreds of factors.
What is clear is that Google is getting better at recognising quality signals. If AI-generated content is all written in the same generic style and lacks depth or unique information, ranking high will be difficult—not because a detector flags it as ‘AI’ but because competitors will likely produce better (potentially human-refined) content. The trend is that Google rewards quality through E-E-A-T and other signals while downgrading mediocrity. AI content must meet these higher standards.
4. Debate over detection tools and policy: The debate over AI detection software is intense in other sectors (such as education). Universities are struggling with whether to use tools like Turnitin’s AI detector or GPTZero to prevent cheating. However, with a growing number of false positives (cases of students wrongly accused), criticism of these tools is increasing.
A Bloomberg analysis found that even with a false positive rate of just 1-2%, hundreds of thousands of innocent essays could be falsely flagged as suspicious. This debate is spilling over into content marketing: more and more people are realising that, based on a detection tool alone, you can never be 100% sure whether a text was written by a human or by AI.
Even OpenAI confirmed this when they took their own detector offline due to its lack of reliability. The trend is shifting towards discouraging blind reliance on AI detectors and instead focusing on content quality control (fact-checking, editorial review, peer assessment) rather than origin verification.
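The false-positive problem above is really a matter of base rates: when the overwhelming majority of texts are human-written, even a detector with a low error rate produces a surprising share of false accusations. A minimal sketch (the percentages below are illustrative assumptions, not figures from the Bloomberg analysis):

```python
def share_of_flags_that_are_human(human_share: float,
                                  fp_rate: float,
                                  tp_rate: float) -> float:
    """Of all texts a detector flags as 'AI', what fraction were
    actually written by humans? Plain base-rate arithmetic."""
    flagged_human = human_share * fp_rate        # false positives
    flagged_ai = (1 - human_share) * tp_rate     # true positives
    return flagged_human / (flagged_human + flagged_ai)

# Illustrative assumption: 95% of submitted essays are human-written,
# the detector catches 90% of AI text and wrongly flags 2% of human text.
# Then roughly 30% of all 'AI' flags point at innocent writers.
print(round(share_of_flags_that_are_human(0.95, 0.02, 0.90), 3))
```

This is why treating a detector score as proof, rather than as one weak signal among many, leads to wrongful accusations at scale.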
5. The new normal—hybrid content creation: More and more writers, whether freelance or in-house, openly admit they use AI as a tool. The stigma is fading. The expectation is that hybrid content creation will become the norm: AI drafts, and humans refine and enhance. This collaboration can produce content that is created more efficiently while remaining creative and authentic.
At CopyRobin, we notice that once we explain our approach, clients are generally very positive. Especially when we show them the benefits: shorter turnaround times, consistency in tone of voice, and content that truly resonates with their audience. The conversation then shifts from “Was AI used?” to “Does this content meet my goals?”—which is exactly what matters.
Quality First, Regardless of Who (or What) Writes
AI-generated content and the associated detection tools understandably raise questions among clients. Anything new takes time to adjust to. However, as someone deeply immersed in this field daily, I can assure you: it’s time to move past the fear and focus on the facts.
AI detection tools like Originality.ai and ZeroGPT may sound impressive, but they are far from foolproof—their assessments should be taken with a grain of salt and certainly not treated as absolute truth. Search engines like Google primarily evaluate what content contributes to the user. AI use is not a problem as long as the end result is valuable, original, and trustworthy.
Our experience at CopyRobin confirms that a smart combination of AI and human expertise delivers the best of both worlds. We can produce content faster and at scale without sacrificing quality. Our clients benefit from high-quality texts that rank well and engage readers—regardless of whether an algorithm helped along the way.
If you’re a business decision-maker or marketing manager and you have doubts, my advice is: don’t get caught up in AI percentage scores or alarmist fears about Google penalties. Look at the content itself. Ask yourself: Does this text serve my audience? Is it accurate, relevant, and well-written? If the answer is yes, then the text has done its job—whether with a little AI assistance or none at all.
Ultimately, AI is just a means to an end, not the end itself. The goal remains unchanged: high-quality content that your audience and search engines appreciate. And that won’t change, even in this AI-driven era.
Interested in our AI-powered content creation? Book a free consultation with us today.