Invitation to a Community Discussion on the Risks of AI Detection Tools for Creators and Platforms
The Hidden Impact of AI Detection Tools on Writers, Platforms, and Fairness
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F069bbb10-d054-4197-8084-c53c4a5e7181_5472x3072.jpeg)
Dear friends,
I trust this post finds you well. After 17 days without writing on Medium, I felt compelled to give the community a quick update for three important reasons. You can read and comment on my earlier post on Medium to share your thoughts and experiences on these pressing issues.
First, my caring audience wanted to know why I was away, as I had left without a farewell story. Second, the platform’s silence on unfair payment issues continues. Third, a noticeable decline in views raises concerns that authentic content is being flagged as spam or AI-written.
We need community support in these challenging times; otherwise, the life of a creator can be lonely and stressful. I don’t want this to happen to you, so I keep you regularly informed and engaged. Please check out my blog post on Medium and, like many fellow writers, feel free to share your thoughts, exchange ideas, and connect with other creators.
Purpose of This Short Post
This newsletter aims to inform you about these growing issues, share insights from our editorial work on AI detection tools, and discuss how these tools now adversely affect creators and platforms. I also hope to gather your input, via a simple poll at the end, to better understand the community’s sentiments and experiences.
The Bigger Picture: AI Detection Tools and Their Flaws
In 2022, our editorial team conducted a six-month study on over 30 AI detection tools with the input of around 100 volunteers. By the end, we unanimously concluded that these tools were immature and unreliable, producing too many false negatives or false positives. As a result, we stopped using them to save time and prevent unnecessary drama.
Our editors agreed that human judgment remains the best way to identify AI-generated content. Using AI to detect AI feels illogical, like asking a computer to judge the smell or taste of food. Simply put, current AI tools cannot reliably recognize human voices using patterns and algorithms.
Despite these findings, some organizations still use AI detection tools such as Copyleaks. Copyleaks, which lists Medium as one of its major partners, has been linked to the issues many writers are now facing.
Experts believe this tool might unfairly flag authentic content, but proving it is nearly impossible. We do have some indications, which I will touch on briefly in the next section. This situation is the heart of the matter, and it inspired me to write this post, inform you, and gather your thoughts.
The Medium Dilemma of Views and Earnings
As discussed in the previous post, recent algorithm changes on Medium were intended to stop scam accounts from abusing the payment pool. While this is a positive step, many writers have reported drastic reductions in their earnings after a deliberate tweak, some dropping from dollars to cents or even zero, despite their stories receiving views and reads.
Adding to this frustration, well-crafted and authentic stories barely get any views, even from writers with large followings or those publishing in prominent publications. For example, some stories in a large publication like Illumination, with 189K followers, get under 10 views, usually from direct links shared by the writers or editors themselves.
Our technical team analyzed many stories from multiple publications and noticed that around 80% received hardly any views. They hypothesize that AI detection tools could be contributing to this decline, as those stories might have been flagged; apart from Medium’s curation choices for paying members, they can think of no other explanation. Let me now briefly explain the three main concerns related to the AI situation.
Three Key Concerns About AI Detection Tools
#1 - Unreliable Results Leading to Serious Consequences:
AI detection tools like Copyleaks often produce false positives, unfairly penalizing writers with advanced English skills or academic backgrounds. For instance, yesterday’s short blog post, which I dictated from a voice recording and lightly edited with MS Word and Grammarly, was flagged by Copyleaks as 89.5% AI-written, while another tool showed 0%. If Medium uses Copyleaks, my stories will automatically be flagged as AI-written or spam, hiding them from my audience and publication followers. I am especially concerned about Copyleaks because it even flagged a chapter of my 30-year-old PhD thesis as AI-written. Many book authors of my generation report that this tool now labels their decades-old book chapters as AI-generated. How absurd this is!
#2 - Loopholes for AI-Generated Content:
Writers who rely on AI tools have discovered clever ways to bypass detection by using “humanizers,” allowing fully AI-generated content to appear 0% AI-written. They subscribe to premium services to hide AI-written content. This creates an uneven playing field: authentic creators may be penalized while AI-generated stories gain visibility.
#3 - Data Usage Without Consent:
Experts warn that when AI tools check content, they may use that text for training purposes without the creator’s consent. While Medium prohibits AI training by default and Substack offers an opt-in/opt-out setting, this remains a critical concern. Allowing these tools to train on our work could ultimately harm creators by enabling AI companies to penalize us in the future.
Building a Collective Community Voice
As a community leader, I aim to create a meaningful dialogue around these issues and push for better solutions. We need a debate on whether platforms should use automated AI detection tools or rely primarily on human oversight, and your input is essential. Therefore, I am sending this post with a short poll.
Please respond to this optional, anonymous poll to capture your choice. Additionally, I encourage you to share your perspectives in the comments. Your feedback will help shape a clearer understanding of the community’s stance and could potentially influence Medium to reevaluate its approach. I will add the results of this poll to my upcoming blog posts on Medium and newsletters on Substack.
I didn’t include human checks in the poll because they are a no-brainer and essential for maintaining human voices, which we all desire as writers and readers.
I invite you to join me in advocating for fair and transparent practices that benefit all creators. We cannot afford to stay silent while AI companies offer paid detector and humanizer partnerships that exploit creators and platforms. This situation is a growing and serious societal issue that we can no longer ignore.
In my opinion, we are creating a false economy, wasting investments that could be spent on more important societal issues. As creators, we have a voice that can counter this trend and create a more meaningful future. Your input will be invaluable as I write a comprehensive paper addressing these pressing issues.
Thank you for reading my perspectives and sharing your thoughts.
I updated my Content Marketing Strategy Insights with my Substack books and training videos for members. I will soon add many strategic case studies to inform you and help you grow your audience. There are still three days left to become a paid member and benefit from the founding membership tier, plus the benefits of Illumination’s Substack Mastery Boost Pilot offerings. I will continue my free services for those who can’t afford paid membership.
Dr Yildiz, I am glad you touched on this important topic, as I knew it would become an issue. We discussed and covered this two years ago, and now Medium has suddenly started using AI detectors in desperation. I agree with not publishing such content, but penalizing genuine content is unfair. If these tools were bulletproof, I’d say yes, but I know they are not, as I tested them, so my answer is NO. These tools are useless and should not be used for assessing human work. It is not too hard for readers or editors to notice AI-written content. You also touched on the humanizers that disguise AI-written content.
It is a big NO from me because I want these tools to disappear. They are useless. They cause confusion and might corrupt the writing world, now encouraging humanizers, which is a type of cheating. I am also against the use of AI tools except for language checking with Grammarly or similar. Human checking is the gold standard. As you say, it is like asking a computer to smell or taste food for us.