Well, well, well, what do we have here? The guardians of Stack Overflow, those volunteer moderators who’ve turned the site into their personal fiefdom, are having a tantrum. As of June 5th, 2023, they’ve gone on a historic general moderation strike, joined by over 850 contributors and users.
Their beef? Stack Overflow, Inc. isn’t giving them the attention they feel they deserve when it comes to policing AI-generated answers. To which I say, “Welcome to the club, mates.”
These self-appointed stewards of quality, who’ve spent years closing valid questions and deleting good answers on a whim, are suddenly concerned about the core mission of the Stack Exchange network. The irony is almost too delicious to swallow.
One of their primary grievances revolves around a new policy restricting AI-generated content removal. They fear it will lead to an influx of inaccurate information and plagiarism, eroding trust in Stack Overflow. I can’t help but chuckle because trust was eroded years ago, as far as I am concerned.
Then they whine about recent policies that allegedly undermine the autonomy of individual Stack Exchange communities, leaving them feeling marginalised and disempowered. Sounds familiar, doesn’t it? Thanks to their heavy-handed moderation, it’s the same feeling countless new users have experienced.
I also want to point out that, while I am generalising here, not all moderators are responsible for the demise of Stack Overflow. But, if you search Hacker News, blogs and other parts of the web, you’ll find plenty of tales of heavy-handed moderation. You can even see it for yourself by browsing through Stack Overflow.
In their strike, they’ve decided to suspend activities like flagging, closing posts, and more. They demand a policy change to allow them to enforce policies against AI-generated answers. Meanwhile, Stack Overflow, Inc. is holding its ground, pointing out that the tools moderators used to detect AI-generated content had a high rate of false positives, leading to unnecessary suspensions of innocent users.
We’ve seen the same situation play out in academia. Students are being falsely accused of using ChatGPT and other AI tools, even though the claimed accuracy of those detection tools is far from the truth. So, I am not siding with the moderators, because there is currently no definitive way to know if something is AI-generated. Sometimes there are tells, but oftentimes, you can’t tell.
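To put that in perspective, here is a quick back-of-the-envelope sketch with purely made-up numbers (none of these figures come from Stack Overflow or any real detector) showing why even a detector that claims “95% accuracy” can wrongly flag a lot of innocent people when only a small fraction of posts are actually AI-generated:

```python
# Purely illustrative numbers -- the real figures are not public.
# Assume a detector that flags 95% of AI-written posts (true positive rate)
# and wrongly flags 5% of human-written posts (false positive rate).
total_posts = 10_000      # hypothetical posts reviewed in a month
ai_fraction = 0.05        # assume only 5% of them are actually AI-generated

ai_posts = total_posts * ai_fraction
human_posts = total_posts - ai_posts

true_positives = ai_posts * 0.95      # AI posts correctly flagged
false_positives = human_posts * 0.05  # human posts wrongly flagged

flagged = true_positives + false_positives
print(f"Flagged posts: {flagged:.0f}")
print(f"Wrongly flagged humans: {false_positives:.0f} "
      f"({false_positives / flagged:.0%} of all flags)")
```

In that made-up scenario, half of everyone flagged would be innocent, which is exactly the kind of false-positive problem that leads to unnecessary suspensions.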
An even more bonkers story is about a Texas professor who failed an entire class because his detection method said every student had used ChatGPT. Basically, in this instance, the professor just asked ChatGPT whether it wrote the texts he fed it, and it said yes for everyone. The chances of every single student cheating would be incredibly low. This is another example of heavy-handed, accusatory behaviour around AI usage.
So, to the striking moderators, I say this: You’re not the victims here. You’re part of the problem. You’ve spent years running Stack Overflow like a private club, alienating new users and stifling the spirit of the community. Don’t act surprised when the tables turn. You’ve been ignoring users for years.
Comparing that silly professor story to moderators at Stack Overflow who carefully check the messages?!?!? Most of the mods don’t even use AI detectors!
And yeah, sometimes things get deleted that shouldn’t be, but look at what I found on the web: “As of November 2022, there are more than 100 million visitors on Stack Overflow every month.”
Isn’t it normal to have some hiccups in such a large community? Most users do just fine there.
Also, I am surprised you say “trust was eroded years ago”. As a developer, Stack Overflow is the place where I’ve gotten the best answers to my development questions across a wide range of topics, and most of the accepted answers there work flawlessly. I trust Stack Overflow a lot. I hope this AI thing won’t bring down the answer quality.
@Stack Overflow User
The moderators themselves aren’t being compared to the professor in the linked story; it’s the accuracy of AI detectors that is being compared. Sure, in some cases it might be obvious that an answer was AI-generated because it’s completely wrong, but the issue being discussed here is the moderators wanting to use these tools. They do not work, and their claimed high accuracy is also wrong.
Stack Overflow has served us well for years now, and I know it has helped me many times. But that doesn’t discount the fact that there are many documented instances of moderators getting it wrong: valid questions marked as off-topic, among other mistakes, and the process for appealing those decisions is not fun to deal with.
We are generalising here. I am not saying Stack Overflow is completely shit and terrible. But the way it has operated all these years hasn’t been without issue, and as far as I am aware there have been no efforts to make SO or other Stack Exchange network sites less hostile to newcomers.
The problem SO finds itself in is that it has good quality, helpful answers for some questions, but it’s such an off-putting place that nobody wants to answer new ones. So you end up with a situation where people mostly use SO in a read-only sense: even though they might have knowledge to share, they’re put off from contributing.
Admittedly, you can get better answers in topic-specific subreddits on Reddit, where there isn’t nearly as much toxicity and people are more inclined to help.
100% spot on!
The moderators who have ruined this community are now whining about being moderated. They’re pathetic.
I mainly participate in the physics one, and it seems that only a few moderators spend their lives rewriting every question and then answering them.
Any out-of-the-box question or answer is removed.
Imagine spending your life moderating people in order to get gratification points on a website. How sick is that!