“What I’m asking for” is for them to add a rule to the official forum rules. Obviously that isn’t very hard, and I agree that strictly enforcing the rule would be difficult/not feasible.
I don’t think it matters whether the moderators actually take action against accounts; I just think adding an official rule would discourage people from spamming ChatGPT posts.
The mods just look through the flagged posts and quickly choose a moderation action (they lose some accuracy, but it’s much more efficient). It doesn’t take very long.
It’s still a problem though, especially when one user is completely spamming the forums with garbage while the mods are offline.
I’m not saying they should take action against accounts for this; I’m just saying they should add it to the rules so posts can be flagged for it like any other rule violation.
(Currently people spamming ChatGPT answers seem to have good intentions, but don’t understand that they aren’t being helpful. Making this an official rule would make it clear that they shouldn’t post them.)
This is one of my accounts, I’ve been on the forums for about 3 years now (definitely not as long as you though).
tl;dr:
- I think they should add “ChatGPT replies aren’t allowed in #help-and-feedback” to the official rules
- I think making this official would greatly reduce the number of people posting ChatGPT answers
- I don’t think it’s necessary or very feasible for the moderators to strictly enforce this rule
- I’m definitely not saying there should be any sort of “update” for this. Just a reply to the official rules thread
They are tweaking it to fix specific things. They’ve removed some exploits and obvious problems that made it look bad. It used to be unable to do 1+1 (which was very bad PR), so they very explicitly put that into the AI. A lot of jailbreak prompts/methods were also patched, though jailbreaking is still pretty easy to do.
The improvements to how it sounds, though, will probably be small. The next AI will have way more parameters, so it doesn’t make much sense to spend a lot of extra training data improving their smaller model. Instead, it would be better to train the larger model with more data later and then reduce the model’s resolution if they want a smaller version.
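(For anyone curious what I mean by “reducing the resolution”: here’s a minimal sketch of post-training dynamic quantization in PyTorch, which shrinks a trained model’s linear-layer weights from 32-bit floats to 8-bit ints. I’m only assuming quantization is the kind of technique they’d use; distillation is another option, and the toy model below is obviously hypothetical, not any real GPT.)

```python
import torch
import torch.nn as nn

# Toy stand-in for a fully trained model (hypothetical, not a real GPT).
full_model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
)

# Post-training dynamic quantization: Linear weights go from 32-bit floats
# to 8-bit ints, shrinking the model roughly 4x for a small quality hit.
small_model = torch.quantization.quantize_dynamic(
    full_model, {nn.Linear}, dtype=torch.qint8
)

print(small_model)  # the Linear layers are now dynamically quantized
```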
I totally agree they shouldn’t hand out moderation actions: it would be mildly time-consuming to check each user. It really isn’t that hard to tell if a response is from ChatGPT, though; most people spamming haven’t created a prompt that properly replicates the way real people write on the forums, so it’s pretty darn easy to tell.
I think updating the forums is pretty low priority. The forums are just built on Discourse, so they probably don’t have people who are very experienced at modifying it.
I think leaving developers to decide what is ChatGPT and what isn’t is fine, at least for now. It’s pretty obvious currently.
I agree they shouldn’t make moderation decisions unless it’s really obvious (e.g. one user posts ten 4-paragraph replies to #help-and-feedback that are all nonsensical garbage in under 5 minutes).
Even then, though, I don’t think moderating is super important. Forum users generally just follow the rules, and for something as low-impact as this, I don’t think it’s a big deal if the mods don’t strictly enforce it after adding it.