Ban ChatGPT for answers and resources

It’s really easy to ban ChatGPT answers:

  1. Add a rule to the developer forum rules that ChatGPT answers aren’t allowed in #help-and-feedback

That’s it. It’s just one step.

It might seem like it would be hard to tell whether an answer is from ChatGPT, but it’s actually shockingly easy.

As far as moderating ChatGPT answers, you hit the nail on the head:

People can use common sense to determine if something is from ChatGPT. Looking at the user’s other answers can also confirm it with even more certainty.

By officially making it a rule, it’s very clear to users that they shouldn’t spam ChatGPT answers. By now though it seems like most people spamming ChatGPT answers have realized that:

  1. The answers are almost never correct
  2. Most people can tell their answers are from ChatGPT
  3. Most people do not appreciate spam

and have stopped posting ChatGPT answers, so I don’t think this is a big deal anymore. I still think officially adding it to the rules would further solve the problem though.

Still wouldn’t work though, since DevForum moderators aren’t that capable of figuring out whether someone is using AI to post or not. They also don’t moderate anybody without actual proof.

ChatGPT also evolves on a daily basis, meaning it’ll just get harder to figure out if posts are made by a real human as time goes on. It’ll eventually be indistinguishable within the next few years.

Players would most likely abuse it by falsely claiming that innocent posters with sufficient grammar are just using ChatGPT, likely starting a toxic new trend where DevForum members get unfairly punished or banned on a regular basis.

So while these ideas do sound great on paper, they would really only cause more problems for everyone here and would even make DevForum moderation slower as a result.

That’s why it’s always best to think ahead before suggesting something that could turn out wrong.

It would be great if we could have this concept without any problems, but it realistically wouldn’t happen.

1 Like

10 Likes

What is the difference between this and how it is already? People can already maliciously flag posts. The forums are set up to deal with that.

If it’s against the rules officially people can flag the posts. If one user gets all of their spammed ChatGPT answers flagged, moderation will take care of the rest. It’s really not that hard.

I don’t understand why you think this isn’t super easy:

  1. It’s really obvious when #help-and-feedback posts are from ChatGPT. I can DM you some to show you. (There are a few users who only post ChatGPT replies, so it’s easy to get a few examples.)
  2. If it’s officially against the rules, ChatGPT answers can just be flagged. There doesn’t really need to be more moderation than that. I think it’s not very smart to just give up on something because it’s an incomplete solution.

This post is just asking for ChatGPT answers to be against the rules officially. ChatGPT posts are already getting flagged either way. The thread is just asking for it to be official.

I wouldn’t actually consider this to be proof though, since that kind of thing feels more like guessing or assumption tbh. It’s an okay idea, but they still wouldn’t hand out moderation actions over this.

I was actually referring to the ChatGPT brand as a whole, paid or otherwise. The developers could also tweak ChatGPT-3 to make it sound more realistic later on, even without user data.

It’s honestly a lot easier to just claim that someone’s using ChatGPT, rather than saying that some other rule is being broken.

That’s the kind of thing moderators could make mistakes on as well if they ever had to decide whether someone’s using AI or not, so many flag spammers would probably just go that route.

From what I can tell, you’ve only been on DevForum for less than 3 months…so I’ll just give you a quick breakdown of what DevForum moderation is like nowadays.

Basically, Roblox is like Valve and TF2 when it comes to DevForum. They spend years trying to do forum updates that should be very simple and easy to accomplish.

On top of that, the moderation system is very flawed to the point where only 1-2 moderators are shown online at the same time and tend to only be active for short periods.

Sometimes the DevForum can even be completely unmoderated for a few hours, especially on nights or weekends, since the staff team is American, unlike Roblox moderation, which is outsourced worldwide.

That’s pretty much why banning ChatGPT isn’t feasible, because they simply don’t have the time and resources to moderate a new rule like that yet.

And even if they tried to do something like this in their current state, they would probably just make constant mistakes and innocent players would get incorrectly punished.

I would just continue flagging ChatGPT posts the way it’s currently being done then, because what you’re asking for just isn’t something they’re capable of doing.

It’s depressing for sure, but a lot of people on here already know how Roblox is when it comes to DevForum updates. So maybe just lower your expectations a little bit.

2 Likes

“What I’m asking for” is for them to add a rule to the official forum rules. Obviously that isn’t very hard, and I agree strictly moderating the rule would be difficult/not feasible.

I don’t think it matters if the moderators moderate accounts, I just think adding an official rule would discourage people from spamming ChatGPT posts.

The mods just look through the flagged posts and quickly choose a moderation action (they lose accuracy, but it’s much more efficient). It doesn’t take very long.

Still a problem though, especially when one user is just completely spamming the forums with garbage while the mods are offline.

I’m not saying they should take action against accounts for this, I’m just saying they should add it to the rules so it can be flagged like every other rule.

(Currently people spamming ChatGPT answers seem to have good intentions, but don’t understand that they aren’t being helpful. Making this an official rule makes it clear they shouldn’t.)

This is one of my accounts, I’ve been on the forums for about 3 years now (definitely not as long as you though).

tl;dr:

  • I think they should add “ChatGPT replies aren’t allowed in #help-and-feedback” to the official rules
  • I think making this official would greatly reduce the number of people posting ChatGPT answers
  • I don’t think it’s necessary or very feasible for the moderators to strictly enforce this rule
  • I’m definitely not saying there should be any sort of “update” for this. Just a reply to the official rules thread

They are tweaking it to fix specific things. They’ve removed some exploits and obvious problems that make it look bad. It used to not be able to do 1+1 (which was very bad PR), so they very explicitly put that into the AI. A lot of jailbreak prompts/methods were also patched, though it’s still pretty easy to do.

The improvements in how it sounds will probably be small, though. The next AI will have way more parameters, so it’s not very smart to pour a lot more training data into their current, smaller model. Instead, it would be better to train the larger model with more data later and then compress it down if they want a smaller version.
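
(To illustrate what “train the larger model and then compress it down” usually means in practice: a common approach is knowledge distillation, where a small model is trained to imitate the big model’s outputs. Below is a rough sketch of the idea, assuming a PyTorch-style setup; the function and the numbers are purely illustrative, and this obviously isn’t OpenAI’s actual method or code.)

    # Rough sketch of knowledge distillation: a small "student" model learns to
    # match the softened output distribution of a large "teacher" model.
    # Purely illustrative -- names and numbers here are made up.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften both distributions so the student also learns from the
        # teacher's "near miss" probabilities, not just the top answer.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # Scaling by T^2 keeps the gradient scale comparable across temperatures.
        soft_loss = F.kl_div(soft_student, soft_teacher,
                             reduction="batchmean") * temperature ** 2
        # Still anchor the student to the real labels.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss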

I totally agree they shouldn’t hand out moderation actions: it would be mildly time-consuming to check each user. It really isn’t that hard to tell if a response is from ChatGPT though; most people spamming haven’t created a prompt that properly replicates the way real people write on the forums, so it’s pretty darn easy to tell.

I think updating the forums is pretty low priority. The forums are just built on Discourse, so they probably don’t have people who are very experienced modding it.

I think leaving developers to decide what is ChatGPT and what isn’t is fine, at least for now. It’s pretty obvious currently.

I agree they shouldn’t make moderation decisions unless it’s really obvious (e.g. one user posts ten 4-paragraph replies to help and feedback that are all nonsensical garbage in under 5 minutes).

Even then, though, I don’t think moderating is super important. Forum users generally just follow rules, and for something as low-impact as this I don’t think it’s a big deal if they don’t strictly enforce it after adding it.

1 Like

Sometimes bots do better than people too. Banning all wrong answers/resources is what you should aim for.

What needs to happen is that posts generated by ChatGPT should be marked as:

This post includes AI generated content

AI generated posts are fine, but they need to be marked as AI generated.

If your entire purpose on the forum is to post AI generated posts and code, then you don’t deserve to be on the forum. These posts have been proven time and time again to be unhelpful, and they don’t belong on a forum where people expect answers from real people.

If there was a label that said a post was generated by AI, it would pretty much be pointless, because the answer may not work anyway for people who seek help in #help-and-feedback:scripting-support. What would be the point of putting a “This post was made by AI” label on something when you could literally use the AI yourself for your own needs?

Also, people who make AI generated posts in #development-discussion and other categories most likely just want to farm likes, like the OP said.

AI generated posts need to be banned because they’re just going to clutter the forum with replies.

3 Likes

Okay, I’ve changed my mind about allowing factually correct information generated by ChatGPT. I think outright banning it entirely is a reasonable solution, at least until it can improve enough to be reliably correct (as SOF did), so effectively indefinitely. I believe that, as it stands, AI has no place on this forum unless a developer is working on their own AI in Roblox. I suppose that goes for any forum where complex information/concepts are being shared. The number of AI generated responses in the support categories is steadily increasing, and it’s genuinely becoming exasperating trying to read over people’s blatant misinformation and correct it before a novice developer uses it in their game, only for it to inevitably fail and give them an unintended if not completely dysfunctional result.

What’s even worse is when people put in effort articulating a long detailed post about a concept that clearly explains what is going on only to be completely ignored and have an AI generated response accepted even if it isn’t correct.

I noticed that ChatGPT uses a few specific sentence structures, so those could probably be used to help determine whether or not posts are AI generated.

From a reply on the ban post on SOF,

They all look pretty much the same: perfect grammar, short sentences with exactly high-school English, pandering/conversational tone. Much of the time, even if they’re right, they’re answering a different question than posted by OP or there’s something pretty clearly “off”. All the code snippets have obnoxiously helpful comments in them. The answerer will generally not take the time to format inline code. They stick out like a sore thumb after you’ve seen 3 or 4.

Some common occurrences are things like:

  • To ____,
  • Very long, detailed responses for a basic task
  • Often ends in “Hope this helps!”
Funnily enough, I asked it what some common words used in its responses are, and it actually pointed out some I never noticed before.
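
(For what it’s worth, those tells are simple enough that even a crude script could pre-screen posts for a human to review. Here’s a minimal sketch in Python; the phrase list is made up based on the patterns quoted above, and this is a heuristic for flagging only, not proof, so it would produce false positives.)

    # Crude phrase-based pre-screen for "looks like ChatGPT" posts.
    # The phrase list is illustrative only; a match is a hint for a human
    # reviewer, never grounds for automatic moderation.
    import re

    TELLTALE_PHRASES = [
        r"\bhope this helps\b",
        r"\bas an ai language model\b",
        r"\bcertainly\b.*\bhere is\b",
        r"\bin conclusion\b",
    ]

    def looks_like_chatgpt(post: str, threshold: int = 2) -> bool:
        """Return True if the post matches at least `threshold` tells."""
        text = post.lower()
        hits = sum(bool(re.search(p, text)) for p in TELLTALE_PHRASES)
        return hits >= threshold

    # Example: this would only flag the post for review, not punish anyone.
    print(looks_like_chatgpt("Certainly! Here is a script... Hope this helps!"))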

All in all, I think a moderator occasionally falsely assuming a response is AI generated is a much more favourable outcome than allowing often-incorrect responses from ChatGPT. Especially since, when responses are flagged and removed, they can often be appealed.

Again, and I can’t stress this enough, if people want AI responses, they can go to the site and generate it themselves. There is no need and no place on this forum to post AI responses.

2 Likes

It shouldn’t be banned. It gives code snippets, and all of its resources are based on GitHub resources. If a beginner has AI generated code and doesn’t understand it, at least give them some help with it; it’s like those YouTube tutorials that teach people how to write code.

I have been eyeing this topic ever since it was published, so I might be biased. Just giving my own personal opinion.

ChatGPT is basically AI: it takes in human-written information and dynamically outputs it based on whatever you write.


As @SOTR654 said, it is not a complete solution, just a help.
Pretty sure it’s also missing 2 years of data.

Some people like to use it to reply to multiple different topics to try to gain likes, fame, or whatever their main form of motivation is for using ChatGPT to reply to EVERYTHING.
Which falls under spam and vandalism: basically pointless, mostly long lines of code that do nothing. Most people don’t even test the code before publishing it.

I’ll use a few points from here


and from here

(I’d rather show where my opinions originate from than rewrite them.)

In my opinion, instead of using impractical, outdated AI, it’s better to use your eyes, fingers, and brain to just Google whatever the topic is. Try to learn, and then teach the person who originally asked the question.

ChatGPT isn’t even only used for code. It’s used for replies as well.
For example:

[screenshot]

Literally admits to using ChatGPT.

Now look at how they really talk:

[screenshot]

Note how the first screenshot shows how many likes they got, which is probably one of the many reasons people post low-effort garbage generated by AI.

If we banned ChatGPT or any sort of AI, it’d benefit the DevForums a ton.
I am pretty sure some of my posts have been replied to by people who use AI as well. I’d rather not call them out randomly because I am not sure.

3 Likes

@DevRelationsTeam

This is still a huge issue with the Roblox Developer forums.

Low-quality AI generated posts are becoming more common. At minimum, there should be a rule that AI generated posts must be marked as such, and that low-quality, erroneous AI generated code is against the rules.

2 Likes

So with the new announcement we can be sure AI code won’t be banned from the forum, can’t we? Generative AI on Roblox: Our Vision for the Future of Creation

We can’t fight against the future.

The objective is not to totally ban AI generated stuff.

What we need is:

  • Have a rule that all AI generated posts must include a flair that tells that the post contains AI generated content
  • Ban people who post invalid, unhelpful, or broken AI code and responses. Also prohibit mass farming of low quality AI content
  • Have a complete ban of AI content on #development-discussion and #updates

Hey folks, just letting you know we have seen this feature request and are keeping an eye on this.

For now we are not taking any specific action here, but folks should be aware that you are responsible and accountable for anything you post under your account. You must ensure that you are contributing meaningfully to the progression of the discussion and not posting misinformation.

Most AI conversational apps out there still exhibit tendencies to produce content that is incorrect or unrelated to your query, and so you would put your credibility as a creator at risk by posting this content on the forum, as well as your forum moderation status if you are found to be excessively detracting from discussions.

Our recommendation is that you do not post such content on the forum unless you have the expertise to fact-check the response properly to make sure it adds value to the discussion or support question that you are posting on. Be sure to adjust any incorrect or missing information before posting.

We’ll keep an eye on how this develops and may change this policy over time.

40 Likes

^ Marking this as solution for higher visibility since there are many replies in this topic.

7 Likes

But the thing I fear most is that people viewing the factually incorrect information report it, then the moderator views it, it looks correct, and so it’s reinstated and/or the reporter is punished (the concern mainly arises from potential false invocation of rule 8.2).

Further, looking into the broken rules matrix, there is no rule against misleading or factually incorrect information. There are a few rules that may vaguely relate to misinformation (namely the rules about low-quality posts, spam, and off-topic posts), but from what I can see, there is no rule about misinformation, which is the biggest point people in this thread are making. People are generally not against AI generated content as a whole, just the incorrect information.

Again, I understand you said it would damage the poster’s reputation or credibility, but please hear me out: realistically, someone (especially if it’s someone young) googling and stumbling upon a post with an incorrect AI generated response wouldn’t know that it is AI generated, and wouldn’t know that the information is incorrect. So right away they’ve wasted time trying to implement the solution, especially if it’s a very niche topic with few other posts on the subject, and/or if the post gets put as one of the top results on Google.

Also, as time goes on, the number of AI generated responses is steadily increasing, and this forum will inevitably turn into a cesspool experiment of people posting questions and marking incorrect information as solutions, or using AI to create a conversation piece where other people will ask ChatGPT to write a response to that topic, which really isn’t what this forum should be about. Sure, there is a rule against meaningless or non-contributive responses, but realistically a moderator who gets hundreds of reports a day and is looking over a long, opinionated post in development discussion won’t understand right away why a person flagged the response as meaningless unless they’re actually looking for an AI generated one.

So I am asking you, and whoever else may be considering the implementation of this feature request, to please reconsider banning it before it turns into another clothing catalog fiasco (except instead of clothing, it’s an oversaturation of AI responses). I mean, look at the categories now, only a few months after the third version of ChatGPT was released; we already have an influx of these responses.

Outright banning AI content would be a simple blanket solution for all the issues mentioned in this response and for the points people are making. Realistically, I (and I think many of the other people who’ve given input) believe that even if you don’t ban AI responses outright, at least making an amendment to the rules to disallow factually incorrect information would also be sufficient.

2 Likes

I think the best course of action would be to implement the following rules:

  • Have a rule that all AI generated posts need to include a flair that tells that the post contains AI generated content
  • Ban people who post invalid, unhelpful, or broken AI code and responses. Also prohibit mass farming of low quality AI content
  • Have a complete ban of AI content on #development-discussion and #updates

This would be the most pragmatic approach I can think of.
It would ensure that the aforementioned problems with AI generated content are dealt with while not overshooting into micromodding. It would deal with all of the issues without requiring a total blanket ban, and enforcing it would still be sufficiently easy.

Regardless, complete inaction on Roblox’s part is becoming less and less sufficient as the amount of AI content steadily grows.

1 Like

Thank you for the thoughtful response and I understand your concerns! I’ll elaborate a bit further based on your response here:

It is important to note that misinformation is not exclusive to AI-generated content. Humans are also capable of posting inaccurate or misleading information.

Additionally, especially as AI models continue to improve, it is not possible for our moderators to say with high confidence whether a post was written by AI or by a human. Therefore, we need to rely on the community here to help set the record straight by correcting misinformation and using replies, Likes and solutions to boost the right posts, regardless of the existence of AI content.

It is covered by “Global Rule 5”, since misinformation is non-contributive and does not help with meaningful topic progression, but I can forward the feedback that maybe we can be more clear on that.

We really need the community to help set straight incorrect information, as you folks are the experts in this domain, and our moderators may not always have the same level of context.

Make sure to keep in mind that the person may not be intentionally posting incorrect information, but may simply be misinformed or lacking certain knowledge.


Appreciate your input and we will definitely keep monitoring the situation. As with all content on the forum, it is up to the individual poster to ensure that they are contributing meaningfully to the discussion and not posting misinformation.

PS: I’m going to reach out in private messages to get some examples of where there is very derailing use of AI content where our moderators have not taken action.

7 Likes