Ban ChatGPT for answers and resources

You assume that everyone who is looking for help knows about chatbots and knows how to write queries that will get them the correct answer.

As has been pointed out, chatbots are a very new technology and a lot of people don’t fully understand how to get them to work. If someone who isn’t familiar with Lua or coding in general tries a simple query, 9 times out of 10 the chatbot won’t give them what they want.

On the other hand, if I convert what they want into a query the chatbot understands better, I can produce quality code that (so far) solves the problem more than 80% of the time.

Again, if I come to these forums it’s because I have a problem that needs solving. Do most coders care if the answer is AI-generated, even if that answer solves their problem?

Are you trying to talk yourself into believing that just because you had a robot write something for you, you made it?

You solved nothing; you merely posted what an AI told you (likely without even verifying that the answer was correct).
There’s giving people bad advice by accident because you genuinely didn’t know any better, and then there’s outright negligence: copy-pasting whatever a computer tells you.
You shouldn’t be proud that you’ve spouted so much misinformation, especially misinformation that you didn’t even create.

5 Likes

The difference between you and them:
They tried. Like actually tried. A human was behind that answer. Not a bot.

Unlike them, you gave a solution, an incorrect one at that, with no effort. You didn’t even bother to see if it was right. Giving incorrect information is wrong on so many levels. If you must use the bot, make sure you aren’t providing outdated or incorrect information.

Honestly, I’m fine with ChatGPT for answers as long as it’s not creating unhelpful feedback. If you’re checking your answer or adding a disclaimer that it’s made by AI (and you’re not posting “fixes” that solve the old problem but create a new one), sure, go ahead. I don’t care. The problem is lazy people saying “oh, this person needs help, welp, time to go to ChatGPT.” Spoiler alert: it’s not helping.

2 Likes

What I’m noticing here is that many people are demanding that ChatGPT responses be banned, yet they’re not actually providing any solutions for how to make that a reality.

…Like, how exactly are you supposed to tell the difference between a person and an AI when it comes to typed answers on an online forum?

Because literally anyone who uses correct grammar and proper punctuation can easily read like an AI-generated response as well, which is exactly why such posts shouldn’t be moderated.

This kind of thing is very easy to abuse once you allow faulty AI detectors to carry out moderation actions against users, meaning that anyone who uses the DevForum would be put at risk.

I understand being annoyed at ChatGPT responses, but Roblox just isn’t responsible enough to handle an issue like this yet, since they can’t even moderate the DevForum properly on weekends or nights in their current state.

So the best thing you can do is just flag any answers that either make no sense or are just wrong entirely, because more faulty bot moderation is the last thing we need right now.

3 Likes

It’s really easy to ban ChatGPT answers:

  1. Add a rule to the developer forum rules that ChatGPT answers aren’t allowed in #help-and-feedback

That’s it. It’s just one step.

It might seem like it’s hard, but it’s actually shockingly easy to tell.

As far as moderating ChatGPT answers, you hit the nail on the head:

People can use common sense to determine if something is from ChatGPT. Looking at the user’s other answers can confirm it with even more certainty.

Officially making it a rule makes it very clear to users that they shouldn’t spam ChatGPT answers. By now, though, it seems like most people spamming ChatGPT answers have realized that:

  1. The answers are almost never correct
  2. Most people can tell their answers are from ChatGPT
  3. Most people do not appreciate spam

and have stopped posting ChatGPT answers, so I don’t think this is a big deal anymore. I still think officially adding it to the rules would further solve the problem though.

It still wouldn’t work, though, since DevForum moderators aren’t really capable of figuring out whether someone is using AI to post. They also don’t moderate anybody without actual proof.

ChatGPT also evolves on a daily basis, meaning it’ll just get harder to figure out whether posts are made by a real human as time goes on. It’ll eventually be indistinguishable within the next few years.

Players would most likely abuse it by falsely claiming that innocent posters with decent grammar are just using ChatGPT, starting a toxic new trend where DevForum members get unfairly punished or banned on a regular basis.

So while these ideas sound great on paper, they would really only cause more problems for everyone here and would even make DevForum moderation slower as a result.

That’s why it’s always best to think ahead before suggesting something that could turn out wrong.

It would be great if we could have this concept without any problems, but it realistically wouldn’t happen.

1 Like

10 Likes

What is the difference between this and how it is already? People can already maliciously flag posts. The forums are set up to deal with that.

If it’s against the rules officially people can flag the posts. If one user gets all of their spammed ChatGPT answers flagged, moderation will take care of the rest. It’s really not that hard.

I don’t understand why you think this isn’t super easy:

  1. It’s really obvious when #help-and-feedback posts are from ChatGPT. I can DM you some to show you. (There are a few users who only post ChatGPT replies, so it’s easy to get a few examples.)
  2. If it’s officially against the rules, ChatGPT answers can just be flagged. There doesn’t really need to be more moderation than that. I think it’s not very smart to just give up on something because it’s an incomplete solution.

This post is just asking for ChatGPT answers to be against the rules officially. ChatGPT posts are already getting flagged either way. The thread is just asking for it to be official.

I wouldn’t actually consider this to be proof though, since that kind of thing feels more like guessing or assumption, tbh. It’s an okay idea, but they still wouldn’t hand out moderation actions over this.

I was actually referring to the ChatGPT brand as a whole, paid or otherwise. The developers could also tweak ChatGPT-3 to make it sound more realistic later on, even without user data.

It’s honestly a lot easier to just claim that someone’s using ChatGPT than to say that some other rule is being broken.

That’s the kind of thing moderators can make mistakes on as well if they ever had to decide whether someone’s using AI or not, so many flag spammers would probably just go that route.

From what I can tell, you’ve only been on the DevForum for less than 3 months, so I’ll just give you a quick breakdown of what DevForum moderation is like nowadays.

Basically, Roblox is like Valve and TF2 when it comes to the DevForum: they spend years on forum updates that should be very simple and easy to accomplish.

On top of that, the moderation system is very flawed, to the point where only 1–2 moderators are shown online at a time, and they tend to be active only for short periods.

Sometimes the DevForum can even be completely unmoderated for a few hours, especially on nights or weekends, since the forum staff are American, unlike Roblox moderation, which is outsourced worldwide.

That’s pretty much why banning ChatGPT isn’t feasible: they simply don’t have the time or resources to enforce a new rule like that yet.

And even if they tried to do something like this in their current state, they would probably just make constant mistakes, and innocent users would end up being incorrectly punished.

I would just continue flagging ChatGPT posts the way it’s currently being done then, because what you’re asking for just isn’t something they’re capable of doing.

It’s depressing for sure, but a lot of people on here already know how Roblox is when it comes to DevForum updates. So maybe just lower your expectations a little bit.

2 Likes

“What I’m asking for” is for them to add a rule to the official forum rules. Obviously that isn’t very hard, and I agree strictly moderating the rule would be difficult/not feasible.

I don’t think it matters if the moderators moderate accounts, I just think adding an official rule would discourage people from spamming ChatGPT posts.

The mods just look through the flagged posts and quickly choose a moderation action (they lose accuracy, but it’s much more efficient). It doesn’t take very long.

Still a problem though, especially when one user is just completely spamming the forums with garbage while the mods are offline.

I’m not saying they should take action against accounts for this, I’m just saying they should add it to the rules so it can be flagged like every other rule.

(Currently people spamming ChatGPT answers seem to have good intentions, but don’t understand that they aren’t being helpful. Making this an official rule makes it clear they shouldn’t.)

This is one of my accounts, I’ve been on the forums for about 3 years now (definitely not as long as you though).

tl;dr:

  • I think they should add “ChatGPT replies aren’t allowed in #help-and-feedback” to the official rules
  • I think making this official would greatly reduce the number of people posting ChatGPT answers
  • I don’t think it’s necessary or very feasible for the moderators to strictly enforce this rule
  • I’m definitely not saying there should be any sort of “update” for this. Just a reply to the official rules thread

They are tweaking it to fix specific things. They’ve removed some exploits and obvious problems that made it look bad. It used to not be able to do 1+1 (which was very bad PR), so they very explicitly put that into the AI. A lot of jailbreak prompts/methods were also patched, though jailbreaking is still pretty easy.

The improvements in how it sounds will probably be small, though. The next AI will have way more parameters, so it doesn’t make much sense to train a lot more data into their current, smaller model. Instead, it would be better to train the larger model with more data later and then scale it down if they want a smaller version.

I totally agree they shouldn’t hand out moderation actions: it would be mildly time-consuming to check each user. It really isn’t that hard to tell if a response is from ChatGPT, though; most people spamming haven’t created a prompt that properly replicates the way real people write on the forums, so it’s pretty darn easy to tell.

I think updating the forums is pretty low priority. The forums are just built on Discourse, so they probably don’t have people who are very experienced at customizing it.

I think leaving developers to decide what is ChatGPT and what isn’t is fine, at least for now. It’s pretty obvious currently.

I agree they shouldn’t make moderation decisions unless it’s really obvious (e.g. one user posts ten 4-paragraph replies to help and feedback that are all nonsensical garbage in under 5 minutes).

Even then, though, I don’t think moderating is super important. Forum users generally just follow rules, and for something as low-impact as this I don’t think it’s a big deal if they just don’t strictly enforce it after adding it.

1 Like

Sometimes bots do better than people too. Banning all wrong answers/resources is what you should aim for.

What needs to happen is that posts generated by ChatGPT should be marked as:

This post includes AI generated content

AI-generated posts are fine, but they need to be marked as AI-generated.

If your entire purpose on the forum is to post AI-generated posts and code, then you don’t deserve to be on the forum. These posts have been proven time and time again to be unhelpful, and they don’t belong on a forum where people expect answers from real people.

If there was a label that said a post was generated by AI, it would pretty much be pointless, because the answer may not work anyway for people who seek help in #help-and-feedback:scripting-support. What would be the point of a label saying “This post was made by AI” when you could literally use the AI yourself for your own needs?

Also, people who make AI-generated posts in #development-discussion and other categories most likely want to farm likes, like the OP said.

AI-generated posts need to be banned because they just clutter the forum with replies.

3 Likes

Okay, I’ve changed my mind about allowing factually correct information generated by ChatGPT. I think outright banning it entirely is a reasonable solution, at least until it can improve enough to be reliably correct (as SOF did), so effectively indefinitely. I believe that, as it stands, AI has no place on this forum unless a developer is working on their own AI in Roblox, and I suppose that goes for any forum where complex information and concepts are being shared.

The number of AI-generated responses in the support categories is steadily increasing, and it’s genuinely becoming exasperating to read over people’s blatant misinformation and try to correct it before a novice developer uses it in their game, only for it to inevitably fail and give them an unintended, if not completely dysfunctional, result.

What’s even worse is when people put effort into articulating a long, detailed post that clearly explains the concept in question, only to be completely ignored while an AI-generated response gets accepted, even if it isn’t correct.

I noticed that ChatGPT uses a few specific sentence structures, so those could probably be used to help determine whether or not posts are AI-generated.

From a reply on the ban post on SOF,

They all look pretty much the same: perfect grammar, short sentences with exactly high-school English, pandering/conversational tone. Much of the time, even if they’re right, they’re answering a different question than posted by OP or there’s something pretty clearly “off”. All the code snippets have obnoxiously helpful comments in them. The answerer will generally not take the time to format inline code. They stick out like a sore thumb after you’ve seen 3 or 4.

Some common occurrences are things like:

  • To ____,
  • Very long, detailed responses for a basic task
  • Often ends in “Hope this helps!”
Funny enough, I asked it what some common words in its responses are, and it pointed out some I never noticed before.

All in all, I think a moderator occasionally falsely assuming a response is AI-generated is a much more favourable outcome than allowing often-incorrect responses by ChatGPT, especially since flagged and removed responses can often be appealed.

Again, and I can’t stress this enough: if people want AI responses, they can go to the site and generate them themselves. There is no need and no place on this forum to post AI responses.

2 Likes

It shouldn’t be banned. It gives code snippets, and its resources are based on GitHub. If a beginner has AI-generated code and doesn’t understand it, at least give them some help with it; it’s like those YouTube tutorials that teach people how to write code.

I have been eyeing this topic ever since it was published, so I might be biased. I’m just giving my own personal opinion.

ChatGPT is basically AI: it takes in human-written information and dynamically outputs it in response to whatever you write.


As @SOTR654 said, it is not a complete solution, just a help.
Pretty sure it’s also missing 2 years of data.

Some people like to use it to reply to multiple different topics to gain likes, fame, or whatever their main motivation is for using ChatGPT to reply to EVERYTHING.
That falls under spam and vandalism: basically pointless, mostly long blocks of code that do nothing. Most people don’t even test the code before posting it.

I’ll use a few points from here


and from here

(I’d rather show where my opinions originate from than rewrite them.)

In my opinion, instead of using impractical, outdated AI, it’s better to use your eyes, fingers, and brain to just Google whatever the topic is. Try to learn, and then teach the person who originally asked the question.

ChatGPT isn’t even only used for code. It’s used for replies as well.
For example:


Literally admits to using ChatGPT

Now look at how they really talk.



Note how the first screenshot I showed shows how many likes they got, which is probably one of the many reasons people post low-effort garbage generated by AI.

If we banned ChatGPT or any sort of AI, it’d benefit the DevForum a ton.
I’m pretty sure some of my posts have been replied to by people who use AI as well. I’d rather not call them out randomly because I’m not sure.

3 Likes

@DevRelationsTeam

This is still a huge issue with the Roblox Developer forums.

Low-quality AI-generated posts are becoming more common. At minimum, there should be a rule that AI-generated posts must be marked as such, and that low-quality, erroneous AI-generated code is against the rules.

2 Likes

So with the new announcement, we can be sure AI code won’t be banned from the forum, can’t we? Generative AI on Roblox: Our Vision for the Future of Creation

We can’t fight against the future.

The objective is not to totally ban AI-generated content.

What we need is:

  • Have a rule that all AI-generated posts carry a flair indicating that the post contains AI-generated content
  • Ban people who post invalid, unhelpful, or broken AI code and responses; also prohibit mass farming of low-quality AI content
  • Have a complete ban on AI content in #development-discussion and #updates