Ban ChatGPT for answers and resources

I didn’t know it wasn’t reliable and wasn’t planning on attacking users with it. I’ll edit it out of my reply.

ChatGPT answers really should be banned. I’ve come across multiple people with zero experience copy-pasting answers from ChatGPT. They look like legitimate answers but are completely bogus.

Example (there are multiple I’ve found, this isn’t even the worst):

ChatGPT made up almost all of the details about the API above. It says GetPartBoundsInRadius checks a bounding box (not even remotely true). It says GetPartsInRegion3 should be used to check for collisions in a sphere (what?!). It says the size of the bounding box should be adjusted (the OP wants spherical collisions, and the spheres have a constant size?), and the gravity too (also constant?).
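For reference, here is a minimal sketch of what a spherical part query can actually look like with the current spatial query API. This is a sketch only; the filter target below is an illustrative placeholder, and the centre/radius values are made up.

```lua
-- Sketch: WorldRoot:GetPartBoundsInRadius(position, radius, params) returns
-- the parts overlapping a sphere of the given radius. There is no "bounding
-- box size" or "gravity" knob to adjust, contrary to what ChatGPT claimed.
local params = OverlapParams.new()
params.FilterType = Enum.RaycastFilterType.Exclude
-- Hypothetical filter target; swap in whatever parts you want ignored.
params.FilterDescendantsInstances = { workspace:FindFirstChild("Baseplate") }

local centre = Vector3.new(0, 10, 0) -- sphere centre (example values)
local radius = 5                     -- constant sphere size, as in the OP's case

local parts = workspace:GetPartBoundsInRadius(centre, radius, params)
for _, part in ipairs(parts) do
	print(part:GetFullName())
end
```

This runs only inside the Roblox engine, so treat it as a reading aid rather than a drop-in script.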

ChatGPT spits out incorrect gibberish when it comes to writing code explanations. It’s designed to replicate language patterns, not form ideas. It’s fine at writing simple code (only a few typos and some nonexistent API, but usually the right idea), but it writes code tutorials from no information. It doesn’t actually understand how code, API, or Roblox’s engine works; it is just able to write as if it does.

People can get ChatGPT answers themselves if they want them; there is no reason for someone to sit and copy-paste answers that mislead people into thinking they are credible.

ChatGPT answers should either be against the rules or required to list their source. There is no reason ChatGPT answers shouldn’t be required to inform people of their origin.

9 Likes

It is a problem though. People post things from ChatGPT that are completely misleading and false. It’s a waste of everyone’s time to let someone take 4 seconds to post a 5-paragraph answer that’s completely wrong and confusing for people who aren’t knowledgeable.

The answers really aren’t a help. ChatGPT can write code fine, but it can’t explain things. For example, ChatGPT just makes up API. Like, all the time. It even makes links that go to the docs for API that doesn’t exist. Code answers are helpful sometimes, but text answers virtually never are.

It’s painfully obvious when people use ChatGPT:

  • The answers are way too long and have weird grammatical structures that are complicated, obtuse, not concise, and sometimes wrong.
  • Over half of the reply is completely false in ways no human would actually produce: fake API, nonexistent links, sentences that make no logical sense, or taking API the OP mentions and claiming it does something it clearly doesn’t (for example, that setting velocity applies a force for a set number of frames).

ChatGPT answers are almost never solutions. Around 25% of my posts in Scripting Support are solutions; less than 2.8% of ChatGPT answers are. People just spam them.

People can post over 30 ChatGPT answers in an hour. It takes far more time to moderate these posts than it does to make them. That is a problem.

The problem is that ChatGPT is designed only to make answers that “look” right. This means the answers are usually wrong but they look very plausible. For example, they’ll use functions that don’t exist, but people can’t really know a function doesn’t exist without checking.

Because it doesn’t. People post dozens of copy-pasted replies from ChatGPT, and from what I’ve seen of accounts that only post ChatGPT output, virtually all of the replies are worse than useless: completely false and made up, but appearing legitimate.

This. ChatGPT is fine at generating code (plus or minus some made-up API, typos, and syntax errors). When it generates entire replies, it mainly just makes them look correct and BSs the rest. Usually about 70-80% of the content of the reply is wrong or even completely made up.

At the very least people should be required to note when their entire reply is from ChatGPT.

5 Likes

I think you’re misunderstanding my meaning. I’m not saying it wouldn’t be an issue if it were happening, but I am not a fan of pre-emptive changes of moderation policies when it hasn’t been an issue on this forum in particular.

If and when a not insignificant number of people start doing this here, though, then I agree it should be banned.

2 Likes

I stopped using the forum years ago, spamming AI responses is just another reason to not use it… please stop.

2 Likes

That doesn’t justify using AI to solve users’ problems, especially in a category like Scripting Support. People who create topics in that category hope that a more experienced scripter will know what is wrong with their code and point it out.

If you would rather use AI to fix users’ problems than actually fix them yourself, then you’re being lazy and shouldn’t be helping people with their code. @ValiantWind has a good point:

Just because you can doesn’t mean you should

3 Likes

TL;DR: The short answer is yes, ban it; people don’t want ChatGPT answers on the DevForum. If people wanted an answer from an AI, they would have asked the AI themselves instead of asking the DevForum.

Please, there needs to be a ban on ChatGPT responses. A bunch of new users are flooding #help-and-feedback:scripting-support with these AI-assisted answers. The issue is that in most of these cases the OP knows more about the question they are asking than the people providing these answers, which leaves a bunch of multi-paragraph responses that at first glance seem useful, but where most of the information is outdated junk or word fluff. I don’t care that people use ChatGPT; the issue is that developers with seemingly no knowledge of Luau or Roblox’s engine API are now trying to assist newer programmers through whatever ChatGPT provides them.

And for some reason these people justify it because they have gotten a handful of likes and a few solved problems. (Trust me, the 4 solutions and 20 likes mean basically nothing to me; look at my profile if you’re here just to flex stats.) But it’s an awful metric to be using, since inexperienced programmers might like your message just because it looks right.

If you want to help people in #help-and-feedback:scripting-support, that’s great! But please learn Luau and Roblox’s engine. Lua is like ice cream: it comes in a lot of flavors. Every application that uses Lua ends up making its own flavor with its own additions, such as built-in libraries, basically creating a custom version of Lua, but they leave out the details and just say they use Lua. They all come from the same ice cream base, but you would not say chocolate and vanilla ice cream are the same. The issue is that ChatGPT is making vanilla ice cream and trying to sell it as chocolate.

7 Likes

I agree. I personally detest seeing AI-generated responses, mostly because they are often incorrect, and I believe this is what most people here are saying as well. If the person generating AI responses can verify the information is correct, that’s fine, but whoever posts it should know that their information is correct and that their code samples give the correct result before posting.

So I guess my suggestion is not to ban AI responses entirely, but to disallow incorrect information. However, as people have raised earlier in this thread, that would put more of a burden on moderators to judge whether the information is correct, so I think outright banning AI responses is an appropriate solution. Doing this would also clear up any confusion people might have regarding the responses.

If people want AI responses, they can go to ChatGPT and generate them themselves.

5 Likes

The reason I want to ban ChatGPT for answers and complete resources is that some people actually use ChatGPT to generate responses instead of putting in effort, and then show no remorse about it, unlike other forum users.

Yes, I get that it’s hard to tell if a reply is AI-generated when the AI gives correct info. You can try brute-forcing prompts and seeing whether it produces the same response.

3 Likes

I had someone respond to one of my posts asking if an idea I had was possible, and someone else called them out on it; apparently there were a lot of errors, or just messy code. I couldn’t even tell the difference because I’m still a beginner at coding.

I agree with this post, because people could paste an AI answer in a reply, and someone newer might not know any better and try to use it. In my opinion AI-generated content is bad; sure, it may help, but it can make people lazy and reliant.

I didn’t even know about ChatGPT until just a few days ago, from a post here.

1 Like

Helpful usage of ChatGPT should be allowed but it should be marked that it’s made by AI.
Unhelpful or otherwise problematic ChatGPT posts should be banned.

2 Likes

I’m changing my view on this - I’ve flagged two obvious (and unhelpful) ChatGPT responses in the last day.

From what I’ve experienced, this is becoming a not infrequent occurrence and is both an abuse of ChatGPT and of this forum’s standards. Confidently written technobabble is not a substitute for human conversation, particularly when hundreds of people might come across a factually inaccurate or misleading response even after the thread has died down.

I believe there should be an announcement made prohibiting the use of these tools unless clearly contextualised as an AI answer, as others have suggested. It should be entirely prohibited from the Support or Announcement threads because they already have enough factual errors.

11 Likes

scripter costs money chatgpt is free

I don’t think that’s a good reason to use ChatGPT over scripters, even if it’s free, because scripters tend to know what they are doing (depending on their skill level). ChatGPT is imperfect and makes mistakes, so I wouldn’t use AI over scripters right now.

1 Like

The problem is that with time it’s only going to get more difficult to distinguish an AI response from a human one. This steps into dangerous territory: what if genuine human posts are mistaken for AI and their authors get punished?

I think we have to ask ourselves… why are people using AI to make posts in the first place?
What’s in it for them?

Maybe what we really need to do is make the DevForums less “competitive”.

Sure, I like that we can like someone’s post to show appreciation and that it notifies them (we should definitely keep this).

I believe we need to get rid of statistics on profiles such as:

  • Posts created
  • Likes received
  • Likes given
  • Solutions

These unintentionally create competition, meaning they encourage some users to do whatever is necessary to increase these statistics at any cost.

So how does AI play a role in this? AI responses are quick to generate with an increased chance of receiving a like or solution.

I believe having stats like these decreases post and solution quality overall by rewarding mass post creation.

The DevForums shouldn’t act like a game. We really shouldn’t be rewarding users with badges and climbing stats, as this just encourages undesirable behaviors and makes the platform less professional overall.

I’m not implying that all AI responses are because of stats, but we certainly should be questioning what systems might be encouraging it.

1 Like

This is why I made this post early. I knew it was gonna be a problem soon. Build the walls before they attack, not after.

And I agree. But if that “By A.I.” tag is added to a post, it shouldn’t be able to be marked as a solution unless it’s just pseudocode.


Very smart. But I haven’t seen a person look at those stats and think, “man, this person is good at solving problems.”

Until AI gets better at adding natural-sounding perplexity to its sentences, we’ll be able to tell.
Humans do not write as predictably. Full stop.

I think it’s important to pay attention to burstiness when someone posts something that has low perplexity.
Perplexity measures how well a language model can predict the next word in a sentence, while burstiness measures how much that predictability varies across the text. Burstiness is normally extremely low for AI-generated content.
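To make the two terms concrete, here is a rough sketch in plain Lua (no Roblox API). The token probabilities are made-up illustrative numbers, not real model output, and this is a heavy simplification of how actual detectors work:

```lua
-- Perplexity: exp of the mean negative log-probability ("surprisal") a
-- language model assigns to each token. Burstiness, as described above,
-- is how much that surprisal varies, modelled here as its variance.
local function perplexityAndBurstiness(tokenProbs)
	local n = #tokenProbs
	local surprisals, total = {}, 0
	for i, p in ipairs(tokenProbs) do
		surprisals[i] = -math.log(p)
		total = total + surprisals[i]
	end
	local mean = total / n
	local variance = 0
	for _, s in ipairs(surprisals) do
		variance = variance + (s - mean) ^ 2
	end
	return math.exp(mean), variance / n -- perplexity, burstiness
end

-- Uniformly predictable tokens (AI-like): low perplexity, low burstiness.
local aiPpl, aiBurst = perplexityAndBurstiness({ 0.9, 0.85, 0.9, 0.88 })
-- Spiky tokens (human-like): occasional very surprising words, so high burstiness.
local humanPpl, humanBurst = perplexityAndBurstiness({ 0.9, 0.2, 0.85, 0.3 })
```

Under these toy numbers the human-like sequence ends up with both higher perplexity and much higher burstiness, which matches the claim that flat, low-variance text is the AI tell.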

That being said, humans are the best detectors of things like this, so there isn’t any need for those.

That AI detector is only unreliable when given a small amount of text. Humans don’t write predictively, and all those detectors do is measure how predictable the next word in a sentence is.

But I agree, we shouldn’t use them, the best detector of stuff like that is just us. We can just tell, by reading something, if it was AI generated.

The code doesn’t work properly:
(screenshot of the failing code omitted)

Besides, even if you removed that line of code, it really doesn’t fix the OP’s issue at all and adds nothing of value.

Oh, I didn’t even realize that, but otherwise it would do what the OP wanted, to an extent.
They just wanted to make it so you can’t control your character and first person doesn’t rotate it, and Humanoid.AutoRotate does that.
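For anyone finding this thread later, a minimal sketch of that approach (the Humanoid lookup is the standard LocalScript pattern; adjust to your own character setup, and note the WalkSpeed line is just one assumed way to remove movement control):

```lua
-- Sketch: stop the character from auto-facing its movement direction.
local Players = game:GetService("Players")
local player = Players.LocalPlayer
local character = player.Character or player.CharacterAdded:Wait()
local humanoid = character:WaitForChild("Humanoid")

humanoid.AutoRotate = false -- first person / movement no longer rotates the character
humanoid.WalkSpeed = 0      -- optional: freeze movement entirely
```

This also runs only inside the Roblox engine (as a LocalScript), so treat it as a sketch rather than tested code.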