This is just AI. I mean, if it were so bad, why would Microsoft invest in it?
After a bit of observation, I’ve changed my take on this.
I support this. ChatGPT should be banned for answers and resources.
I’ve seen lots of “tutorials” created on the DevForum that were basically copied and pasted from the chat bot. Same with Scripting Support answers, or sometimes entire replies.
Not only is it evidence that you have no interest in helping others out, it’s just plain spam at this point.
I won’t say much since the others in this thread covered the points I wanted to make, but to the people saying that it could be useful in some ways:
There are more cons than pros to permitting ChatGPT. Not only does it create lots of spam, but copying and pasting Scripting Support answers is shameful. It shows you have no interest in helping others and that you just want to inflate the stats on your profile.
Here’s just one of many examples of members generating answers:
Literally a copy and paste.
It’s easy to tell the difference between a human-written answer and one generated by ChatGPT. Several reliable tools have been made to detect whether text was generated by ChatGPT or written by a human.
It would be a great decision to ban the chat bot. It’s going to get out of control eventually. We need to ban the bot before that happens.
Because of the amount of potential it has.
It’s not that simple.
Also, I can’t see Roblox ever buying ChatGPT.
My view is that AI responses should be done responsibly, and not to replace human interaction.
For example, if we’re discussing ChatGPT as the topic of the thread, and someone quotes a ChatGPT reply as context for their discussion, that should be allowed. In any other circumstance though, it isn’t appropriate.
The rule in my view should be:
- Is the AI response contextually relevant? For example, if people in Lounge / Dev Discussion are discussing the impact of AI on code completion, then it should be allowed.
- Is the response entirely generated by AI, without any human intervention? For example, many users use GitHub Copilot to enhance their coding, and in my view that’s different from writing post content on the forums. In any case, though, it is the author’s responsibility to ensure the accuracy of the code they’ve posted.
The difference between ChatGPT posting a reply and a human posting the reply is a big one.
If someone wanted an AI to respond, they could go to OpenAI and ask it themselves. The fact that people are posting here implies they’re looking for human assistance with a problem.
Likewise, ChatGPT has no understanding of Roblox beyond what it can scrape from documentation, much of which can be inaccurate or outdated. It is unable to solve many of the novel problems people might have, but it will always respond with a confidence that isn’t representative of that fact. Essentially, ChatGPT isn’t just often wrong, it’s also always confident that it’s right. This is behaviour we don’t accept from people, let alone an AI.
Just a little add-on:
People like the one shown above have no interest in helping others and are instead just trying to farm solution counts on their profile.
I’ve edited my reply to include this.
Calling me out is rude and against the TOS. My goal is to help the community. What is your goal with banning the chatbot?
Chatbot answers can be extremely helpful.
I have 3 solutions that I used the chatbot to help generate code for:
If you have problems with me or how I have been helping the community, message me directly. Creating a thread because you don’t understand the future of coding is wrong.
wait actually? AI is getting advanced enough to write code in Luau??
Not all of the code it writes in Luau is accurate or even functional.
No, I’m not kidding, that actually worked perfectly. I had to convert it to be server-sided to work with my admin script, but overall it worked nicely.
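(If anyone is curious what that conversion looks like in practice, here’s a rough sketch, not my actual script. The RemoteEvent name and the admin check are placeholders for illustration; the idea is that the client only fires a request and a server Script validates and applies it.)

```lua
-- LocalScript (e.g. in StarterPlayerScripts): only requests the change
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local setSpeedRequest = ReplicatedStorage:WaitForChild("SetSpeedRequest") -- RemoteEvent

setSpeedRequest:FireServer(32) -- ask the server to change our walk speed

-- Script (in ServerScriptService): validates the request, then applies it
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local setSpeedRequest = ReplicatedStorage:WaitForChild("SetSpeedRequest")

setSpeedRequest.OnServerEvent:Connect(function(player, speed)
	-- placeholder permission check; a real admin script keeps its own list
	if player.Name ~= "SomeAdminName" then
		return
	end

	local character = player.Character
	local humanoid = character and character:FindFirstChildOfClass("Humanoid")
	if humanoid and typeof(speed) == "number" then
		humanoid.WalkSpeed = math.clamp(speed, 0, 100)
	end
end)
```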
Yes, however, as I stated, not everything it writes will be perfect.
Correct, yes, but “helped generate code” is a key point here; it’s easy to play with some errors and fix the code so it can be used correctly. Luau isn’t that much different from regular Lua.
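For instance (a made-up example, not one of my actual solutions), a generated snippet might reference a property that doesn’t exist, and the fix takes a minute:

```lua
-- What a chatbot might produce (errors, because Player has no Health property):
-- game.Players.PlayerAdded:Connect(function(player)
--     player.Health = 100
-- end)

-- After a quick fix: Health lives on the character's Humanoid.
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
	player.CharacterAdded:Connect(function(character)
		local humanoid = character:WaitForChild("Humanoid")
		humanoid.Health = humanoid.MaxHealth
	end)
end)
```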
I mean at that point, you should write it yourself
Not always. I’d rather fix errors than write code. If you have experience, then it’s easy to just chip off a few errors.
Most coders have a repository of code snippets they use to do things. Trying to rewrite code over and over again is inefficient. In fact, I myself have over 30 different code snippets that I adapt to whatever I’m working on.
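To illustrate what I mean (a generic example, not one of my actual snippets), a reusable snippet is usually just a small parameterised function you drop into a project and tweak:

```lua
-- Generic reusable snippet: a debounce wrapper that can be dropped into
-- any project and adapted (cooldown length, what it wraps, and so on).
local function debounce(cooldown, callback)
	local busy = false
	return function(...)
		if busy then
			return
		end
		busy = true
		callback(...)
		task.delay(cooldown, function()
			busy = false
		end)
	end
end

-- Adapted for whatever the current project needs, e.g. a touch handler:
local part = workspace:WaitForChild("Part")
part.Touched:Connect(debounce(2, function(otherPart)
	print(otherPart.Name .. " triggered the part")
end))
```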
If you are rewriting code that you wrote in the past, you will be slow. Time is money, my friend, and anything you can do to speed up the process of creating code for your customer is what really matters. Do you think the customer cares whether you spent 1 hour coding something using a chatbot versus 20 hours coding the same thing without it?
I guarantee you the customer only cares about the speed at which you accomplish the task. Chatbots are the future of coding. The quicker you get on board and figure out how to use them by writing better inquiries, the more successful you will become.
What does reusing code have to do with ChatGPT though? You can just save your code somewhere and copy + paste it somewhere else later.
Then the customer can just use ChatGPT instead? You’ve pretty much just said that the bot can do the same thing as you.
Not quite. Chat bots will just become more widely used since you really don’t have to pay for them.
The point of the OP is that using ChatGPT for replies here doesn’t make sense, since you’re not really putting in effort or doing anything for your posts (they’re just AI generated).
Someone can just use ChatGPT themselves if they truly wanted to see an AI-generated response.
You assume that everyone who is looking for help knows about chatbots and knows how to create inquiries that will provide them the correct answer.
As has been pointed out, the chatbot is a very new technology and a lot of people don’t fully understand how to get it to work. If a person who is not familiar with Luau or coding tries a simple query, they will find that 9 times out of 10 the chatbot doesn’t give them what they want.
On the other hand, if I convert what they want into a query the chatbot can understand better, I can produce quality code that (so far) solves the problem more than 80% of the time.
Again, if I come to these forums it’s because I have a problem that needs solving. Do most coders care if the answer is AI generated, as long as that answer solves their problem?
Are you trying to talk yourself into believing that because you made a robot write something for you, you made it?
You solved nothing, you merely posted what an AI told you (likely without even verifying that the answer was correct).
There’s giving people bad advice by accident because you genuinely didn’t know any better, and then there’s outright negligence from copy-pasting whatever a computer tells you.
You shouldn’t be proud that you’ve spouted so much misinformation, especially misinformation that you didn’t even create.
The difference between you and them:
They tried. Like actually tried. A human was behind that answer. Not a bot.
Unlike them, you gave a solution, an incorrect one at that, with no effort. You didn’t even bother to check if it was right. Giving incorrect information is wrong on so many levels. If you must use the bot, make sure you aren’t providing outdated or incorrect information.
Honestly, I’m fine with ChatGPT for answers as long as it’s not creating unhelpful feedback. If you are checking your answer or adding a disclaimer that it was made by AI (and you’re not creating future problems, for example “fixes” that fix the old problem but make a new one), sure, go ahead. I don’t care. The problem is lazy people saying “oh, this person needs help, welp, time to go to ChatGPT”. Spoiler alert: it’s not helping.
What I’m noticing here is that many people are demanding that ChatGPT responses be banned, yet they’re not actually providing any solutions for how to make that a reality.
…Like, how exactly are you supposed to tell the difference between a person and A.I in terms of typing answers on an online forum?
Because literally anyone that uses correct grammar and proper punctuation can easily sound like an A.I generated response as well, which is exactly why these shouldn’t be moderated.
This kind of thing can be very abusable once you allow faulty A.I detectors to carry out moderation actions against users, meaning that anyone who uses the DevForum would be put at risk.
I understand being annoyed at ChatGPT responses, but Roblox just isn’t responsible enough to handle an issue like this yet, since they can’t even moderate the DevForum properly on weekends or nights in their current state.
So the best thing you can do is just flag any answers that either make no sense or are just wrong entirely, because more faulty bot moderation is the last thing we need right now.