It’s a 50-50 call, but I agree. People who use ChatGPT to answer questions usually don’t know the content in the first place, and it has also been proven inaccurate plenty of times.
I 100% agree. It gets annoying when someone just copies and pastes answers from ChatGPT without even checking what was generated (most of the time). It’s unreliable, and something needs to happen.
Late response
As I said, it’s just an aid; it’s the user’s responsibility to look up what they don’t know. After all, it’s on the internet lol
It’s an AI trying to explain something; ask a person if you don’t like to read so much.
AI learns from the internet (in general). Of course it needs more training; we are not yet at a point where it is perfect, but it is a good advance.
Again, it is a HELP, not a solution. The AI learns, and that is why you can report an answer if it is incorrect or tell the AI that the function does not exist.
So, just because the AI is learning, it should be banned? What is the difference between this AI and a person who is learning? It is practically the same: both can give bad answers based on their knowledge.
It is absurd to want to ban an AI for the misuse of its users, and again, the page itself warns you about bad answers. AI should continue to be trained, not banned because it currently gives bad answers; it is inevitably the future that awaits us.
If you can find a spammed ChatGPT programming answer that is helpful then I’d be surprised.
The forums rightfully don’t block people for posting incorrect information, because they assume the poster at least sort of tried to post something correct. ChatGPT is wrong virtually all the time because the probability of an error compounds with every statement it generates.
That’s exactly what the forums are for: asking a person. They aren’t for people to spam with blatantly incorrect information. It’s an AI trying to explain something it has zero understanding of.
This also isn’t true. ChatGPT 3 (the one people are using) does not learn from the data users give it.
ChatGPT 4 will be trained with the data from this beta phase, but that will almost certainly be paid. Children spamming ChatGPT answers won’t be paying for ChatGPT 4, so this isn’t very relevant.
- The AI isn’t actively learning. It does get specific updates to fix obvious problems.
- AI learning is completely different from the learning of regular people (a source). This is a common way to explain AI/neural networks in very simple terms, but it shouldn’t be taken too far (because it’s simply not true).
- To the best of its knowledge:
To answer this, the difference between a person’s answer and a ChatGPT answer:
- People who don’t know the answer say so; they don’t state false information with 100% confidence. The OP in help threads usually doesn’t know very much, so they often mistake that confidence for knowledge. It’s lying to people in multiple ways.
- ChatGPT answers are wrong (at least for coding questions) about 95% of the time (based on data), but they can be produced rapidly. People spam the forums with ChatGPT answers that are completely incorrect and waste people’s time.
If you understand a problem, it’s faster to just answer it yourself than to either keep prompting ChatGPT until it’s correct or edit all of the false information out of a ChatGPT answer.
I don’t think you understand what the topic is about: it’s suggesting that ChatGPT should be banned from the question sections of the forum. Anyone knowledgeable enough to answer the question wouldn’t use ChatGPT, so the only people using ChatGPT are the people not knowledgeable enough to correct its answers (which would be a waste of their time anyway, because the answers are usually fundamentally wrong).
People can still use ChatGPT if they want help from it, but people shouldn’t spam the forums with nonsensical information from ChatGPT and force it onto other people. If the person asking for help wanted help from an AI, they wouldn’t go to the developer forum; they would go directly to the AI.
This is just AI. I mean, if it were so bad, why would Microsoft invest in it?
After a bit of observation, I’ve changed my take on this
I support this. ChatGPT should be banned from being used for answers and resources.
I’ve seen lots of “tutorials” created on the DevForum that were basically copied and pasted from the chat bot. Same with scripting support answers, or sometimes entire replies.
Not only is it evidence that you have no interest in helping others out, it’s just plain spam at this point.
I won’t say much since the others in this thread covered the points I wanted to make, but to the people saying that it could be useful in some ways:
There are more cons than pros to permitting ChatGPT. Not only does it create lots of spam, but copying and pasting scripting support answers is shameful. It shows you have no interest in helping others and that you just want to inflate the stats on your profile.
Here’s just one of many examples of members generating answers:
Literally a copy and paste.
It’s easy to tell the difference between a human-written answer and an answer generated by ChatGPT. Several reliable tools have been made to detect whether text was generated by ChatGPT or written by a human.
It would be a great decision to ban the chat bot. It’s going to get out of control eventually. We need to ban the bot before that happens.
Because of the amount of potential it has.
It’s not that simple.
Also, I can’t see Roblox ever buying ChatGPT.
My view is that AI responses should be used responsibly, and not to replace human interaction.
For example, if we’re discussing ChatGPT as the topic of the thread, and someone quotes a ChatGPT reply as context for their discussion, that should be allowed. In any other circumstance though, it isn’t appropriate.
The rule in my view should be:
- Is an AI response contextually relevant? For example, if people in Lounge / Dev Discussion are discussing the impact of AI on code completion, then it should be allowed.
- Is the response entirely generated by a user without any human intervention? For example, many users use GitHub Copilot to enhance their coding, and in my view that’s different from writing post content on the forums. In any case, however, it is the author’s responsibility to ensure the accuracy of the code they’ve posted.
The difference between ChatGPT posting a reply and a human posting the reply is a big one.
If someone wanted an AI to respond, they could go to OpenAI and ask it themselves. The fact that people are posting here implies they’re looking for human assistance with a problem.
Likewise, ChatGPT has no understanding of Roblox beyond what it can scrape from documentation, much of which can be inaccurate or outdated. It is unable to solve many of the novel problems people might have, but it will always respond with a confidence that isn’t representative of that fact. Essentially, ChatGPT isn’t just often wrong, it’s also always confident that it’s right. This is behaviour we don’t accept from people, let alone an AI.
Just a little add on:
People like the one shown in the example above have no interest in helping others and are instead just trying to farm solution counts on their profile.
I’ve edited my reply to include this.
Calling me out is rude and against the TOS. My goal is to help the community. What is your goal with banning the chatbot?
Chatbot answers can be extremely helpful.
I have 3 solutions that I used the chatbot to help generate code for:
If you have problems with me or how I have been helping the community, message me directly. Creating a thread because you don’t understand the future of coding is wrong.
wait actually? AI is getting advanced enough to write code in Luau??
Not all of the code it writes in Luau is accurate or even functional.
No, I’m not kidding, that actually worked perfectly. I had to convert it to be server-sided to work with my admin script, but overall it worked nicely.
Yes, however, as I stated, not everything it writes will be perfect.
Correct, yes, but “helped generate code” is the key point here; it’s easy to play with the errors and fix the code so it can be used correctly. Luau isn’t that much different from regular Lua.
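For instance, here’s a minimal hypothetical sketch of the kind of cleanup I mean; the “before” code is just an assumed example of what the bot might produce, not something it actually generated:

```lua
-- What ChatGPT might hand back (works, but uses older Roblox patterns):
local part = Instance.new("Part", workspace) -- parent-in-constructor is discouraged
wait(1)                                      -- global wait() is deprecated
part.BrickColor = BrickColor.new("Bright red")

-- After a quick manual pass for current Luau practice:
local fixedPart = Instance.new("Part")
fixedPart.BrickColor = BrickColor.new("Bright red")
fixedPart.Parent = workspace -- set Parent last so property changes happen before it's in the world
task.wait(1)                 -- task.wait() replaces the deprecated global wait()
```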
I mean at that point, you should write it yourself
Not always; I’d rather fix errors than write code. If you have experience, then it’s easy to just chip off a few errors.
Most coders have a repository of code snippets they use to do things. Trying to rewrite code over and over again is inefficient. In fact, I myself have over 30 different code snippets that I adapt to whatever I’m working on.
If you are rewriting code that you wrote in the past, you will be slow. Time is money, my friend, and anything you can do to speed up the process of creating code for your customer is all that really matters. Do you think the customer cares whether you spent 1 hour coding something with a chatbot vs. 20 hours coding the same thing without one?
I guarantee you the customer only cares about the speed at which you accomplish the task. Chatbots are the future of coding. The quicker you get on board and figure out how to use them by crafting better inquiries, the more successful you will become.
What does reusing code have to do with ChatGPT though? You can just save your code somewhere and copy + paste it somewhere else later.
Then the customer can just use ChatGPT instead? You’ve pretty much just said that the bot can do the same thing as you.
Not quite; chatbots will just become more widely used since you really don’t have to pay for them.
The point of the OP is that using ChatGPT for replies here doesn’t make sense, since you’re not really putting in effort or doing anything for your posts (they’re just AI-generated).
Someone can just use ChatGPT themselves if they truly wanted to see an AI-generated response.
You assume that everyone who is looking for help knows about chatbots and knows how to create inquiries that will provide them the correct answer.
As has been pointed out, the chatbot is a very new technology and a lot of people don’t fully understand how to get it to work. If a person who is not familiar with Luau or coding tries a simple query, they will find that 9 times out of 10 the chatbot doesn’t give them what they want.
On the other hand, if I convert what they want into a query the chatbot can understand better, I can produce quality code that (so far) solves the problem more than 80% of the time.
Again, if I come to these forums it’s because I have a problem that needs solving. Do most coders care whether the answer is AI-generated if it solves their problem?