Ban ChatGPT for answers and resources

I think we’d still need a rule requiring a flair for all AI-generated content, so we could more easily differentiate between AI and non-AI content. Something like this:

This post includes machine generated content

Also, in #development-discussion and #updates (except #updates:community), AI-generated content should not be allowed, because these categories by design ask for opinions, which AI cannot have.


In support categories, users want best-practice solutions – not merely opinions!

It is totally fine for folks to use AI tools as writing assistants to save time or to get something across better if they are not as proficient in English. Folks just need to make sure that if they do go down this path, they have enough expertise to validate the content of the post and that it contributes significant new knowledge to the thread, to avoid posting misinformation.


I see two things: you’re saying to ban it because of the code, or because of people asking questions. But I have done some testing with it, and most times, if asked the wrong way, it doesn’t answer exactly the question you want it to, and the code does break easily. I also suggest looking into what Roblox has posted about generative AI on Roblox (“Our Vision for the Future of Creation”, posted Feb 17th by the Chief Technology Officer). I do see it both ways though, and with that suggestion, please also go research Microsoft and Bing, because they are dumping money into making their own AI that is supposed to be exactly like ChatGPT if not better, which from what I’ve read would be using the Bing engine.

Sure, that’s understandable.
However, the problem is that in #development-discussion and #updates:announcements people are supposed to share their own opinions, not just relay factual information.
What if someone asks an AI “What’s your opinion on X”, “Why is X better than Y”, “Should Roblox do X, Y, or Z”, “How have I done X”, or “What did you script today”, and posts the AI-generated response there?

Another thing: a rule requiring a flair to mark posts made with AI would be very useful.
I see no harm in requiring a flair (which should be easy to add to your post with a single button).

But yes I do agree that allowing AI content on other places is totally fine if it’s marked as such.

I already responded to these concerns above: there is no change in the fact that users must not detract significantly from topics when posting.

We will not be marking posts with labels for how they were written. This is not future-proof to determine (it’s already hard right now, and will be impossible in the future).

The bottom line is that users must not detract from topics with their posts. How the post was written is not relevant in assessing whether a post contributes to a topic or whether it contains misinformation. Users are responsible for their own posts.


But the problem is that most AI-generated content doesn’t exactly break any of the rules, yet it’s still an issue due to its quantity.
Most AI-generated posts contain a lot of filler text; i.e., the AI is too verbose.
Yet the content in them could be explained in much simpler words.
This is not an issue on its own, but when a lot of this content exists, it starts becoming an issue.

The main problem is not that people are using AI to get their message across or enhance it. The problem isn’t that people use AI as an assistive tool.
The problem here is that the AI is the one behind the message. The people posting don’t really care about the idea conveyed within; they just want to post-farm.
Sure, the messages themselves aren’t really bad, but they aren’t good either. They just cycle the same stuff over and over again.

The real problem here is post-farming. There is an incentive for members of the DevForum to post stuff and rank up. We either need to kill the incentive to rank up, or something else needs to give.
Something needs to change, because this is a huge issue. It’s not some theoretical issue; it’s a practical one.
It’s very practical because it’s faced on the DevForum at a large scale. Just because it’s not against the rules now doesn’t mean the rules shouldn’t be changed.

Also, I think you misunderstood my earlier point about #updates:announcements and #development-discussion. They are not support categories.
The support category is #help-and-feedback, not #updates:announcements or #development-discussion.

In #updates:announcements and #development-discussion people are not supposed to give help. They are supposed to give their opinions on certain things.
And herein lies the problem: people post AI-generated messages there.

A lot of people post AI-generated responses here for the sole purpose of post-farming.
The AI usually gives very verbose conclusions, which on the surface sound smart but in reality just convey very basic information that is not necessary to the discussion and does not enhance it in any way.

The problem doesn’t lie with individual messages; it lies in the collective of many AI-generated messages by different people.

I’m not saying my solution is the right one (probably far from it :man_shrugging:), but we need something, because the current system does not adequately tackle the issue. We need solutions.

Just because a rule is not future-proof to determine doesn’t mean it shouldn’t be added.
Times and the environment always change, and the rules governing the forum should change with them.
Both Roblox’s TOS and Roblox’s Community Standards have had rules added, deleted, and modified in response to the situations and issues faced by the platform. I really don’t understand why the DevForum couldn’t be the same.

Also, the notion that a rule shouldn’t exist because breaking it is hard to detect is a bit ironic, because we already have such a rule.
It’s rule 17: Claiming others’ work as your own.
In fact, this rule has been assigned the highest severity of punishment, which is termination.
The only rule with an equal severity of punishment is repeatedly posting deliberately NSFW content.

And no, determining plagiarism isn’t (in most cases) easier than determining AI-generated content.
Sure, for images, reverse image search exists.
But for other types of content, detecting plagiarism is really quite impossible (unless the original creator appears and proves that it’s theirs, or someone who’s seen the original content does the same).

And if determining rule breakage becomes impossible in the future, so what? It doesn’t matter.
We don’t live in the future; we live in the now. And right now we can have features to tackle it.

Also, just because rule breakage is hard to detect doesn’t mean we shouldn’t have rules against the behavior.
The mere existence of a rule deters people from breaking it. The clearer the rules are, the less likely people are to break them. This is basic human psychology.

I’m not advocating for a ban of AI content. I’m proposing solutions to the issues we are facing because of AI-generated content.

TL;DR: My solutions are:

Again, I don’t know if my solutions are the right ones. But something needs to change, because day by day the issue grows.


To be clear: there’s no “ranking up” on this forum. We are working on flattening the trust level differences down for the long-term, should there be any misconception about this with the community.

Unfortunately this is not the case. It is possible to determine plagiarism with extremely high accuracy because the original author reaches out to let us know that they did not consent to the content being posted, and confirms that they are the original author. There’s no analogue for AI-generated content outside of sparse tooling to “predict” whether something was written by AI or not.

Please private message me links of content that you think shouldn’t be on the forum that is actively hindering your ability to use the forum. I also asked some other people in the thread to do this and we landed on the conclusion that there wasn’t really a problem here. Happy to have a look at your specific examples as well.


Should suffice for a while. People aren’t making ChatGPT rephrase yet; they hop straight into post-farming by putting the topic’s name in as the ChatGPT prompt.

Perhaps the optimal way to use it is to seek guidance from it to make one’s response more comprehensible, but only if the poster understands it and knows that it is correct.

Roblox needs to implement some form of limitation for AI answers. People reply with non-working, AI generated code and then do not respond when users tell them it’s broken.

I think AI code is fine, but users should make it work before posting it to a topic instead of needlessly posting non-working code.


There is literally a Roblox admin in this thread who has already stated that nothing like this is ever happening.

I understand that AI responses are generally annoying to everyone, but we’re just gonna have to deal with it, since we already have our final answer about this issue from the staff team themselves.

All we can do now is just flag any answers that are either wrong or flawed and leave it at that.

That’s not entirely true. There is an incentive to get higher profile stats (number of likes, posts, etc.), which add credibility. It’s a de facto rank-up. There are a lot of people who post on the DevForum to boost their status (or at least feel like they do), and use AI to do this.
Especially, a lot of noobs want higher profile stats on the DevForum, and post-farming is a way to get them.

It’s understandable that specific rules shouldn’t be added. But something should be done to address the disparities.


Well, @Hooksmith, I suggest you go look at:
“People using ChatGPT to ‘help’ (its going chaotic, being popcorn)”
That’s the best example: if you go towards the end, it was only trolling because of ChatGPT, and the person who got banned (the reason the thread was created) kept giving out false information, which shouldn’t exist here. If it’s a human error, that’s normal, but because he kept spam-answering with wrong information, he filled the DevForum with spam. Isn’t that enough?

Generally speaking, and as mentioned before, this is a content discovery problem. We have some gaps to fill related to making it easy for you to find content you are interested in and that is actually helpful.

Currently, all topics and posts are sorted chronologically no matter what, which brings about these issues because we have a very broad community. This overarching problem is causing more issues than just the one you’re pointing out, namely, that it’s hard to find content that appeals to your current skill level or content posted by similar peers (same age group, same skill).

Again, we have rules already that posts must contribute to the topic. Even if it were practical to institute rules against AI content, the underlying problem you point out would not be resolved.

The product feedback on content discovery is well-known internally.

Could something still be done to address the aforementioned “rank-up issue”?

As stated multiple times throughout this thread, if someone is posting to a thread and is detracting from said thread, you can flag the post for breaking the rules. Report it as off-topic or use “Something Else” to provide more information.

I think you misunderstood my point. It does not address the root of the issue.

The root of the issue is that noobs have an incentive (or at least a psychological incentive) to rank up (gain likes, badges, views, etc.) on the DevForum.

The issue isn’t the rewards themselves; before AI, there was a natural barrier of entry to gaining them. Now those barriers are gone.

TL;DR: Noobs want to gain social rewards on the DevForum (likes, follows, badges, views, social status), and that’s the issue. We need to somehow address the root of the issue, not the symptoms/byproducts that the noobs produce to achieve this goal (not all of which count as rule breaking). The problem is that the DevForum becomes (for the noobs) an end in itself instead of a means to an end.


If the posts do not break our forum rules (i.e. are contributive to the topic), irrespective of the motivations of the poster, I am not sure if there is an issue here.

Please private message me specific examples of posts that were not taken down by moderation after flagging that you think fall into the category of posts you are talking about.


OK, I guess that’s a fine conclusion to the topic, as we probably aren’t going to find solutions.
My main problem is that these posts sit at the low end of acceptable (i.e., the minimum accepted usefulness: they look good but are largely meaningless). They do answer the question, but only barely (and would have been much better if written without AI).

Also, it’s still a bit questionable that people post AI-aggregated opinions in opinion topics (which by design should contain the poster’s own opinion, not aggregated text).

Anyways, I don’t think we’re going to find much common ground here, but thanks anyway for answering the question, and have a nice day :wink:


The thing is, you can get inaccurate information not only from ChatGPT but also from a human. Now, the odds of an AI error and a human error may differ, but if you just read over and check what the AI generates, then I don’t see anything wrong with it.

We adjusted to calculators for math(s) some time ago, and it’s time we adjust to new technology, like ChatGPT, for different everyday things.