This has not been resolved, and it has intensified noticeably within the last three days: adding a top Toolbox model (such as HD Admin) to a newly created experience now carries a very high chance of your account being moderated for ‘Sexual Content’ as soon as Publish is hit:
As soon as you hit Publish and visit the Roblox website, your account is banned for ‘Sexual Content’.
This automated moderation action creates a great deal of stress for the teams running these applications (HD Admin, Kohls, Adonis, etc.), as it’s leading large influencers to actively tell others not to install them out of (completely fair) fear of having their accounts banned:
If there’s code or items within our models that are triggering these automated bans, can you tell us what they are?
Can you set up stronger communication channels for top applications/plugins, as you do for game developers? Our applications run in the largest games on Roblox, regularly seeing over 500,000 CCU and 25 billion place visits, yet we still have no channel to communicate with Roblox staff when issues like this appear.
I actually published my game (to 7 different places, including the starting place) with HD Admin and TopbarPlus a few minutes before I found this post, and I didn’t get banned or warned, nor was my game deleted, which is strange. I’m not really sure why Roblox is doing this, but this isn’t good.
Edit:
I’ve realized that he was talking about a newly created experience, not an already existing experience that has had HD Admin for a year. But I’m not sure whether this only applies to newly created places.
I used HD Admin before and also didn’t get banned, but I found something out. Roblox seems to only take automated action if you aren’t “trustworthy”. For example, I tested this on my test account, which contains basically no items, and I was instantly warned. My guess is the same happens if you just create an alt account.
I attempted to appeal (like I mentioned in the previous topic), and I received this:
It’s also strange that this model (Adonis) got updated on the 24th (and its MainModule too). I think that HD Admin’s and Adonis’s previous versions contained something that warned/banned accounts. It’s the only explanation for these warnings/bans.
This may be flagged as “Sexual Content” as well (because it may flag it as racism): when you say the N-word in the chat and get banned, the reason is “Sexual Content” as well:
Hi again. I compared the number of bytes in the “Settings” ModuleScript in your place and the current (updated) Adonis module, and found that it did get updated. There’s only one byte of difference, but it’s suspicious anyway. The script has a lot of text, so the developer could add a bad word inside it at any time and then just remove it. Here’s the result:
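For what it’s worth, a size comparison like the one described above can be sketched as follows (a hypothetical helper, not part of any Roblox tooling; it assumes you’ve exported the two script sources as raw bytes first). Note that a difference in byte count alone doesn’t tell you *what* changed; finding where the contents first diverge does:

```python
def compare_sources(old: bytes, new: bytes):
    """Compare two script sources byte-for-byte.

    Returns (size_diff, first_diff_offset). first_diff_offset is None
    when the sources match up to the shorter length, i.e. the size
    difference is just trailing bytes rather than an edited word.
    """
    size_diff = abs(len(old) - len(new))
    # Walk both sources in lockstep and report the first mismatch.
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            return size_diff, i
    return size_diff, None
```

For example, `compare_sources(b"abc", b"abcd")` returns `(1, None)`: one extra trailing byte, no changed content, which is why a one-byte difference alone proves very little.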
… Do you know how much data a single byte can contain?
Whatever the issue is, it’s definitely something wrong with Roblox’s systems, and it’s becoming increasingly frustrating that they still have not resolved it and have barely addressed it while threads like this continue.
I wouldn’t normally care enough to reply to something that should be obviously wrong to anyone reading it, but trying to insinuate we’re intentionally adding words that will get people banned is where I draw the line. That doesn’t benefit us at all, and the system is entirely open source, with every commit logged by git. These are just harmfully asinine shots in the dark that indicate to me you don’t know what you’re talking about, while fueling misinformation that diverts attention from the actual problem.
The issue was replicated previously in an empty game with neither system in it. It’s something wrong with Roblox. You’re misdirecting blame for an issue that has nothing to do with us.
Hey folks, thanks for flagging. We’re looking into what could be causing this to come up again. From the Creator Store side we don’t see any issues, I’ll see what else could be going on. Hang tight!
Are you guys publishing these to places with the content questionnaire filled out? I recently encountered a similar issue with an automated moderation action, and I think it’s to do with that instead.