Response to code safety review discussion

Asset moderation in general on ROBLOX is insane. Developers are jumping through hoop after hoop just to have the opportunity to be successful. People have already brought up the main issues with this, but I think ROBLOX needs a complete overhaul of asset moderation in general: code, graphics, audio, everything. The system does not work for developers, the most important people on the platform.

6 Likes

Although this does answer some questions and concerns, I am still curious what happens if we developers have webhooks running through Discord. I have two webhooks: one that shows me who joins my game, and another that shows me if any errors/bugs show up in the output in servers I’m not in, so I can live-fix them. Are webhooks at risk of being flagged?
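
For reference, the join-logging one is just a simple HttpService post along these lines (a simplified sketch; the webhook URL is a placeholder):

```lua
-- Simplified sketch of a join-logging Discord webhook; the URL is a placeholder.
local HttpService = game:GetService("HttpService")
local Players = game:GetService("Players")

local WEBHOOK_URL = "https://discord.com/api/webhooks/..." -- placeholder

Players.PlayerAdded:Connect(function(player)
	local payload = HttpService:JSONEncode({
		content = player.Name .. " joined the game",
	})
	-- wrapped in pcall so a failed request never errors the server
	local ok, err = pcall(function()
		HttpService:PostAsync(WEBHOOK_URL, payload, Enum.HttpContentType.ApplicationJson)
	end)
	if not ok then
		warn("Webhook post failed: " .. tostring(err))
	end
end)
```

The error-logging one does the same thing, just connected to LogService.MessageOut instead of PlayerAdded.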

3 Likes

I doubt it, but I’d say you’re more at risk of being flagged by Discord.

5 Likes

Another user created a filter for his game that censored ROBLOX scam links and filter-bypassing swear words. However, he was banned/terminated for doing so, and he’s still trying to appeal. I don’t recommend setting up your own chat filter at all. I’m afraid we’re just stuck with the current faulty one.

5 Likes

Well, the main focus of this thread is that a guy with a Team Create session got his account terminated because another person with access to the game inserted a bunch of inappropriate assets and then reported the game.

But that is a good issue to talk about too.

1 Like

> This process was put in place to identify and prevent malicious activity on the platform and is intended to stop such activity without disrupting legitimate developers.

I’m hoping this also extends to free models. If it doesn’t, then this isn’t worth the resources at all.

> We’re looking to flag content that’s dangerous or harmful to our community, not find swear words in scripts that would never be seen by players.

I’m glad that got cleared up.

> One of the biggest concerns we observed was around code privacy and protecting personal keys.

Well, of course; everyone was under the impression that other people would view these private scripts. A lot of that discussion could have been halted if you had replied in the original thread clarifying that. Unless you actually did, and only just now changed it to that.

> Our automated review system looks for malicious behavior in code. In the rare event that code gets flagged by the system as a potential safety concern, a very small, specially-trained team goes in-game as players to check it out.

We still don’t know who the “specially-trained” team is, or what qualifies them to provide unbiased and precise judgement.

> In certain cases, parts of the game’s code may be manually reviewed by that team, who will check to see what the specifically concerning code does. We have strict rules in place determining when developer code can be seen and this is only done in the context of platform safety concerns.

I thought this was supposed to be only for things that can be seen in-game? What criteria must be met for a team member to directly access a game’s source code for manual review?

> We generally do not consider Team Create sessions to be shared content, these sessions are private; however, if someone with access reports offensive content, we will investigate the author.

Alright, here is my set of questions.

  1. Who do you mean by “author”: the person who created/inserted the reported assets, or the game owner?

  2. Does that mean you access the game directly through Studio, or investigate through in-game means?

  3. When moderation takes place, how do you confirm that you have the right person?

  4. To ensure that the game owner isn’t held solely responsible, and that moderation falls only on the person who actually inserted the malicious/inappropriate assets, do you tag created/inserted items with their creator?

2 Likes

If you live in America, this is the Roblox version of the Patriot Act. It is the best course of action in light of whatever darkness you can still manage in Roblox sandboxed Lua.

1 Like

I will be fine with people moderating my code as long as one thing happens…

We know exactly who is moderating our code. Full names, not usernames. When you say a “specially-trained team,” that isn’t enough information. I do not want someone random looking at my code. As long as I know who they are, I am fine with it.

If this isn’t provided, I will not develop on ROBLOX anymore.

4 Likes

No, you’re misunderstanding what malicious means; it’s not code that is non-functional or destroys the game. Think about code that inserts inappropriate content into the game under certain conditions. That compromises player safety, because players are subjected to the inappropriate content (e.g. NSFW, bigotry, etc.).
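
To illustrate the pattern (a purely hypothetical, simplified sketch; the asset ID and the trigger condition are made up):

```lua
-- Hypothetical sketch of conditionally-triggered inappropriate content.
-- The asset ID and the trigger condition are placeholders.
local BAD_DECAL = "rbxassetid://0" -- placeholder ID

-- The content only appears under a narrow condition, so a casual
-- play-test or a scan of the game's visible assets would never see it.
if os.date("*t").hour == 3 then
	local decal = Instance.new("Decal")
	decal.Texture = BAD_DECAL
	decal.Parent = workspace:FindFirstChild("Baseplate")
end
```

Nothing about a script like that looks like a swear word, which is exactly why it takes behavioral review rather than a text filter to catch it.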

6 Likes

That’s absolutely stupid. I don’t want Roblox engineers to get harassed because they’re doing their jobs protecting players from getting into highly inappropriate NSFW/nazi/etc games.

18 Likes

It’s stupid that I want to know who’s looking at my private code?

I think it’s very reasonable.

3 Likes

I have a solution for that: it’s called the reporting system. If that can be fixed, then there is absolutely no reason for this announcement/update.

2 Likes

While I disagree with how it was phrased, @ChasingNachos has a very valid point. Moderation teams need to be completely transparent in how they operate; otherwise, the community has no trust in them. For proof, just look at the real-life debate over the FISA courts.

Briefly, the FISA courts were created under the Foreign Intelligence Surveillance Act (and greatly expanded under the PATRIOT Act) to allow for foreign surveillance… with very loose definitions of the term. What makes it worse is that their proceedings, procedures, motions, and organization are completely closed off to the public. Try pulling a FOIA request on FISA if you don’t believe me.

ROBLOX moderation should NEVER be this closed off. This is a game platform played and developed on by a mostly under-16 crowd. The implications of an angry teenager with programming knowledge having their game removed by a mysterious process we have little to no information on are very, very concerning to me, both for them and for ROBLOX.

You bring up valid points about harassment and about whom this system mostly targets. I agree that staff shouldn’t get harassed, but that’s a separate issue from the subject of this topic. Harassing ROBLOX staff is grounds for termination (in my opinion), so it’s a non-issue to me. If worst comes to worst and serious threats start showing up, legal action can be pursued. This isn’t as much of an issue as I think you make it out to be, but I acknowledge it is an issue nonetheless.

Further, when you say they’re focusing on targeting NSFW content, is that something that can be caught from a coding perspective? Isn’t that more of a content issue? I admit I am not a programmer, but as far as I’m aware, if there is no content to be shown, wouldn’t any action taken by code to display said non-existent content be moot? It seems to me that it would be far better to focus on content rather than code.

I will reiterate my previous suggestion: revamp the entire asset moderation system. It would solve all of these issues, and more, if the system were less focused on censoring innocuous words such as “water” or “beat,” rejecting audio files that use a VST the system hasn’t seen before, and terminating accounts without warning.

I don’t mean to come off as rude at all; if I do, I sincerely apologize. I simply understand everyone’s frustration.

12 Likes

Except it is not. That is why many posts are published under the Roblox account: for the safety of the author. The same goes for moderation. I personally disagree 101% with this moderation, but moderators should always be anonymous. Imagine a little kid getting moderated and being able to see who moderated him; he’d probably threaten and insult that person.

We don’t need the moderators’ identities. We need more transparency. Roblox staff have been very lacking in transparency, and that’s really all we’re asking for.

4 Likes

Reporting systems are all well and good for things that happen live (like in chat), but they mean someone has already been exposed to the bad content.

It’s like reporting that a nuke went off, rather than restricting the sale of materials needed to construct a nuke.

The reporting system does work, contrary to what you’re saying. It may not provide feedback, and there’s certainly room for improvement, but it isn’t broken. I’ve seen players get removed from a live game minutes after being reported, multiple times.

Roblox has to take precautions to ensure that NSFW/bigoted/illegal/etc content is removed and never seen by players. If you have a better way of doing this then I’m sure everyone would love to hear it.

Also, this is an automated system. No moderator does anything until something is flagged, so it’s not a waste of time. Yes, there may be some false positives, but I’m sure we’d all much rather have that as opposed to things slipping through the cracks.

5 Likes

I do have one more question.

If the moderation process works the way described above, why are people being instantly terminated for content in their private development games?

For example, as mentioned in other posts: being terminated within minutes of inappropriate content being inserted by a Team Create participant, or being terminated for script text inside a private game that no one else should have been able to access.

This is the process as I understood it:

With those other posts talking about bans that seem to have deviated from this process, are there other moderation bots inspecting games and their scripts?

I’d hate to be banned forever for a single mistake, by a moderation bot of all things. That would be a horrible way to go.

8 Likes

Additionally, how could moderators investigate reports? Let’s say a game shows inappropriate content to all users bar admins, or only to a specific group.

In situations like this, engineers would need to investigate the code to resolve the report.
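
Something as simple as this (a hypothetical sketch; the UserIds and the helper function are made up) would be invisible to exactly the people most likely to review the game:

```lua
-- Hypothetical: content is hidden from whitelisted accounts, so an
-- admin or reviewer who joins normally never sees anything wrong.
local WHITELIST = { [123456] = true, [789012] = true } -- placeholder UserIds

game:GetService("Players").PlayerAdded:Connect(function(player)
	if not WHITELIST[player.UserId] then
		-- hypothetical helper that displays the offending content
		showOffendingContent(player)
	end
end)
```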

Replacing the automated flag system with a report system is not a solution to the issue @ChasingNachos raised.

4 Likes

This is certainly better, but we’re still missing a bit of information, such as what happens with API keys, obfuscated code, etc.

1 Like

This too could be abused. For example, someone could put profane words in a “key” and use it for a chat message instead of an actual API token. Although I do agree with the feature request, it’s not fail-safe, and I would understand Roblox not wanting to completely hide these keys.

You could check the key’s usage, though. If it’s only used for an HTTPS request versus being sent to all players, you’d be able to tell.

Additionally, there are already ways to keep keys private using HttpService and a database or server. The point is, it’s already possible, so it’s not like this adds more security issues; it just puts the same capability on Roblox’s own servers.

The use case you provided can already be achieved without a key service, so how does adding one make that risk worse?
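
For example, a rough sketch of the existing approach (the proxy URL is a placeholder for a server you’d host yourself):

```lua
-- Sketch of keeping a key off Roblox entirely: the game calls your own
-- server, and that server attaches the real API key before forwarding
-- the request. No key ever appears in the Roblox script.
local HttpService = game:GetService("HttpService")

local PROXY_URL = "https://my-proxy.example.com/api" -- placeholder

local function fetchData(query)
	local url = PROXY_URL .. "?query=" .. HttpService:UrlEncode(query)
	local ok, result = pcall(function()
		return HttpService:GetAsync(url)
	end)
	if ok then
		return HttpService:JSONDecode(result)
	end
	return nil -- request failed; the caller decides how to handle it
end
```

A built-in key service would just move that proxy step onto Roblox’s own infrastructure.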

2 Likes