This kinda gives me the same feeling that I get when someone uses my laptop for something, and I get really anxious, even though I don’t have anything to hide.
I’m not worried that people will find something bad in my code; I’m just worried that they’re even LOOKING at my code in the first place, even if it’s not open-sourced.
What ticks me off is that this post was made from the Roblox account and not by a member of the staff, which drops the chance of our discussions here being seen by them to pretty much zero. And even if there is a response, it’d likely not be what we’re all hoping for.
After giving it a day to let things soak in, I’m ready to revisit this topic with my two cents.
Considering this is “Code flagged for safety review,” it seems this system was built for a reason. Likely someone gave out personal information that led to an extreme situation we’re unaware of, and now ROBLOX is having to deal with that mess accordingly. If something like that did happen, it makes sense for this to exist as a branch of Roblox’s moderation system. The purpose doesn’t seem to be extreme filtering like chat filtering, but light filtering for the protection of developers. I think people are treating this filtering as equal to the chat filtering, but in reality it doesn’t appear to be. If ROBLOX implemented this a month ago and none of us noticed, then, as stated, the filtering must just be built to protect the privacy and personal information of individuals.

What I am not happy about is that ROBLOX implemented this a month ago without telling developers, but at the same time I can’t entirely blame them, because it is a very smart way to filter out those with negative intentions. If anything, Roblox has every right to implement this without telling us, and yet they are telling us anyway. Because we are uploading to Roblox’s servers, anything you upload sits on Roblox’s infrastructure and is their responsibility.
It would be nice if this flagging system also flagged games and scripts that are stolen. There are games on Roblox that were leaked on v3rm, like Murder Mystery 3, which is a leaked version of Murder Mystery X, and whoever reuploaded it is making a profit off someone else’s hard work.
This update is honestly ridiculous and I’m done sugarcoating it, so let’s dissect it. To start, the announcement of this update could potentially be the worst Roblox has ever done. Roblox made this change and did NOT announce it because they knew it was going to be controversial and they wanted to avoid a negative response. They are only telling us now to be “transparent,” but there is honestly no transparency in implementing a new system and only informing us months later.

Another issue with this update is that it is a ridiculous privacy invasion. I can nearly guarantee you that no developer would want their closed-source code being read or reviewed without their permission. We should also consider the massive security flaw this system could create: it could lead to developers’ scripts being leaked, accidentally or maliciously, by either Roblox employees or hackers. And speaking of hackers and exploiters, the only way foul language in script comments or variable names could ever be seen by another user is if that user is exploiting; we should not be trying to protect the eyes of exploiters.

The last issue I want to discuss is how this will affect old assets. Think about all the scripts written since Roblox was released in 2006; I am sure plenty of them have a bad word hidden in a comment or variable name. Will all these old scripts be moderated? Will the writers of those scripts be banned or punished because they put a naughty word in a script years ago? Do we have to go back and look at all of our old scripts to make sure they don’t, god forbid, have a naughty word in them that no one would see anyway? I for one used to use free models excessively, and I’m sure at least one of them had a script with a bad word in it. Do I need to check all my 200+ places for this?
The final issue I would like to address is that no Roblox employee has responded to this thread to make even the slightest attempt to address our concerns. This honestly makes me feel like Roblox is going to ignore all feedback regarding this update and not make any changes, regardless of the outrage. I have never seen developers discuss forming a union over this kind of thing before.
That feeling of developers wanting to start a “union” will probably only grow stronger over time, with Roblox most likely ignoring everything we say about this update and releasing more updates like it. I mean, the closed-source modules removal had the same vibe.
I think a lot of the posts in this thread are going into full panic mode and making assumptions we don’t yet know to be true, which, to me, just goes to show how poor Roblox is at communicating. It would be nice to have been given actual concrete examples of when our code may be automatically flagged, rather than being left to imagine how “discriminatory language, personally-identifiable information, and real-life threats” are actually detected in our code.
The only basis that we have for this is the existing text-filtering system, so now we’re all running rampant thinking that our code could be flagged for having, oh I don’t know, numbers in it (which get filtered quite frequently in chat). I really hope the automated flagging system isn’t this crazy, but there’s currently no way to tell…
What if I decide to create some very lively NPCs in my game, with stored information that’s almost identical to “personally-identifiable information”? What if I want to make a blacksmith NPC by the name of “Herald Smith” and give him a realistic street address and everything (via code)? It’s not that far-fetched to imagine. I likely wouldn’t be banned for something like this, but from what I can tell, it gives the admins an excuse to dig into my code. But, again, I don’t know for sure.
There are just too many questions, and I feel like half of these replies could have been avoided if the main post had gone into more detail. This is seemingly becoming far too common for Roblox announcements.
Yeah, I realize there are a lot more things to add to that list; I’d just like to know how these things are flagged. At what point does something in code get identified as “personally-identifiable information”? I’m sure there are a ton of false positives that don’t necessarily result in a ban, but there are probably also a ton of ridiculous tiny things that open doors and give people an excuse to manually inspect your code. I’m assuming here, because this is how the current text-filtering works, so some actual in-depth clarification would be really nice.
This isn’t really directed towards you, @caiusspiffy, but rather towards anyone with similar concerns:
I think it’d be really helpful if anyone with concerns voiced them in immense detail, in maybe one to three posts, and tried to limit the shorter replies. Rapid-fire replies bury the more well-put, detailed posts and end up hindering legitimate concerns, and that makes it a lot more difficult for the moderators to take us seriously.