I’ve said this countless times… filth cannot be cleansed. It’s the equivalent of cutting off a hydra’s head; it’ll just grow back. Get rid of one inappropriate game and another will surface. You cannot win this battle no matter how hard you try.
You’re taking “model” in the wrong context. I mean it in the context of a free model on the Library page, which can contain either scripts or bricks. I’m talking about scripts containing malicious code, as I specified in that part about models.
My apologies. Although I did read it, I understood it as “3D models,” and as models that are uploaded but don’t include scripts. Thank you for clarifying. But in regards to scripts inside models, I don’t believe they will be overlooked, since they are published just as games are. This is just speculation, but then again, everyone here is speculating that Roblox isn’t doing it already. It may already be happening as we speak, but we don’t know, because we don’t have anything specific about this topic. So, now that you mention it, that makes me just a bit more pissed about how deep the lack of information goes.
This change is unnecessary. Programmers are having their code moderated, and accounts are being terminated because of it.
If this change is not going to be removed, then at least apply it only to open-sourced code. Your chat filter is not the best, so why would this automated moderation system be any better?
So, does this mean that if I accidentally install a backdoor plugin and it inserts backdoors, my account gets yeeted because some backdoor plugin decided to plant viruses and I didn’t know about it? I’m actually scared, because some of my friends may accidentally install backdoor plugins whose code then inserts viruses and backdoors around my place.
Knowing that this is now going around, I am a bit unsettled by this policy. After looking through the post and other people’s views, there are a few things I can definitely agree should be considered.
1. Having one stray piece of code that is considered “inappropriate” can surely put a lot of developers at risk, warning or not. This means a lot of games could start disappearing off the face of the Games page, possibly even popular ones.
2. I wonder what happens when a long, heavily-dependent piece of code needs to be checked… it will probably get skipped straight to the flagged piece of code.
3. Nothing on the internet is ever truly safe for anybody of any age. All we’re really doing is attempting to keep certain ages away from malicious things, and we can never fully prevent that.
4. If we’re still going through with the policy, at least start revising moderation in general for the future of the platform. If there’s proof that there is an actual moderation team, I’ll probably be surprised. Chat filtering is already a sign of ~~bad~~ slightly-mediocre effort to prevent vulgar language.
5. (loose) A portion of the total developer base (like me) may only trust themselves with their own code. This policy gives off exactly the invasive feeling that leaves those developers concerned.
6. (loose) Adding onto number 5, letting people see developers’ code defeats the presumable purpose of the developer console provided by ROBLOX. What actually matters most to us is the errors and mistakes we make, not the variable names the moderation team might label as “inappropriate.”
As for me, working on one of my games during these times, I feel like I could write something in that eventually comes back to hit me right in the head days, months, or years after release.
Overall, and I cannot express this enough: don’t.
I don’t understand why we would ever need this. Nobody will ever read your scripts, other than some exploiters.
What should be looked into is the output, not the input. I personally like to keep my scripts as private as possible, to prevent stealing and such.
I take certain measures to keep my scripts hidden from exploiters, but now this? It is very worrisome that somebody has the ability to look into my scripts, which I’ve spent hours on, and take them for personal gain.
Highly disagree with this.
Although I agree, there’s one exception: the developer console (F9). Things like variable names will be output there if your script errors.
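To illustrate (a hypothetical Luau snippet, names made up): an erroring script prints its message and stack trace to the developer console, where identifiers can leak.

```lua
-- LocalScript: if this errors on a client, the message and stack trace
-- appear in the Developer Console (F9) for anyone in the server to read.
local bannedPhrases = nil -- imagine this failed to load from a ModuleScript

-- Throws "attempt to index nil with 'Phrases'"; the console also shows
-- this script's full path and the failing line number.
print(bannedPhrases.Phrases)
```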
- What if developers start to obfuscate their code because of this? (See the sketch below.)
- Will that become something that’s also against the rules?
- If so, how would it even get detected?
- This “update” is incredibly flawed, creates far more problems than it solves, and literally nobody wants it.
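To make the obfuscation question concrete, here is a trivial, hypothetical Luau sketch: the flagged phrase never appears as a literal in the source, so a naive string scan would miss it entirely.

```lua
-- "free robux" is assembled at runtime from byte values, so the
-- phrase never appears as a literal string anywhere in this script.
local bytes = {102, 114, 101, 101, 32, 114, 111, 98, 117, 120}
local hidden = string.char(table.unpack(bytes))
print(hidden) --> free robux
```

Catching that statically would require actually evaluating the code, which is far beyond simple keyword matching.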
This update seems very upsetting to most people, and I can see how they are just trying to make Roblox a safer place. I know people sometimes add questionable things to their scripts, but who is going to see them? Only the developers can see the script, unless the game is open-source, in which case everyone can. If it is a regular game, the closest you can get to seeing a script is looking at the output in the developer console.
I would like to point out that the thread itself says:
> Roblox moderation is not perfect by any means, but you also have to understand that this is a newly implemented change, and therefore if your script has been moderated in the past then that may have been because this policy was not in place at the time.
There are some kids in this world who signed up with a Google account, faking their age, to get past YouTube’s dedicated age restriction.
There are some kids in this world who even willingly watch music videos like these because they find them catchy or funny, or because they were being shared a lot at the time.
So why is Roblox so worried about what a developer, who is probably well over 13 years of age, writes inside their Roblox Studio scripts?
No idea.
If the issue at hand is actually the NSFW “condo” content that Roblox keeps randomly being exposed to, Roblox shouldn’t be building an extremely harsh system that terminates you because its automated system detected a naughty word in your game. There needs to be an appeals process. There needs to be due diligence.
Actual automation systems should be about accounts that are suddenly getting 100+ visits when the account is less than 10 days old.
Actual automation systems should be about what users are typing into a game: maybe log their keystrokes locally to see whether users in a particular game are typing NSFW content, since a developer can easily disable chat filtering now that Roblox has made its chat system open source.
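As a rough sketch of the kind of signal I mean (purely illustrative, not anything Roblox actually runs): on the server, Player.Chatted fires with the raw message a player typed, before any display filtering a developer may have tampered with.

```lua
-- Server Script: log what players actually type into chat.
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(message)
		-- A real system would feed this into an abuse-detection
		-- pipeline; printing is purely for illustration.
		print(("[%s]: %s"):format(player.Name, message))
	end)
end)
```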
There are many more ways to detect malicious users. A blanket policy of instant termination for bad words in a script, applied to everyone, is not a solution I’ll approve of.
Yes! This is a great idea. On the topic of switching back to the actual topic: do you have any comments on how you feel about using automation to flag code? I see a lot of people disliking it (me included), but it is always a good idea to see everyone’s viewpoint.
Perhaps you have shared already (but there are 1000+ replies, and it is hard to keep track without monitoring this thread 24/7).
Using automation to flag code is wrong. Automation for text is difficult because text is a very human thing. Surprisingly, machine code is even more human and requires even more context, due to the logic of how code functions and why a developer might have put in a string that only seems bad when taken out of context.
Edit: By machine code, I just mean code, not literal machine code that you would find the next level up from binary code
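A concrete, made-up example of a “seemingly bad string” that is perfectly legitimate in context: a developer’s own anti-scam filter has to contain the very phrases it blocks, so a context-free scanner would flag the safety code itself.

```lua
-- A developer-written scam filter: the "bad" strings below exist
-- precisely to protect players, not to harm them.
local BLOCKED_PHRASES = {
	"free robux", -- classic scam bait
	"visit my website",
}

local function isScamMessage(message)
	local lowered = string.lower(message)
	for _, phrase in ipairs(BLOCKED_PHRASES) do
		-- plain-text search (the fourth argument disables patterns)
		if string.find(lowered, phrase, 1, true) then
			return true
		end
	end
	return false
end

print(isScamMessage("Get FREE ROBUX now!!!")) --> true
```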
This appears to be related to their push for more security on the platform. Hence the new HTTP Plugin Permissions Update.
The push for more security could be related to the Meep City incident that occurred recently. (If you don’t know what I’m referencing, just search up “meep city hacked 2020” or something.)
Oh yes, I think it’s, err… not great. I say we need manual review, because if it’s automatic, it’s going to be over 90x easier to get someone banned, which I stated in my overall opinion on this subject. If it’s a human, they can probably tell what’s going on and whether the swear words are being used fairly, like in an extra chat filter, etc. But I don’t really want humans either, as they would get access to personal information such as API keys and bots on certain Discord servers. So I really just think this shouldn’t be released at all.
You do not get immediately banned if you are flagged. Upon being flagged, you are subject to manual investigation by a “specialized team”.
I am worried said specialized team will not be able to handle the sheer amount of material they would have to go through. Depending on the code, it could be very hard to tell what it actually does, especially if it has been obfuscated.
I think automation for moderation is cool, but I don’t want things that I choose to keep private to be searched.
Wrong; read what NickoSCP said. This system is very broken. Just think of this: you make an extra chat filter to prevent free-robux scams, you go grocery shopping (or anywhere), and you come back to find yourself terminated… immediately. Banned for no fair reason. So that is wrong, and even if there is a team, they are not very good at their job, as Nicko still got banned for unfair reasons.
What about obfuscated code? Do we get punished for that?
Exactly, so don’t touch it!
The parts of my code that are visible to players of my game can be touched and reviewed, but do not touch the rest of my code.
I write my code the way I want to write it. If I want to put notes in it for myself (which may contain numbers or other things that are filtered elsewhere on Roblox), I should be able to keep them in.
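For example (a hypothetical snippet): comments like these never reach players, yet they contain exactly the kind of numbers and notes that the chat filter would normally block.

```lua
-- Private developer notes: players never see comments, but a scanner would.
-- Old sound ID 142376088 was moderated; reuploaded 2020-03-14.
local SOUND_ID = 142376088 -- made-up asset ID, for illustration only
```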
As @wevetments said above: