I don’t understand why we would ever need this. Nobody will ever read your scripts, other than some exploiters.
What should be looked into is the output, not the input. Personally, I like to keep my scripts as private as possible, to prevent theft and such.
I take certain measures to keep my scripts hidden from exploiters, but now this? It is very worrisome that somebody has the ability to look into my scripts, which I've spent hours on, and take them for personal gain.
This update seems very upsetting to most people, though I can see how they are just trying to make Roblox a safer place. I know people sometimes put questionable things in their scripts, but who is going to see them? Only the developers can see a script, unless the game is open-source, in which case everyone can. In a regular game, the closest you can get to seeing a script is looking at the output in the developer console.
I would like to point out that the thread itself says:
Roblox moderation is not perfect by any means, but you also have to understand that this is a newly implemented change, and therefore if your script has been moderated in the past then that may have been because this policy was not in place at the time.
There are kids in this world who signed up with a Google account and faked their age to get past YouTube's dedicated age restriction.
There are kids who willingly watch music videos like these because they find them catchy or funny, or caught them while they were being widely shared.
So why is Roblox so worried about what a developer, who is probably well over 13 years of age, writes inside their Roblox Studio scripts?
No idea.
If the issue at hand is actually the NSFW "condo" content that users are still randomly being exposed to, Roblox shouldn't respond with an extremely harsh system that terminates you because an automated system detected a naughty word in your game. There needs to be an appeals process. There needs to be due diligence.
Actual automation should target accounts that suddenly get 100+ visits when the account is less than 10 days old.
Actual automation should look at what users are typing into a game, perhaps logging chat locally to see if users in a particular game are typing NSFW content, since a developer can easily disable chat filtering now that Roblox has made their chat system open source.
There are many more ways to detect malicious users. A blanket policy of instant termination for bad words in a script, applied to everyone, is not a solution I'll approve of.
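The account-age heuristic suggested above could be sketched roughly like this (a minimal illustration only, not Roblox's actual system; the thresholds and field names are made up for the example):

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int      # days since the account was created
    daily_visits: int  # visits its games received in the last 24 hours

def is_suspicious(account: Account,
                  max_age_days: int = 10,
                  visit_threshold: int = 100) -> bool:
    """Flag brand-new accounts that suddenly attract heavy traffic,
    a common sign of botted throwaway games."""
    return (account.age_days < max_age_days
            and account.daily_visits >= visit_threshold)

# A 5-day-old account with 250 visits in a day would be queued for review;
# an established account with the same traffic would not.
print(is_suspicious(Account(age_days=5, daily_visits=250)))    # True
print(is_suspicious(Account(age_days=400, daily_visits=250)))  # False
```

A rule like this flags behavior rather than scanning private source code, which is the distinction being argued for here.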
Yes! This is a great idea. On the topic of switching back to the actual topic: do you have any comments on how you feel about using automation to flag code? I see a lot of people disliking it (me included), but it is always a good idea to hear everyone's viewpoint.
Perhaps you have shared already (but there are 1000+ replies, and it is hard to keep track without monitoring this thread 24/7).
Using automation to flag code is wrong. Automating text moderation is already difficult, because text is a very human thing. Surprisingly, machine code is even more human and requires even more context, because of the logic behind how code functions and why a developer might have put a seemingly bad string in it out of context.
Edit: by machine code, I just mean code in general, not the literal machine code you would find one level up from binary.
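To illustrate why context matters, here is a hypothetical naive keyword scanner of the kind people in this thread are worried about. The scanner, the word list, and the sample script are all invented for this example; the point is that it flags a developer's own profanity *filter* simply because the blocklist inside it contains the very words being filtered:

```python
BAD_WORDS = {"scam", "freerobux"}  # hypothetical moderation blocklist

def naive_scan(source: str) -> bool:
    """Flag a script if any blocklisted word appears anywhere in it,
    with no understanding of WHY the word is there."""
    lowered = source.lower()
    return any(word in lowered for word in BAD_WORDS)

# A developer's own chat filter, written to PROTECT players, still
# contains the banned words as string literals, so it gets flagged:
filter_script = '''
local BLOCKED = { "scam", "freerobux" }
local function isClean(message)
    for _, word in ipairs(BLOCKED) do
        if string.find(string.lower(message), word) then
            return false
        end
    end
    return true
end
'''
print(naive_scan(filter_script))  # True: a false positive
```

This is exactly the NickoSCP scenario discussed below: the protective script and the malicious one look identical to a context-free scanner.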
This appears to be related to their push for more security on the platform. Hence the new HTTP Plugin Permissions Update.
The push for more security could be related to the Meep City incident that occurred recently. (If you don’t know what I’m referencing just search up “meep city hacked 2020” or something)
Oh yes, I think it's, err... not great. I say we need manual review, because with automation it's going to be way easier to get someone banned, as I stated in my overall opinion on this subject. A human can probably tell what's going on and whether the swear words are being used fairly, like in an extra chat filter and so on. But I don't really want human reviewers either, since they would get access to personal information such as API keys and bots for certain Discord servers. So I really just think this shouldn't be released at all.
You do not get immediately banned if you are flagged. Upon being flagged, you are subject to a manual investigation by a "specialized team".
I am worried that said specialized team will not be able to handle the sheer volume of code they would have to go through. Depending on the code, it could be very hard to tell what it actually does, especially if it is obfuscated.
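As a toy illustration of the obfuscation problem (this is not Synapse Xen's actual scheme, just an invented example), even a trivial byte-shift encoding hides a string from both keyword scanners and human reviewers until the code actually runs:

```python
def encode(s: str, shift: int = 7) -> list[int]:
    """Toy obfuscation: store a string as shifted byte values."""
    return [ord(c) + shift for c in s]

def decode(data: list[int], shift: int = 7) -> str:
    """Reverse the shift at runtime; the plaintext never appears in source."""
    return "".join(chr(b - shift) for b in data)

# A hypothetical backdoor call, stored as nothing but numbers:
payload = encode("require(12345)")
print(payload)          # e.g. [121, 108, ...] -- nothing for a scanner to match
print(decode(payload))  # the original string only exists at runtime
```

Real obfuscators layer many transformations like this, which is why reviewing flagged obfuscated code by hand is so labor-intensive.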
I think automation for moderation is cool, but I don’t want things that I choose to keep private to have to get searched.
Wrong, read what NickoSCP said. This system is very broken. Just think about it: you make an extra chat filter to prevent free-Robux scams, you go grocery shopping (or anywhere), and you come back to find yourself terminated. Immediately. Banned for no fair reason. So that is wrong, and even if there is a team, they are not very good at their job, since Nicko still got banned for unfair reasons.
The parts of my code that are visible to players of my game can be touched and reviewed, but do not touch the rest of my code.
I write my code the way I want to write it. If I want to put some notes in it for myself (which may contain numbers or other things that are filtered elsewhere on Roblox), I should be able to keep them in.
How would this be handled if a malicious plugin were to inject/plant backdoor code that contains profanity?
Synapse Xen, a popular obfuscator, inserts lines of profanity into the code. Would a person who is NOT AWARE that such a script exists in their game be liable/responsible for language that was implemented by a malicious plugin?
I'm assuming that there is no way to check whether a script was added by a developer or by a plugin.
If that's the case, then newer developers are probably the most likely targets of this moderation.
So it would be safe to assume that we have a lot of false bans ahead of us, because of the MASSIVE malicious model/plugin problem that already exists.
And to speak about plugins one more time: turning plugins into a paid marketplace increased the backdoor problem tenfold, because people would buy a plugin, copy its source, implement their own backdoor code, and release it back onto the marketplace for free.
You essentially introduced an unstable and unfinished system into ROBLOX, with no insurance or protection for the seller and no way to actually stop people from spreading a plugin's source. As far as I am aware, no effort was put into actually assisting developers who want to profit from the tools they provide to the community.
Wow, the responses have doubled since I last checked. Roblox, you really have to reconsider this, given the chaotic controversy.
While most of the replies criticize this and argue it should not be implemented, I have to be honest here: the initial point and mindset behind it do make sense for Roblox as a kid-friendly platform. It makes sense that Roblox should not allow inappropriate content built on the platform, regardless of whether it's visible to the public. I did once see a free model with a script that included links to inappropriate adult websites, although that was 2 years ago. However, the way Roblox is running this is not commendable.
Firstly, transparency. This has been running for a month without developers being notified? That is not a good way to introduce such a controversial system. The result can be seen in the 1k replies in this thread.
Secondly, lack of information and misconception. I've heard a developer got terminated because of a script that filters bad words. I'm not sure what happened to them since, but surely this leaves us all with a bad impression. The "professional team" is also a major concern: we know nothing about them, and we can choose not to trust them, because it's our code, and it may include sensitive data, as mentioned in the posts above.
Last but not least, this is not the major problem. This might not be directed at the OP, but it makes sense, right? We all know there is room for improvement in the Roblox moderation system. Not only does moderation sometimes harm players, it also impacts developers in ridiculous ways: a developer can get banned for 3 days just for uploading a harmless image. Allow me to discuss my own case here. I uploaded 2 versions of the same audio because the first one was too quiet to hear; the first audio got me banned A YEAR LATER, while the second one remains up. That problem is more urgent than moderating something private that causes no urgent harm to anyone, right?
I don't really need to add any more constructive arguments; most of the posts here have already covered and represented my view. I hope Roblox will take better action in the future.
I did once see a free model with a script that included links to inappropriate adult websites, although that was 2 years ago.
If we want to get specific and very technical: the botted models containing malicious scripts are most likely obfuscated with Synapse Xen, which inserts vulgar language into the code by default.
Jesus Christ, you can't add this update. No, you can't!
Every developer's code is their code, and nobody has the right to see it. Personally, I don't want any human from your team checking my code, and considering how bad your moderation system and bots are, I could obviously get my account banned for nothing. What if an exploiter gets everyone's code leaked because of a "security" issue? We all risk our games being exploited and maybe stolen, considering you will also check server-side code! I highly suggest improving your current moderation system and forgetting this horrible idea.