Firstly, thank you for introducing this change, I appreciate action being taken against malicious models and plugins.
I am concerned, however, about the lack of an “informed opt-out” method, particularly for plugins. There will always be edge cases for uses that are now effectively impossible — for example, if I want to get certain tracing information from above my call stack, I have to use getfenv to access the environments of scripts higher up the stack. See here:
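To illustrate the kind of tracing I mean, here’s a minimal sketch (assuming standard Luau `getfenv(level)` behaviour, with no first-class replacement currently available):

```lua
-- Walk up the call stack and report which script's environment each
-- level belongs to. There is currently no replacement API for this.
local function traceCallers()
	local level = 2 -- level 1 is this function; start at our caller
	while true do
		local ok, env = pcall(getfenv, level)
		if not ok then
			break -- ran past the top of the stack
		end
		-- `script` inside a level's environment identifies the running script
		print(("level %d: %s"):format(level, tostring(env.script)))
		level += 1
	end
end
```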
There are other examples where these functions still have a use (as some have mentioned previously), and until replacements for the deprecated getfenv/setfenv functions appear, it may be challenging to work around these edge cases, considering the only on-platform way to distribute plugins is through the Marketplace.
Up until this announcement, all Studio-based security measures have had the option to “opt-out” - which to my understanding has had very little impact on the integrity of the security measures. If the goal is to prevent users from unintentionally installing malicious code, would it not make sense to have an opt-out which merely warns installing users that malicious code was identified within the asset?
Like other recent marketplace security changes, this continues a concerning trend in how little time we get as developers to understand and raise concerns about features: this was released with less than 24 hours of notice. That gives developers effectively no time to address games that may be affected (i.e. those which rely on external modules instead of using packages across an experience or studio group).
I understand the potential sensitivity of security changes, but security by obscurity provides little genuine security, and in this case it deprives developers of an opportunity to voice concerns that, in the past, have often led to the delay or revision of planned updates. Likewise, this will have no impact on many currently malicious models or infected places (which will already be exempt under the private/experience script exemption), so I don’t understand the particular rush to get this implemented immediately.
It would be good for Roblox to take stock of the require and get/setfenv use-cases and provide first-class solutions. In the case of require(id), the auto-update use-case could be replaced with some kind of asset versioning infrastructure and a getVersion(id) function, to at least warn the user when the asset is out of date.
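Something like this — note that `getVersion` does not exist today; this is a hypothetical sketch of what such an API could look like:

```lua
-- Hypothetical: getVersion(id) is a proposed API, not a real one.
-- The module is bundled locally (no third-party require), and we only
-- *warn* when it falls behind, instead of silently auto-updating.
local MY_MODULE_ASSET_ID = 1234567 -- placeholder asset ID

local MyModule = require(script.MyModule)

local latest = getVersion(MY_MODULE_ASSET_ID) -- hypothetical call
if latest > MyModule.VERSION then
	warn(("MyModule is out of date (have v%d, latest is v%d); please update.")
		:format(MyModule.VERSION, latest))
end
```

This keeps the “tell users an update exists” benefit of require(id) without letting code change underneath them.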
Auto-updates are risky business, and arguably you should not be allowing them at all.
I find the change regarding remote assets extremely stupid. You detect games containing models/modules that break your ToS and moderate those games, without considering the intentions of the game owner or what they thought they had inserted. As if that weren’t bad enough, you also completely remove developers’ ability to use require. I don’t get what the use case for this is. You already know how to detect games with certain models, so surely it shouldn’t be that much harder to develop a moderation or review process for marketplace models?
I worry that these updates are presenting the wrong solutions. I feel that there is a better way to go about this than just removing the ability to upload models with require() and getfenv (though I think we could probably go without getfenv).
This goes for the Audio Update as well, which I’m still sour about due to how it was handled. I hope a better solution will be thought up in the future so we can do more with audio assets like we used to.
I truly hope that the ROBLOX Staff is going out of their way to think of every possible solution to these problems and not just going with the easiest one they can think of, since these kinds of updates can truly affect people if they’re not implemented properly.
This change feels really sudden. I appreciate the whitelisting of important applications, although this isn’t fair to smaller applications that may also depend on these functionalities. One day is not enough warning.
I agree changes like this are unfair, and it’s important we strive for a better marketplace for all creators, but that shouldn’t negate the impact on existing top marketplace applications. I can’t provide stats on Adonis, although HD Admin alone would have far-reaching impacts if these behaviours were locked overnight:
I have spent hundreds of hours over months of my life making open-source software and freeware, all of which uses require() to load the latest versions. You are literally going to kill all of the time I spent making this, just because you think it’s going to help stop viruses.
Malicious actors will find a way no matter what, so why kill mine and others’ software?
How would this rule be enforced? Will it simply be a change to the ToS, or an automatic scan of the model’s source for the restricted APIs before publishing?
Probably the worst remaining backdoor vector is BaseScript.LinkedSource.
LinkedSource is now quite commonly used by malicious models for backdoor purposes, and even worse, it gives no indication to the developer whatsoever.
Besides, LinkedSource has been deprecated for a long time and only a small subset of people use it, so something should be done about it.
I’d argue that the more popular models are a significantly higher risk. To exempt these from the rules doesn’t really make sense. A single malicious update from someone with unauthorized access (or even the owner themselves) will do significantly more damage if the model is popular.
That said, granular permissions are a much better idea, but I’m personally willing to accept complete blockage of these features in the meantime if it means granular permissions are guaranteed.
Legacy private-module-like systems can still use loadstring-based insertion methods, such as sharing a .rbxm file which then loads code dynamically from a website. So that’s the fix for the ro-services that require obfuscation.
Does this mean that free models will be unable to require anything or can we still use it for ModuleScripts? If we can’t require ModuleScripts then I can see a LOT of tools breaking.
Decent update though; even though I think scripts should not be limited, I can see this doing more good than harm.
These modules shouldn’t do this, though. Dependencies that automatically update without your knowledge are always, always, always a terrible idea. It doesn’t matter who made it or how reputable the resource is, you should be the one updating your dependencies. Provide a prompt that the module is out-of-date, sure, but do not automatically update it without me knowing.
It seems like what you want is better version control tools for dependencies, and I think everyone should be for that. However, having your dependencies update behind the scenes is dangerous and opens the door to so many vulnerabilities.
The maintainer of DS2 has spoken out about this exact thing and why it’s bad. Additionally, they said they’re bad at remembering to update the marketplace version of the module, which means you’re more likely to be running an outdated version even if you require by asset ID!
Bad news is that the require check is fully spoofable by malicious models; they can simply do something like this:
```lua
-- Hide the asset ID behind layers of tables and closures so a static
-- scan for require(<number>) never sees it.
local k1 = { C = function() return 5342534 end }

local function scriptData()
	return {
		Hi = "K",
		L = k1,
	}
end

local function closure1(yes)
	return {
		yes,
		{ WaitForChild = scriptData().L.C },
		"Hello",
	}
end

-- Shadow the global `script` with a fake object whose WaitForChild
-- returns the hidden asset ID instead of a child Instance.
local yes, script, thing = unpack(closure1(5))

-- Looks like a require of a local ModuleScript, but actually resolves
-- to require(5342534) at runtime.
local module = require(script:WaitForChild("ModuleScript"))
```
The only real way to prevent this issue is allowing us to block third-party requires altogether.
THIS!!! As much as the popular models and the people behind them do God’s work, this only puts those who maintain them in a very dangerous spot. While versioning is a thing for models, once someone gets access to a whitelisted item, nothing would stop them from hiding a require call and spamming new published versions to cause damage.
We would’ve been better off either blocking require requests entirely, or being given more time to figure out an alternative to this problem, rather than just burning the marketplace down.
That is true; this can cause code to break, and dependencies shouldn’t do this. But for casual models — admin command models, car models, etc. — auto-updating isn’t that big a deal. Not to mention, auto-updating can automatically fix bugs across games, which is good both for people using the models and for the developer, as it lowers the probability of bad ratings on the model.
Again, good changes, but this is going to cause huge issues with checking for updates, for me as well as for many other developers.
I, like many other developers, rely on external modules to check plugin versions, and I’m now going to have to update my plugins to not rely on these modules. Furthermore, the only solution I’m aware of (apart from using HTTP requests) uses far more data than modules do — sure, it’s minimal, around 300 KB instead of 3 KB or so, but it’s still not a change I’m happy about.
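For what it’s worth, the HTTP-request route can be as small as fetching a version string. A sketch, assuming you host your own endpoint (the URL and version format here are made up; `HttpService:GetAsync` is a real API, but plugins need HTTP permissions granted):

```lua
-- Plugin-side version check over HTTP instead of a marketplace module.
local HttpService = game:GetService("HttpService")

local CURRENT_VERSION = "1.4.2"
local VERSION_URL = "https://example.com/myplugin/version.txt" -- hypothetical endpoint

-- pcall so a missing permission or network failure doesn't break the plugin
local ok, latest = pcall(function()
	return HttpService:GetAsync(VERSION_URL)
end)

if ok and latest ~= CURRENT_VERSION then
	warn(("MyPlugin %s is available (you have %s)."):format(latest, CURRENT_VERSION))
end
```

The payload is a few bytes rather than an entire module, and nothing auto-updates behind the user’s back.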
this is yet another band-aid fix, at the expense of everyone.
although i don’t really care in the end because i do everything myself, you could alternatively just add a “can use require” property to scripts, alongside “Disabled”.
A step in the right direction, I guess? You’re killing the weed at the top, though; you need to get to the root of the problem, and that is the viruses themselves — specifically the ones with inappropriate content that get developers auto-terminated. Third-party requires and getfenv/setfenv wouldn’t really be much of a problem if the rule-breaking assets were terminated rather than innocent developers. I will admit that obfuscation has no place on the platform though, so good work there.