Removing Support for Third Party Closed Source Modules

So Roblox is looking into providing a replacement eventually then? It sounds like you’re saying that but I thought I would ask to make sure there’s no confusion.


@Seranok stated earlier in this thread that this was the case. The reason there is so much outrage is because he said it will take up to a year (if not more), but by then all of the groups and services that rely on this functionality will have died.


There isn’t a replacement coming. Sandboxing is what would take time, however…

Since the modules will only be open source, the addition of sandboxing won’t be relevant to the groups and services affected today.


I was just referring to what was stated earlier on in the post, but thank you for clarifying.

The costs of working on that outweigh the benefits I guess.

Hi there,

(If this can be answered:)
Does Roblox currently intend to further increase support for intellectual property protections on the Roblox platform? If so, what sort of things is Roblox thinking about, since this method is now gone?

That could be expansion of licenses for assets…

An “original asset” program, which allows developers to identify their product as the original version, for those whose assets are commonly abused.

Or another idea that may come up, from the community or Roblox staff.

As much as I may love the community, I feel it’s important to protect my product and allow fair competition. If everything were perfect I wouldn’t need to worry about this sort of thing, but of course it’s not, and I fully believe in protecting my product and not letting people use my work to create unfair competition.
I want to distribute my code under contract and make sure anyone who hasn’t signed an agreement does not have access to versions of our code that are readable or functional.


If there is a pop-up explaining that the opt-in could possibly be dangerous and the player specifically hits “agree” to the risks, then they cannot claim that they’re not liable, because they are actively agreeing, whether or not they can tell what the module is doing. I would understand your argument if they just clicked a check box and nothing popped up explaining the risk they’re taking, but I’m advocating for a warning window that you must first accept. This window can explain that the developer of the module can suddenly change the code to be malicious.

I’m not asking you to make an opt-in option permanent. It only needs to exist until 6-12+ months from now when a replacement is created.

Literally every single instance that you bring up closed source modules, you’re talking about them doing something malicious. Not once did you explain why exactly developers need to have visibility into all non-malicious third party code. I personally understand why many developers would want such visibility, but there are legitimate circumstances where complete visibility is not applicable. Those circumstances have been mentioned many times on this thread.

Edit: Also, if this isn’t just about maliciousness but also visibility, how exactly is the problem of “visibility” going to change 6-12+ months from now when you guys create a replacement?


I fail to see how a game creator consenting to the use of Private Modules doesn’t put them at fault for malicious modules in their game. That is why it would be an “Opt In” feature.


I am not understanding: if the point of this update is not about preventing malicious code but only about what code developers are importing, couldn’t you just not import it? Users should have a choice whether or not they want to use private code.

If the require function is obscured or a getfenv function is used, ROBLOX should display a notification saying the place uses require, closed source or not.
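Detection along these lines could in principle be approximated today with a Studio plugin that scans script source for the relevant patterns. The sketch below is purely illustrative (the pattern list and function names are hypothetical, not a Roblox API), and a textual scan like this is easy to defeat with determined obfuscation, so it can warn but never guarantee safety:

```lua
-- Illustrative sketch: flag scripts whose source mentions require-by-ID
-- or getfenv. Both names and patterns here are assumptions for the sake
-- of the example; a real obfuscated script can evade a textual scan.
local SUSPICIOUS_PATTERNS = {
	"require%s*%(%s*%d+", -- require(1234567): loading a module by asset ID
	"getfenv%s*%(",       -- environment access, often used to hide calls
}

local function scanSource(source)
	local hits = {}
	for _, pattern in ipairs(SUSPICIOUS_PATTERNS) do
		if source:find(pattern) then
			table.insert(hits, pattern)
		end
	end
	return hits -- non-empty means "show the notification"
end

return scanSource
```

A plugin could run this over every Script and ModuleScript in a place and surface a warning dialog when any pattern matches, which is roughly the notification being asked for here.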

[I also thank you for the official statement]


Exactly. If at any point during the owner’s opt-in there happens to be a backdoor in action, they can easily opt out and see if it’s still happening. If not, guess and check which module you’re requiring. It’s not hard.


Yeah, demonizing closed-source modules is why this thread erupted into debate. This all boils down to two subcultures on Roblox: those who make games (who were threatened by malicious outliers) and those who provide services (who used modules benevolently). Now it’s the service-providers who are threatened. We shouldn’t have to punish one half of the community to save the other. Compromise, please?


It doesn’t matter if the developer doesn’t care about what’s been put in their game. Roblox is taking steps to take away power from malicious users, and one of the big steps was last year when the awful experimental mode finally got removed. Sadly, the removal of experimental mode means nothing if games are just going to have back doors in them anyway.

This change isn’t to help developers not put unsafe code in their places. It’s to protect all the players who don’t choose what disturbing things may happen in a game they enter due to the carelessness of the developer. Roblox needs to be able to hold developers accountable for what happens in their game, and private modules don’t let that happen.

I think Roblox has full rights to hold the developer accountable. The developer is the one who created the game. Any content they decide to put in the game is entirely their fault, even if done by accident. I am for increasing the security on this, but removing it as an option entirely is not an acceptable solution. Roblox can simply add a log every time require() is called on an asset, so devs are aware if unfamiliar modules are being called. They can allow devs to whitelist assets. Yay for security; boo for removing a feature entirely when there are plenty of solutions.
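The whitelist idea is easy to sketch at the game level, even without engine support. Assuming a game that funnels all module loading through one wrapper (the wrapper name and whitelist table below are hypothetical, not anything Roblox provides), it might look like:

```lua
-- Hypothetical sketch: funnel all module loads through one audited
-- wrapper. The whitelist of approved asset IDs is maintained by the
-- developer; anything else is logged and refused.
local APPROVED_ASSETS = {
	[1234567] = true, -- e.g. an admin-commands module you have vetted
}

local function safeRequire(assetId)
	if not APPROVED_ASSETS[assetId] then
		warn(("Blocked require of unapproved asset %d"):format(assetId))
		return nil
	end
	print(("Loading approved asset %d"):format(assetId)) -- audit log
	return require(assetId)
end

return safeRequire
```

The obvious limitation, and why the post asks for engine-side logging instead, is that this only covers code the developer wrote themselves; a malicious free model can still call the raw require directly.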


How is a developer responsible for using code when they can’t see what it does? The module could be fully functional and do what it says it does while also being a back door. It’s not up to the developer to know code is malicious if they can’t even see the code. That’s a ridiculous statement.


I think a lot of people here, mostly in their own interest, ignore the fact that Roblox is above all else invested in being a safe place for children. Features that allow for black-box code in games are inherently not safe for children.

Regardless of whether the developer is at fault or not, Roblox and its image are held accountable for anything bad that happens on the platform.

It risks situations that could be PR nightmares for Roblox, as well as other serious repercussions if the company were deemed “not trying hard enough” to prevent such issues. Worse yet, issues could put countless “good users”, who trusted modules they had no prior reason to distrust, in a bad situation.


Except Roblox has been through some tough PR for inappropriate content like that before, and it’s something they take very seriously, as any respectable kid-friendly company should. To say these things were not PR nightmares for them shows a willing ignorance of the effort they go through to keep their site respectable for all audiences.

That’s the point though, it’s very hard to identify if something you have no ability to view the source code to is doing something it shouldn’t. That heavily hampers the ability to simply delete or report a closed source module.

Roblox has to think of the platform as a whole, not just what developers looking out for their own private interests want from the platform. Roblox can be held accountable for the actions of people on their platform if they didn’t take reasonable steps to prevent it, and allowing users to upload unaudited content (that isn’t reasonable to audit as a company) that can only be viewed/moderated by the company can put them in liability risks.

I’ve worked in Customer Service for over 3 years, and a flaw can exist for years before suddenly costing a company a fortune, irreversible reputation damage, and worse, once a strong enough case is built against them. I’ve been with companies when it happened to them.

Roblox wouldn’t be able to defend themselves by saying, “We take great lengths to protect our users and hold developers to a strict community standard as part of our Terms of Service,” if they were not allowing those developers to review the code in their own games. This shifts blame reasonably to them as a company, as they are not allowing users on the platform to review the content prior to, or even during, use.

That is a huge flaw. They’ve clearly stated the security risks of private modules are not a risk they are willing to take. Although they haven’t talked details on what they see those risks as, they don’t really need to as there are plenty of reasonably considered risks.


How can you tell what is causing it if you can’t audit the code? These things can be updated remotely, so you may not have even edited your game in months, and then suddenly the issue could start.

You keep asserting it’s easy, but let me posit this: Out of every month, 2 days are selected in which 1/100 users who visit the place are exposed to inappropriate content, but never the place’s owner and never in studio.

As the developer, discovering that this is an issue, and what’s causing it, would be nearly impossible. You’d have to immediately gut your place of anything that required a module you didn’t personally make. Testing where it came from could take months, since the issue is so sparse. Unbeknownst to you, the malicious user also disables this code from time to time, since he knows you’ve been made aware of the issue. Now you need to hope that when you are testing one of these assets, it happens to be during the window when it does this activity AND the malicious user happens to have it enabled.

Meanwhile the impacts on you are great (as single users reporting your place for this could result in you being banned), and it could cause huge damage to Roblox’s image (let me remind you that in July of 2018 there was a large PR backlash after one mother saw one instance of inappropriate imagery on the Roblox platform).

Again, Roblox can be held legally responsible for such things if it’s deemed they are not doing enough to protect users, especially in the case where they can’t make the argument, “Developers are responsible for vetting the content they publish” which they can’t make if they allow content only Roblox is capable of vetting.


Why does being able to see the code of a script mean being able to know that it’s causing it? Delete/disable it and then you will know…?


You can’t know if you can’t see the code and the behavior isn’t consistent.

Being able to see the code means the developer can review it. If they can’t understand it, Roblox has reasonable grounds to say, “It was the user’s fault for using the code, as we gave them the ability to audit it,” whereas they have no grounds to say that if only Roblox can audit it. And experienced developers can review the code and confirm whether it’s legitimate or doing something wrong.

There is a long history of developers, now found on this forum but dating all the way back to the original Roblox forums, who would help review popular scripts and report to the community when they found things that were hinky, or just plain malicious. You can’t do any of that if no one can see the code.

You seem insistent on grabbing single sentences and then questioning them even when they have been already answered and frankly I’ve said my whole piece at this point. Roblox has already stated that private modules are not a security risk they are willing to take as a platform.

If Roblox creates a system where they are the sole party who can review user-created content, they accept responsibility for what that content does. They can say they don’t, and users can agree they don’t, but they can still be held to a legal standard for their business conduct and their platform’s content.

A keen example: storefronts don’t make the content they sell, but they can still face lawsuits if they don’t take measures to protect their users from fraud, or if they expose children to inappropriate content without adequate protections.

In the same way, Roblox can be held accountable for not providing children adequate protection against inappropriate content distributed through their platform if they can’t prove they have appropriate measures in place to combat it. They don’t have appropriate measures to combat code only they can vet.