I might be wrong about this, but assuming you’re referring to library access, your own modules get statically linked if they’re also compiled with native codegen.
However, it does not change the implementation of code that is already provided to your script by Luau libraries …, the Roblox engine …, or other module scripts you require that don’t have a --!native annotation.
So hypothetically you could run the resulting code barebones if you don’t need any VM/engine libraries.
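For reference, opting in is just a comment directive at the top of the script, and as noted above it does not propagate through require (module names below are illustrative):

```lua
--!native
-- This script is compiled with native codegen.

-- Requiring a module does NOT make it native; the module itself
-- must also start with --!native to be compiled natively.
local MyModule = require(script.Parent.MyModule) -- hypothetical module
```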
Great idea; high-level languages suffer from slow interpreters.
I’m curious, what are the actual downsides of adding this to every script?
Wouldn’t this reduce experience size and increase performance?
To be perfectly honest, I thought this was how it was done before this was implemented!
Also, just checking to make sure: this works well with Parallel Luau, right?
I’m curious, what are the actual downsides of adding this to every script?
Wouldn’t this reduce experience size and increase performance?
The native code is significantly less compact than bytecode and as such takes more memory. This is not a big problem when you have a small script (e.g. 1,000 lines of code) that is doing a lot of work, but it can be a problem if you have 100,000 lines of code that mostly don’t benefit from native code compilation. (And if you think 100K lines is a lot, the Roblox app is somewhere around 1M…)
Also, just checking to make sure: this works well with Parallel Luau, right?
Yup - fully compatible with parallel script execution. We have more optimizations planned in the future when these features are used together.
I’m usually not a fan of the ‘automatic’ approach, as it doesn’t give us much control. But when it comes to compilation, this is absolutely the right way to go.
The compiler knows best.
Very interested in seeing how this feature will improve over time.
Just did a quick benchmark with a serializer I wrote.
Pre-native codegen, serialization took about 1.7 seconds on average.
Post-native codegen, serialization took about 1.5 seconds on average.
Enabling the preview saves us between 100-300ms when serializing with a few hundred MBs of data. Pretty good!
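For anyone who wants to reproduce this kind of comparison, a minimal timing harness in Luau could look like the sketch below; `serialize` and `data` are stand-ins for your own function and payload, and you would run the same workload once in a --!native script and once in a plain one:

```lua
-- Minimal A/B timing sketch. `serialize` and `data` are hypothetical
-- placeholders for the workload being measured.
local function bench(label, fn, ...)
	local t0 = os.clock()
	fn(...)
	print(string.format("%s took %.3f s", label, os.clock() - t0))
end

bench("serialize", serialize, data)
```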
So if I’m reading this right, it’s not that this feature speeds up individual things like events, built-in functions, etc., but instead it will let a big block of code that normally has a lot to do run faster?
An example:
function gets all players.
function performs a bunch of checks.
function puts all players in a separate table.
function loops through the table and does some more checks.
Although it’s an impractical example, is this something this feature is going to improve, and if so, would this mainly reduce the load the server would normally have?
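The steps above might look something like this in Luau (all names and checks are illustrative); tight loops over tables with repeated checks are exactly the kind of code native codegen targets:

```lua
--!native

local Players = game:GetService("Players")

local function processPlayers()
	local candidates = {}
	-- get all players and perform a bunch of checks
	for _, player in Players:GetPlayers() do
		if player.Character then -- illustrative check
			table.insert(candidates, player)
		end
	end
	-- loop through the separate table and do some more checks
	for _, player in candidates do
		-- additional per-player work here
	end
end
```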
This is great! However, will the code be compiled ahead of time or on every launch? And if it is per launch, will there be any noticeable delays (specifically when joining experiences)? Is this the primary reason code is selected based on a presumed performance increase, instead of compiling every line?
Will we have control over what parts of our code can be optimized with this? I can see that control being important, as not every case would be caught by the automatic mechanism. Having control over exactly what is compiled natively would also be a good way to prevent over-generation.
Since you mention debugging is lost when using this, can an option be added somewhere to the topbar to disable all of this in studio when you need to debug something? This would be significantly less annoying than having to manually remove the flag from N scripts.
Generally speaking, no - we’re heavily prioritizing compilation speed in our implementation. Of course ultimately it will depend on the amount of script code using this feature. When we get closer to a release we will publish our findings in terms of impact here.
Yes, we plan to have both better automatic heuristics as well as developer control over whether this is active.
In general, in our experience changes that make interpreted execution faster also make native code faster, so maybe not? It depends on the types of optimizations, but all fundamental rules such as “fewer calls” and “fewer allocations” still hold.
You can disable the beta feature if you want to debug native scripts. Note that we intend to restore debugger compatibility in the future.
I mean specifically for when this is no longer in beta. If debugging is not possible before that happens, a toggle for this feature in studio would be very appreciated. Regardless, that’s good to hear.
Wow, that’s crazy! I never thought we’d get native code generation this soon; this will definitely push Roblox to new boundaries with faster code! I tried it out on my pathfinding algorithm, and I can confidently say that I’m getting over a 30% performance boost.
This comes in at an amazing time. I was about to write some pretty resource intensive pedestrian/traffic simulation scripts for my game, so knowing that at some point in the future they’d get massive performance boosts essentially for free is really nice.
Will this restriction be lifted in the future? Or are there limitations (e.g., App Store policies) that prevent this from being implemented outside of Studio?
Why would it be an App Store policy? They’re just testing it and actively working on it, so it won’t be in production games until it’s production-ready, just like every other beta feature.
There is an argument to be made about native code running on locked-down devices such as iOS, where code cannot be verified at the time of app publishing. While Roblox is sandboxed, you can never be sure there isn’t a security hole somewhere that opens up access to the system in some way.