Then you didn't read the post - this has nothing to do with typing or variables; rather, it changes how some of your code executes entirely.
You did not read the post at all. It has nothing to do with that.
Great idea, HLLs suffer from slow interpreters.
I'm curious, what are the actual downsides of adding this to every script?
Wouldn't this reduce experience size and increase performance?
To be perfectly honest, I thought this was how it was done before this was implemented!
Also, just checking to make sure that this works well with Parallel Luau, right?
> I'm curious, what are the actual downsides of adding this to every script?
> Wouldn't this reduce experience size and increase performance?
The native code is significantly less compact than bytecode and as such takes more memory; this is not a big problem when you have a small script (e.g. 1000 lines of code) that is doing a lot of work, but it can be a problem if you have 100’000 lines of code that mostly don’t benefit from native code compilation (and if you think 100K lines is a lot, Roblox app is somewhere around 1M…)
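For illustration, the kind of small, hot script that benefits is a tight numeric loop like the made-up sketch below, whereas thousands of lines of event-wiring glue mostly wouldn't be worth the extra memory:

```lua
--!native
-- Hypothetical hot path: a tight numeric loop is a good candidate for native codegen,
-- while the extra memory cost would be wasted on code that runs rarely.
local function sumOfSquares(n: number): number
	local total = 0
	for i = 1, n do
		total += i * i
	end
	return total
end

print(sumOfSquares(1_000_000))
```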
> Also, just checking to make sure that this works well with Parallel Luau, right?
Yup - fully compatible with parallel script execution. We have more optimizations planned in the future when these features are used together.
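A minimal sketch of what combining the two might look like, assuming the Script is parented under an Actor (which parallel execution requires):

```lua
--!native
-- Sketch only: this Script is assumed to live under an Actor so it can run in parallel.
local RunService = game:GetService("RunService")

RunService.Heartbeat:ConnectParallel(function(deltaTime: number)
	-- Compute-heavy work here runs off the serial thread and is natively compiled.
	local total = 0
	for i = 1, 100_000 do
		total += math.sqrt(i) * deltaTime
	end
end)
```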
This is absolutely incredible.
I’m usually not a fan of the ‘automatic’ approach, as it doesn’t give us much control. But when it comes to compilation, this is absolutely the right way to go.
The compiler knows best.
Very interested in seeing how this feature will improve over time.
I see! I had misunderstood the file size of bytecode vs. source code.
Thanks for the clarification, and great update!
That seems very exciting! I definitely see this making 2D physics engines and lighting computation much more feasible in the near future.
Just did a quick benchmark with a serializer I wrote.
Pre-native codegen, serialization took about 1.7 seconds on average.
Post-native codegen, serialization took about 1.5 seconds on average.
Enabling the preview saves us between 100 and 300 ms when serializing a few hundred MB of data. Pretty good!
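For reference, the numbers above come from a plain os.clock() wrapper along these lines; Serializer and payload stand in for my own module and test data, which aren't shown:

```lua
-- Rough timing harness behind the averages above; nothing fancy.
local function benchmark(fn: () -> (), runs: number): number
	local total = 0
	for _ = 1, runs do
		local start = os.clock()
		fn()
		total += os.clock() - start
	end
	return total / runs
end

-- Usage (Serializer and payload are placeholders for my own code):
-- print(("average: %.3f s"):format(benchmark(function() Serializer.serialize(payload) end, 10)))
```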
So if I'm reading this right, it's not that this feature speeds up individual things like events, built-in functions, etc., but instead it will allow a big chunk of code that normally has a lot to do to run faster?
an example:
function gets all players.
function performs a bunch of checks.
function puts all players in a separate table.
function loops through table and does some more checks.
Although that's an impractical example, would this be something that this feature is going to improve, and if so, would it mainly reduce the load the server would normally have?
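Something roughly like this sketch is what I have in mind (the specific checks are made up):

```lua
-- Illustrative version of the steps above; the filter conditions are arbitrary.
local Players = game:GetService("Players")

local function processPlayers()
	-- gets all players, performs a bunch of checks, puts them in a separate table
	local eligible: { Player } = {}
	for _, player in Players:GetPlayers() do
		if player.Character and player.AccountAge > 30 then
			table.insert(eligible, player)
		end
	end

	-- loops through the table and does some more checks
	for _, player in eligible do
		if player:GetAttribute("Score") ~= nil then
			-- do some per-player work here
		end
	end
end
```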
This is great - however, will the code be compiled ahead of time or on every launch? If it's per launch, will there be any noticeable delays (specifically when joining experiences)? Is this the primary reason why code is selected based upon a presumed performance increase instead of compiling every line of code?
Will we have control over what parts of our code can be optimized with this? I can see that control being important, since not every case will be caught by the mechanism. Having control over what exactly is native would also help prevent over-generation.
Will this make Luau micro-optimizations less worth it?
Absolutely tremendous. This is fantastic to see.
Since you mention debugging is lost when using this, can an option be added somewhere to the topbar to disable all of this in Studio when you need to debug something? This would be significantly less annoying than having to manually remove the flag from N scripts.
Generally speaking, no - we’re heavily prioritizing compilation speed in our implementation. Of course ultimately it will depend on the amount of script code using this feature. When we get closer to a release we will publish our findings in terms of impact here.
Yes, we plan to have both better automatic heuristics as well as developer control over whether this is active.
In general, in our experience changes that make interpreted execution faster also make native code faster, so maybe not? It depends on the types of optimizations, but all fundamental rules such as “fewer calls” and “fewer allocations” still hold.
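For example, hoisting allocations out of a hot loop is the kind of micro-optimization that still pays off in both modes; here is a made-up sketch, not something from the post:

```lua
-- Allocating version: creates a new table on every iteration.
local function slowSum(points: { { x: number, y: number } }): number
	local total = 0
	for _, p in points do
		local offset = { x = p.x + 1, y = p.y + 1 } -- fresh allocation each time
		total += offset.x + offset.y
	end
	return total
end

-- Allocation-free version: faster whether interpreted or natively compiled.
local function fastSum(points: { { x: number, y: number } }): number
	local total = 0
	for _, p in points do
		total += (p.x + 1) + (p.y + 1)
	end
	return total
end

print(slowSum({ { x = 1, y = 2 } }), fastSum({ { x = 1, y = 2 } }))
```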
You can disable the beta feature if you want to debug native scripts. Note that we intend to restore debugger compatibility in the future.
I mean specifically for when this is no longer in beta. If debugging is not possible before that happens, a toggle for this feature in studio would be very appreciated. Regardless, that’s good to hear.
Wooow, that's crazy! I never thought we would get native code generation this soon. This will definitely push Roblox to new boundaries with faster code!! I tried it out on my pathfinding algorithm and I can confidently say that I'm getting over a 30% performance boost.
[Screenshot: without native code]
[Screenshot: with native code]
Amazing work!
This comes at an amazing time. I was about to write some pretty resource-intensive pedestrian/traffic simulation scripts for my game, so knowing that at some point in the future they'd get massive performance boosts essentially for free is really nice.
Will this restriction be lifted in the future? Or are there limitations (e.g., App Store policies) that prevent this from being implemented outside of Studio in the future?
Why would it be an App Store policy?? They are just testing it and are actively working on it, so it won't be in production games until it's production-ready, just like every other beta feature…?
There is an argument to be made about native code running on locked-down devices such as iOS that cannot be verified at the time of app publishing. While Roblox is sandboxed, you can never know if there isn't a security hole somewhere, opening up access to the system in some way.
As mentioned in the post, our initial target for the production release is Studio and server scripts. So the goal is to implement it outside of Studio in the sense that it will work in production games, just not on clients initially due to a lot of extra complexity we’d need to resolve with that.
I've enabled Luau Native Code and added the --!native tag at the beginning of all of the scripts I intend to use this with, but I'm actually seeing no improvement in terms of performance.
Here's with --!native in all Scripts and ModuleScripts involved:
And here is without --!native:
This is a boid simulation with about 250 entity simulations running each frame. It heavily relies on arrays. I’m curious if maybe I’m doing something wrong? I believe I’ve followed the setup correctly, and I made sure Studio is up to date. I’m surprised to see no change at all so I’m guessing I’ve just made some mistake somewhere in the setup.
I will add that all of my code uses --!strict as well and should all be type-checked properly.
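For reference, every Script and ModuleScript involved starts like this; the update function is only a stand-in skeleton, not my actual boid code:

```lua
--!strict
--!native

-- Top of each script in the simulation (skeleton only).
local Boid = {}

function Boid.update(positions: { Vector3 }, velocities: { Vector3 }, dt: number)
	for i = 1, #positions do
		positions[i] += velocities[i] * dt
	end
end

return Boid
```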