Luau Native Code Generation Preview Update

Hello Creators!

Back in August 2023, we announced Luau Native Code Generation Preview [Studio Beta].

We are happy to announce that the preview of this feature is now available on Roblox servers and in Roblox Studio without having to enable a Beta feature!

As a refresher, native code generation is the feature that compiles Luau scripts to machine code that the CPU understands and executes directly, instead of executing them by interpreting bytecode inside the Luau VM.

For more information, including when it’s beneficial to enable native code, we recommend reading both the previous topic as well as the new documentation page: Native Code Generation.

Changes from the Studio Beta

Since the release of the Preview as a Studio Beta feature, we have worked on improving code performance, memory use, correctness, and stability, as well as integration with the tooling available in Roblox Studio.

Some highlights:

  • Improved performance of the bit32 library functions
  • Improved performance of numerical loops
  • Optimized table array and property lookups
  • Added native support for new buffer type operations
  • Code optimizations based on knowing which types are returned from operations
  • Code optimizations based on function argument type annotations
    • This includes support for SIMD operations for annotated Vector3 arguments
  • Native functions are now marked in the Script Profiler
  • Code size of native functions is now included in the Luau Heap Profiler
  • Native code is now compiled once and reused for all Parallel Luau actors
  • Breakpoints and stepping are now supported in native scripts
    • Note that native execution is disabled for functions with breakpoints
    • You might not be able to see locals of natively executing stack frames
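The annotation-driven optimizations above can be sketched with a small example. This is an illustrative sketch, not code from the announcement; the function name and math are hypothetical:

```lua
--!native
-- Annotating arguments as Vector3 lets native codegen pick vector
-- fast paths, including SIMD, instead of generic table/userdata checks.
local function reflect(v: Vector3, n: Vector3): Vector3
	-- Standard reflection formula: v - 2 * (v . n) * n
	return v - 2 * v:Dot(n) * n
end
```

With the `--!native` directive at the top of the script and the argument annotations in place, calls like `reflect(ray.Direction, surfaceNormal)` can run on the specialized vector path.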

There are many other small improvements in generated code performance and size, and we still have a long road ahead of us as we continue delivering optimizations.

Upcoming features

While we’ve improved a lot, some things mentioned in the original announcement are still in progress.

  • By default, we still do not compile any scripts to native code automatically. To enable native code generation, you need to put the --!native directive at the top of your script.
  • We’ve extended our use of argument type annotations and added some automatic type inference, but we are still missing support for type annotations on locals.
  • Support for Vector3 has improved for function arguments, but many examples still do not show a performance improvement. We recommend measuring, and not using native code if performance doesn’t improve.
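The current state described above can be illustrated with a short sketch (the names here are hypothetical, not official guidance): argument annotations inform native codegen today, while annotations on locals do not yet.

```lua
--!native
-- Argument annotations like these are used by native codegen today:
local function lengthSquared(v: Vector3): number
	return v.X * v.X + v.Y * v.Y + v.Z * v.Z
end

-- An annotation on a local is not yet used for code specialization:
local scale: number = 0.5
```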

Your feedback

If you have already started using native code generation during the Studio Preview, we would love to hear your experience and results with it.
What are the improvements that you are seeing? What are the things that are missing or don’t seem to improve where you expected?
Just like in the Studio Preview announcement, we’re on the lookout for things to prioritize for improvement.

In conclusion, we thank the team members that helped deliver this feature: @DyadicTensor, @lambdazz, @machinamentum, @nullptrchk, @rep_movsb, @welblander, @WheretIB and @zeuxcg!

And a thank you to all who tried the Beta preview out and provided feedback for us to improve.


But when will Vector3 support improve?

  • While we aimed for improvements to be delivered in December, this turned out to be far more challenging than expected. In our updated roadmap, we expect improvements in the coming months, with additional improvements later this year.

When will native execution come to the Roblox Client?

  • We are looking into additional opportunities to bring it to more platforms.

Should I enable --!native for all scripts in the experience?

  • We recommend measuring performance and only using --!native for scripts where you see benefits from this feature. Use the Script Profiler, which also works on live Roblox servers, to measure performance!


Huge update! I’m excited to see where this goes.

I’m curious: if I had a fully typed open-source library, would it be worth going through and adding a --!native directive at the beginning of every single module? This module uses type assertions to approximate more complex types for classes as well.

How would this affect performance when imported into a project that does not use native code generation? Would it perform the same? Just curious whether there’s an immediate benefit to enabling this for third-party libraries, since that’s usually where data structures and abstractions live, and where I could see native generation benefiting the most.


Big day for AxisAngle.

I think I know the answer to this, but is there a world where we’re able to compile part of a script as native but the rest of it normally? It’d be helpful for reducing code size without having to split modules up (which prevents inlining).


That’s essentially what motivated this RFC: RFC: Native Attribute for Functions by aviralg · Pull Request #31 · luau-lang/rfcs · GitHub.


Will the type optimisations ever be possible on tables given that they’re almost impossible to generically type right now, or will this only apply to C userdatas and primitives?

I cry for the day we get bitwise operators.


Amazing! Excited to see where this goes in the future.



The wording for upcoming features is a bit unclear to me. Will all server scripts be natively compiled by default in the future?



Sounds cool, so adding --!native to code will improve performance in some cases?

Edit: to clarify, I don’t mean it as some magic attribute to speed up everything, just as a general question



Do you mean lookups that go through the hash portion of the table?


By default (today), native code generation assumes that t.Prop or t[n] refers to a table.
So it’s most important to annotate arguments that are either a Vector3 (vector) or some other Roblox type (userdata), so that the fast path targets that type instead of first attempting a table access.
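For example (a hypothetical sketch, with an illustrative function name): without the annotation below, native code would first try the table fast path for the property access.

```lua
--!native
-- Annotating `part` as a userdata-backed Roblox type steers the
-- property-access fast path away from the table case.
local function getHeight(part: BasePart): number
	return part.Position.Y
end
```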


So, I can put that native flag in all of my scripts, and expect generally better performance results?


We are exploring opportunities to make automatic decisions, but for now, we follow explicit selection by the developer.
Some developers suggested that even with automatic selection, if that happens in the future, manual annotations should remain for better control.


I’ve already played around with the Luau Native Beta and the performance gains are seriously good, especially for my server-side generation systems.



We have more info in the documentation: Native Code Generation | Documentation - Roblox Creator Hub

Today, we recommend avoiding the directive for scripts that don’t do a lot of computational work, or that spend most of their time in Roblox API calls. For example, this feature will not make raycasting faster.

In general, we are trying to make sure the feature is never slower, but other things to consider are slightly slower server startup time (to compile code in the new mode) and increased memory use.
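A minimal way to measure, as suggested above, is to time the same workload with and without the directive. This is an illustrative sketch; the function name and workload are made up:

```lua
-- Illustrative micro-benchmark: run the same workload in two copies of the
-- script, one with --!native at the top and one without, and compare timings.
local function workload(): number
	local sum = 0
	for i = 1, 10_000_000 do
		sum += math.sqrt(i)
	end
	return sum
end

local start = os.clock()
workload()
print(("workload took %.3f s"):format(os.clock() - start))
```

For real measurements, the Script Profiler mentioned earlier is the recommended tool, since it also works on live servers.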


Similar considerations apply as we’ve mentioned in other comments.

If the library has benchmarks to compare against, I would recommend checking those out.

We do hope libraries will be a good use case, so that many developers can easily get an improvement just by updating, without having to change their own code. However, the total memory allowed for native code is limited and shared across all scripts, whether custom or from a library.


Consider reading the whole post before asking questions!


Is this implying that this may be a gradual rollout, e.g. support for computers (or a particular OS like Windows or Mac), then eventually support for mobile devices?


Absolutely huge update. Loving this!

Personally I’m excited to see buffer released. Bitpacking data more effectively! It might couple well with future APIs and much more! Also, I may be wrong, but this could help with code inference.


Yippee! Now things like this really excite my autistic efficiency-obsessed brain.
Can’t wait for this to come to clients so I can code game logic with performance close to C++.

Now, knowing myself I’m likely gonna use this in almost every single script that does a bunch of math to see how much I can increase the frame rate.

This will especially be useful for AI logic, algorithms and other complex nonsense.

Overall this is a huge and very useful update!