Now, does this imply that it doesn’t make raycasting faster at all?
Could it improve performance, though, if used on a projectile simulation or custom character physics system that relies heavily on doing hundreds of raycasts per second for every existing character/projectile?
I’ve considered writing a custom physics engine for humanoids and projectiles that would be HEAVILY raycast-reliant for about 90% of the action, then perform a bunch of vector/CFrame math on the results.
It doesn’t make raycasting faster because the implementation of :Raycast is already written in C++ (with some handwritten SIMD code); it can’t get any more native than it already is!
What native codegen speeds up is the case where you’re doing significant computational work in your Lua code. If your Lua code just calls an engine function that goes off and does the hard computational work for you, native codegen won’t change much.
So, TL;DR: native codegen could potentially make whatever you do with the raycast results faster, but it won’t make the :Raycast call itself faster.
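To make that concrete, here’s a minimal sketch of a projectile step (names and structure are purely illustrative, not from any real implementation): the :Raycast call stays engine-side and is unaffected, while the surrounding vector math is the part `--!native` can accelerate.

```lua
--!native
-- Hypothetical sketch: the Raycast call itself won't get faster,
-- but the Lua-side math around it can.
local function reflectVelocity(velocity: Vector3, surfaceNormal: Vector3): Vector3
	-- Standard reflection formula v' = v - 2*(v.n)*n;
	-- this is the kind of arithmetic codegen speeds up.
	return velocity - 2 * velocity:Dot(surfaceNormal) * surfaceNormal
end

local function stepProjectile(origin: Vector3, velocity: Vector3, dt: number)
	local result = workspace:Raycast(origin, velocity * dt) -- engine-side, unaffected
	if result then
		return result.Position, reflectVelocity(velocity, result.Normal)
	end
	return origin + velocity * dt, velocity
end
```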
With the type annotation performance improvements, will implicitly inferred/union types also be fine, or do I basically have to annotate everything that isn’t a primitive type? (Other than Vector3, which has been explained before; I’m talking about CFrames mainly.)
Also, any news on Vector2 receiving native support too? I know it’s been planned for a while, but I haven’t really heard anything about it since then.
Edit: now that I think about it, an optional warning for implicit any types could really help as well, so I don’t have to look through everything to see if I messed up some typechecking.
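For context, here’s a minimal sketch of the kind of annotation in question (whether inference alone is enough for codegen is exactly the open question here; the function names are made up):

```lua
--!native
--!strict
-- Explicitly annotated: the types are spelled out for codegen.
local function lerp(a: number, b: number, t: number): number
	return a + (b - a) * t
end

-- Implicitly typed: in strict mode the return type is inferred as
-- number with no annotation; does codegen benefit equally here?
local function midpoint(a: number, b: number)
	return lerp(a, b, 0.5)
end
```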
So if I have single script architecture and hundreds of modules, how am I supposed to know what modules to enable --!native on? Script Profiler doesn’t report for modules.
I do wonder though, when does this codegen take place?
Is this at the start of a game? Whenever a script is activated/enabled?
When does it happen for modules? Do I have to require() them first before the codegen happens?
And if so, could it cause a potential hitch or freeze of a few milliseconds if a HUGE module were to be suddenly codegen’d on the spot?
I have so many questions about how it happens.
Is it JIT’ed, or does it pre-compile on Roblox servers and just send the already-compiled code over to the Roblox client?
Also, I heard native code somehow uses slightly more memory than bytecode? I wonder what the cause of that is.
I used to think that natively compiled programs were smaller because they don’t have to include a whole VM or runtime library.
Yes, that’s one of the reasons why you have to explicitly request codegen with the annotation right now: doing the codegen adds additional upfront cost at load time, so you should only enable it on modules where you measure enough of a performance benefit.
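For anyone unfamiliar with the opt-in, it’s just a comment directive at the top of the module; a minimal sketch (the module contents are illustrative):

```lua
--!native
-- Opting this module into native codegen. This adds upfront compile
-- cost at load time, so measure before and after enabling it.
local Heavy = {}

function Heavy.sumOfSquares(n: number): number
	local total = 0
	for i = 1, n do
		total += i * i
	end
	return total
end

return Heavy
```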
So both LocalScripts and server Scripts will be compiled natively in actual production games now? I have a script that could greatly benefit from this; just making sure it’s now extended for game use.
I see, thanks for the informative response!
Now I forgot to ask one more thing.
What’s the current state of codegen when metatables are heavily utilized?
If I (for some reason) wanted to achieve MAXIMUM speed, performance, and optimal memory management, should I still use metatables for an object-oriented style of programming, or should I completely avoid metatables and go 100% functional instead?
Does native codegen have any specific practices involving tables, dictionaries, arrays, etc to get optimal performance?
Besides just making basic math operations faster, I’m really curious where codegen REALLY shines bright.
I’ve been teaching myself to use more typed variables and type checking in my code to sort of “future proof” it in case codegen (or the interpreter itself) gets more optimized for typed variables.
I’ve begun coding more or less similarly to how I’ve used languages like C++ and C#, where everything is usually statically typed and where you can make classes and structs to contain data and functions.
The feature is not available on the client yet, so we will not compile LocalScripts natively.
We will post updates if anything changes in this area in the future.
The feature can be used right now on the servers and in Studio plugins.
We have seen improvements in terrain tools in testing and are planning to use it there in the future.
We support metatables in obj:func calls.
I wouldn’t expect to see much improvement with __index/__newindex, as the existing implementation of those is already pretty good.
I would say especially good improvements are seen with math, bit32 and buffer libraries, and plain tables with no metatables.
We are experimenting with some exciting stuff around Vector2/Vector3/CFrame/Color3, but we need more time to finish that work.
Do you have a module that performs a lot of computation? Start there, try benchmarking it before and after putting --!native and see if it improves performance.
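A rough before/after measurement can be as simple as timing a loop around the hot function; a sketch (the module name and function are hypothetical, and os.clock resolution makes this approximate):

```lua
-- Hypothetical benchmark harness: run once with --!native at the top
-- of HeavyModule, once without, and compare the timings.
local HeavyModule = require(script.Parent.HeavyModule)

local start = os.clock()
for _ = 1, 100 do
	HeavyModule.doComputation()
end
local elapsed = (os.clock() - start) * 1000
print(("100 iterations took %.3f ms"):format(elapsed))
```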
When this comes to the clients, performance will significantly increase, actually unlocking the possibility for custom things like those hacky volumetric lighting solutions to perform a lot better, unless the biggest overhead there is the billboards and such…
Another thing is that some games move their leaves and trees with a very hacky, super inefficient method that’s extremely heavy on performance, since Roblox doesn’t currently have a way to move them realistically and performantly. With native code generation, such a thing would perform a lot better than it does now. Of course, nothing will come close to a native engine implementation.
We made it significantly faster in the last couple years (I believe almost 10x). It could always be faster still but a lot of the low hanging fruit has been picked at this point.
There are additional APIs we could consider in the future such as a piercing raycast returning all the hit parts along a path though, which would offer more performance for some tasks.
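In the meantime, a piercing raycast can be approximated in Luau by re-casting with a growing exclusion filter; a rough sketch (it assumes Enum.RaycastFilterType.Exclude and pays the cost of one :Raycast per hit, which is exactly what a built-in API could avoid):

```lua
-- Workaround sketch: collect every hit along a ray by excluding each
-- hit instance and casting again from the hit point.
local function piercingRaycast(origin: Vector3, direction: Vector3): {RaycastResult}
	local params = RaycastParams.new()
	params.FilterType = Enum.RaycastFilterType.Exclude
	params.FilterDescendantsInstances = {}

	local hits: {RaycastResult} = {}
	local from = origin
	local remaining = direction
	while true do
		local result = workspace:Raycast(from, remaining, params)
		if not result then
			break
		end
		table.insert(hits, result)
		-- Re-assign the filter list so the params object picks up the change.
		local filter = params.FilterDescendantsInstances
		table.insert(filter, result.Instance)
		params.FilterDescendantsInstances = filter
		-- Continue from the hit point toward the original ray endpoint.
		from = result.Position
		remaining = (origin + direction) - result.Position
	end
	return hits
end
```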
Just out of curiosity, is raycasting handled on the CPU or the GPU? I’ve been doing a lot of ray tracing via EditableImage, and the bottleneck has always been the raycast call, even at 1 ray per pixel.
That would be really beneficial for many scenarios. Would love to see this become a reality!