Is there a timeline for native Vector3 integration?
We are aiming to get it done in December.
Of course the legendary AxisAngle can’t wait to get their paws on JITted Vector3…
I still remember old scripts with entire walls of local variables for juggling the individual components of CFrames, and they had your name on them. Absolute legend.
Should you use this on regular code too? Like stuff that doesn’t really need to go fast?
That’s some interesting trivia
Tried it with Parallel Luau. Basically tripled the performance in my case
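For reference, the general shape of that kind of setup looks roughly like this (a hypothetical sketch, assuming the Script is parented under an Actor so the connection can actually run in parallel; the math is just a stand-in workload, not anyone’s real code):

```lua
--!native
-- Hypothetical sketch: assumes this Script is parented under an Actor,
-- so the connection below is eligible to run off the main thread.
local RunService = game:GetService("RunService")

local function heavyMath(steps)
    -- stand-in numeric workload of the kind that benefits from native codegen
    local acc = 0
    for i = 1, steps do
        acc += math.sin(i) * math.cos(i)
    end
    return acc
end

RunService.Heartbeat:ConnectParallel(function()
    heavyMath(100000) -- result discarded; this only exists to generate load
    -- task.synchronize() would be needed before touching most instances
end)
```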
Can server scripts benefit from --!native?
Client scripts don’t work with --!native (as stated in the post) and probably won’t for some time, since it’s complex to implement.
Oh. A lot of the use cases being discussed on this thread are likely client-side computations, so until client support is added we’ll be stuck waiting I guess. Still super exciting!
To be a little bit more precise:
- The current beta applies to client, server and plugin scripts, it’s the same.
- We do have plans to make this feature broadly available long term, including clients on platforms that can support it well (but not on all platforms)
- Having said that, when this feature graduates to a production release (we do not have an ETA for this), you should expect the first version to apply to plugin and server scripts but not scripts that run on production clients. That would follow after the first release.
We may change the Studio beta behavior in the future to match, but we’re not sure about that yet; for now, the beta works regardless of script type. If that changes, we’ll post an update.
In what cases would it ever be a bad idea to natively compile a function?
Will there be a way to force native compilation anyway for specific functions? Something like an annotation, as suggested in an open RFC, would work really well.
-- no annotation needed, won't force native compilation
local function A()
end
@native -- should always force native compilation
local function B()
end
That’s out of scope for this thread, but there’s an open RFC on the Luau repo for annotations in that vein. Nothing of that specific nature has been proposed yet, but it’s one of the obvious use cases.
My original question wasn’t, though, right? I don’t really like the idea of Luau choosing what to compile natively and what not; I want to be able to choose that myself too.
But yeah the second part was kind of out of scope, I might just get rid of that part actually. Not sure where else I’d be able to suggest this kind of stuff though.
Oh yeah I read it just now, really hope that goes somewhere.
One example is a function that is only executed once and doesn’t contain any loops.
An example of that would be a module returning a lookup table:
return { a = smth, b = smth_else, ...and so on... }
Compiling it to native code will often take more time than running it in a VM.
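To make that concrete, a hypothetical module of that shape (the values here are made up) would look like this; it runs exactly once and has no loops, so compiling it natively mostly just adds compile time:

```lua
--!native
-- Hypothetical example: the module's main function runs once, has no
-- loops, and only builds a table, so the one-time compile cost of
-- native codegen can easily outweigh any runtime savings.
return {
    walkSpeed = 16,
    jumpPower = 50,
    respawnTime = 5,
}
```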
Yeah, I did expect that to be one of the cases where it wouldn’t need to compile natively. But it won’t be like inlined functions, where it tries to calculate how much it would profit from the optimization, then? I didn’t really like that it did that for inlined functions without giving developers a way to force it anyway, so I’m hoping that might change as well.
This is insane. Let me clarify a little:
I found this video and decided to do it in Luau.
C++ took 2.4 seconds
(repeat loop, since for was slower)
Luau took 5.7 seconds
Native Luau took… 2.8 seconds!
Code:
Also, Python took 1 minute and 52 seconds with a while loop.
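The actual code isn’t shown above, so purely as a hypothetical illustration of the general shape of such a harness (not the benchmark that produced these numbers), a Luau timing loop using the --!native directive and a repeat loop could look roughly like this:

```lua
--!native
-- Hypothetical sketch only, not the benchmark from this post.
-- Times a simple numeric workload with os.clock, using a repeat loop;
-- remove the --!native line above to get the interpreted timing.
local function workload(n)
    local sum = 0
    local i = 0
    repeat
        i += 1
        sum += (i * i) % 7
    until i >= n
    return sum
end

local start = os.clock()
local result = workload(50000000)
print(("took %.3f seconds (result %d)"):format(os.clock() - start, result))
```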
A little while ago I commissioned someone to implement an algorithm that takes an arbitrary polygon and does its best to split it into a minimum dissection of rectangles. I was excited to run it natively; sadly, I only saw a marginal improvement with native enabled.
100 iterations in interpreted:
100 iterations in native:
Also decided to run an ear clipping algorithm (cuz why not) and saw the same result, only an even more marginal improvement.
100 iterations in interpreted:
100 iterations in native:
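For anyone wanting to reproduce a comparison like this, a rough sketch of a 100-iteration timing pass (hypothetical, not the setup from this post; `splitIntoRectangles` is just a stand-in for the real algorithm) could be structured like this, running the same script once as-is and once with a --!native line added at the top:

```lua
-- Hypothetical harness, not the actual setup from this post.
-- Run once as-is for the interpreted timing, then again with a
-- `--!native` directive added as the first line for the native timing.
local function splitIntoRectangles(polygon)
    -- stand-in for the real polygon-dissection algorithm (shoelace area here)
    local area = 0
    for i = 1, #polygon do
        local a, b = polygon[i], polygon[i % #polygon + 1]
        area += a.X * b.Y - b.X * a.Y
    end
    return math.abs(area) / 2
end

-- build a simple 64-vertex test polygon
local polygon = {}
for i = 1, 64 do
    local angle = (i / 64) * 2 * math.pi
    polygon[i] = Vector2.new(math.cos(angle), math.sin(angle))
end

local start = os.clock()
for _ = 1, 100 do
    splitIntoRectangles(polygon)
end
print(("100 iterations took %.4f seconds"):format(os.clock() - start))
```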
I can provide my setup if any staff want to check it out.
Yes, please share the benchmark code, if possible (you can send me a DM).
A server’s hardware is easy for Roblox to control, unlike a client, which is someone’s own computer where they can change whatever they want.
I sent a DM with the place file.