I was hoping it’d be unrolled into something similar to the second example.
This seems very niche, and the code/behavior of the loops wouldn’t even be similar if they were unrolled. In the first you have 1 variable vs 3 in the second. What would happen if you then did something like print(axis)? By your example it should print just one of them, but looking at the unrolled version you made, it looks like you want to print all 3.
It should print each axis 100,000 times (the first example has a nested for loop)
The example I gave was bad, but I think general loop unrolling (not just when iterating through vector components; I shouldn’t have said that) would be cool.
How will this impact ServerSide loadstring? I still have use cases for it, and want to know if it causes similar issues such as setfenv/getfenv.
Server-side loadstring continues to work; if memory serves, scripts that call loadstring behave as if they called setfenv/getfenv - some optimizations are disabled from that point on, but behavior should stay exactly as it used to be.
Yes, this is accurate. The compiler detects idiomatic iteration using pairs/ipairs and converts it to fast internal VM constructs that don’t require function calls. Direct use of next isn’t detected and isn’t advised.
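As a quick sketch in plain Lua of the distinction being described (both loops produce the same result; only the first shape is eligible for the fast path):

```lua
local t = {10, 20, 30}

-- Idiomatic form: the compiler can recognize `for ... in ipairs(t)`
-- (or pairs) and lower it to a specialized loop with no per-iteration
-- function call.
local sum = 0
for _, v in ipairs(t) do
    sum = sum + v
end

-- Passing `next` explicitly iterates the same table, but this shape is
-- not recognized and falls back to the generic, slower protocol.
local sum2 = 0
for _, v in next, t do
    sum2 = sum2 + v
end

assert(sum == 60 and sum2 == 60) -- same results, different codegen
```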
We currently aren’t doing inlining and loop unrolling - both are definitely possible but some internal mechanisms in the compiler aren’t quite ready to support these. Although specifically loop unrolling might not be that interesting that often to justify the code size increase…
Creating attachments (using the tools provided in the model tab) doesn’t work anymore when I have this enabled.
I would like to request that we have the new VM on the test place of Phantom Forces: test place - Roblox
I am pretty confident Phantom Forces is not broken by the VM from my testing, but I don’t know for sure. So test place for a day before the main place would be a good idea.
And I am working on a new game which would benefit greatly from the VM because of all the really heavy Lua, and it’s phone only, so quite the combination for testing!: 💣💣 Merc Sim [ALPHA] - Roblox
I expect mobile FPS to see a performance gain of 100% or more on the new VM, which is very exciting.
game:GetService("RunService") is still a proper way, correct?
That’s the best-practice and recommended way to get services, yes.
So this is the way that’ll work in the new VM, correct?
That’s the correct syntax to get a service, yes.
Once again, nothing is changing in the new VM beyond not being able to call Instances (you won’t be able to do things like game("GetService", "RunService")). Everything else should work exactly the same.
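For anyone unsure what "calling an Instance" means, here is a minimal plain-Lua mock (not the actual Roblox implementation) of the old behavior via a __call metamethod; in the new VM only the method-call form remains:

```lua
-- Mock object standing in for a Roblox Instance; the __call metamethod
-- reproduces the old "call the object with a method name" behavior.
local Instance = setmetatable({}, {
    __call = function(self, methodName, ...)
        return self[methodName](self, ...)
    end,
})

function Instance:GetService(name)
    return "service:" .. name
end

local viaMethod = Instance:GetService("RunService")  -- still supported
local viaCall = Instance("GetService", "RunService") -- removed in the new VM

assert(viaMethod == viaCall)
```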
Would it be difficult to optimize next too? I spent a day converting my game’s 150k-line codebase to "in next, foo do" because I had discovered "in pairs(foo) do" was ~1.45x slower on small tables in the old VM.
It wouldn’t be too hard for me to automate converting it back, but I’d like to be able to compare performance and ideally support both VMs for now.
The addition of library functions like "table.new(int arrayLength, int hashLength)" and "table.find(table array, indexStart = 1, indexEnd = #array, int step = 1)" would greatly optimize my game.
Instructions that optimize "setmetatable({}, foo)" would be helpful, but it may be tricky to optimize cases like this:
local function foo()
    local self = {}
    setmetatable(self, bar)
    return self
end
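To make the contrast concrete, here is a hedged sketch (bar and its contents are made up for illustration): returning the setmetatable call directly is the shape a dedicated instruction could most easily recognize, while the split form keeps the table in a local first:

```lua
local bar = {__index = {kind = "bar"}} -- hypothetical metatable

-- Harder case: table creation and setmetatable are separate statements.
local function fooSplit()
    local self = {}
    setmetatable(self, bar)
    return self
end

-- Easier case: a single expression the compiler could fuse into one
-- "new table with metatable" operation.
local function fooFused()
    return setmetatable({}, bar)
end

assert(fooSplit().kind == "bar")
assert(fooFused().kind == "bar")
```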
Also, does the compiler optimize away constant upvalues like "local foo = true" for configurable scripts and "local setmetatable = setmetatable" for code that was optimized for the old VM?
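For reference, the localization idiom in question looks like this (a sketch; whether the compiler can fold it away is exactly the open question):

```lua
-- Old-VM style: cache a global in a local so later reads avoid a
-- global-table lookup. If the new compiler already optimizes global
-- reads, this line becomes a constant upvalue it could eliminate.
local setmetatable = setmetatable

local mt = {__index = function() return 0 end}
local obj = setmetatable({}, mt)
assert(obj.anything == 0)
```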
My game (once compiled) uses setfenv to provide fast, lazy access to cross-script functions and data. To use less memory, all compiled scripts share the same environment (if ‘script’ is needed, it is localized on the first line before setfenv is indirectly called via _G.) Would it be possible to get the performance boost of using global functions like “pairs” or “string.find”, while still using a custom environment? Overwriting default library functions using setfenv is not a common use-case.
Locals are almost always preferable to globals, but there are cases where upvalues can hog memory (32 + 8 bytes per function) in scripts with nested functions that use many upvalues. Are there plans to address the memory usage of functions? Could the memory to reference the function’s environment be optimized if the function uses no globals?
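A small illustration of the per-closure cost being discussed: every closure stores one reference per upvalue it captures, so nested functions capturing many locals multiply that overhead:

```lua
-- Each call to makeCounter creates a fresh closure capturing two
-- upvalues (count and step); the 32 + 8 bytes per upvalue cited above
-- would be paid per closure created this way.
local function makeCounter()
    local count, step = 0, 1
    return function()
        count = count + step
        return count
    end
end

local c = makeCounter()
assert(c() == 1)
assert(c() == 2)
```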
This is a bit far-fetched, but would it be possible to recreate function prototypes internally with respect to constant/global upvalues in ModuleScripts, so that these upvalues don’t use 8 extra bytes per upvalue? This wouldn’t be beneficial if ModuleScripts are cloned and re-required though.
Anonymous/in-line functions that lack upvalues/globals could also be optimized. If the same anonymous function is created multiple times, the copies could in theory reference the same object in memory, although I’m pretty sure function equality can’t be overridden via __eq (even if two functions share the same prototype, they compare unequal), and someone’s code may rely on functions being unique by testing equality or using them as keys in a table.
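A sketch of the observable difference such caching would make (results vary by Lua version, so no equality is asserted here):

```lua
-- makeHandler's inner function has no upvalues, so it is a candidate
-- for the kind of closure caching later Lua versions perform: the VM
-- may return the same closure object on every call instead of
-- allocating a new one.
local function makeHandler()
    return function() return true end
end

local a, b = makeHandler(), makeHandler()
-- a == b is false in Lua 5.1, but a caching VM may make it true;
-- code using closures as unique table keys would then break:
local registry = {}
registry[a] = "first"
registry[b] = "second" -- overwrites "first" if a and b are the same object

assert(a() and b())
```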
There are so many common cases that can be optimized by the Lua compiler, here are just a few examples of optimizations that my custom Lua lexer/simplifier does:
This:
local function foo()
    return bar
end
return {foo()}
is slower than:
local function foo()
    return bar
end
return {(foo())}
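The likely reason for the difference: a call like foo() may return multiple values, so {foo()} needs a variable-length table constructor, while the extra parentheses in {(foo())} truncate the call to exactly one value and allow a fixed-size constructor. A quick sketch of the truncation rule:

```lua
local function foo()
    return 1, 2, 3
end

local all = {foo()}   -- keeps every return value
local one = {(foo())} -- parentheses truncate to the first value

assert(#all == 3)
assert(#one == 1)
```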
This:
local Methods = {}
function Methods:Foo()
end
is slower than:
local Methods = {
    Foo = function(self)
    end,
}
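The two spellings are semantically equivalent: the colon form is sugar for a function with an implicit self parameter assigned to the table after creation, while the constructor form builds the table in a single step:

```lua
local MethodsA = {}
function MethodsA:Foo() -- sugar for MethodsA.Foo = function(self) ... end
    return self.value
end

local MethodsB = {
    Foo = function(self)
        return self.value
    end,
}

MethodsA.value, MethodsB.value = 1, 1
assert(MethodsA:Foo() == 1)
assert(MethodsB:Foo() == 1)
```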
There are also cases where function calls or unused variables have no "side effects" and can be omitted from the bytecode entirely. If multi-pass optimizations are too slow for testing in Studio, perhaps they could be applied when publishing games to the site.
What do you mean by not being able to call instances? Like not being able to do this:
local function GetBricks(StartInstance)
or
local b = Instance.new("BodyGyro", seat)
You will not be able to call methods in the form Object(Method, ...), but everything else will work fine.
This is not a lot of memory, I’m not sure what it would be optimizing for or if this can even be reliably optimized. It’s hard to reach even a KB of data alone in function closures unless you’re spamming their creation.
Lua 5.3 actually already has an optimization for this, but it works on all functions by not allocating a new closure if there’s already a previous one cached that either A. has no upvalues or B. refers to the same upvalues. So sounds totally doable.
Like the example I provided. I mean using Instances as functions, not defining functions or calling methods.
Alright, but will those two lines of code work in the new VM?