Yup! Pretty much the only code that should break is awful hacky stuff that never should have been used in a production game, so chances are your stuff will be absolutely fine. If you do find something broken tho, and you’re not sure, it’s better to file a bug report - if it doesn’t work in the new VM, chances are it was broken unintentionally.
Thank you for clarifying this. I was confused on what will be removed and thought it would mess me up. Thank you once again.
I have a throttling system for my neural network trainer that runs a Heartbeat:Wait() every time more than 1/60 of a second has passed. Here are the before & after VM pics.
It almost looks like the new VM runs it slower… The operations are just a series of table indexes, number comparisons, and multiplications.
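For context, here is a simplified sketch of the kind of throttle described above (my own reconstruction, not the actual trainer code; the loop body and counts are made up):
local RunService = game:GetService("RunService")

local lastYield = tick()
local function throttle()
    -- yield for one Heartbeat once more than 1/60s of work has built up
    if tick() - lastYield > 1/60 then
        RunService.Heartbeat:Wait()
        lastYield = tick()
    end
end

for step = 1, 1e6 do
    -- heavy training work (table indexing, comparisons, multiplication) would go here
    throttle()
end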
Robeats (https://www.roblox.com/games/698448212/NEW-SONGS-Club-RoBeats-MMO-Rhythm-Game), a very lua-heavy game, is running (lua-wise) ~30% faster. Hooray!
Pre:
Post:
Can we get this turned on for live servers/clients when available?
return Modify(Instance.new(ClassType), Properties)
Would this count as syntax that will break?
or local function Make(ClassType, Properties)
Oh so increased security too? That’s excellent.
I don’t see an issue with that. The only thing that will no longer work is passing methods as calls:
game("GetService","RunService")
will error instead of acting like
game:GetService("RunService")
Anything that is done in pure Lua (not calling methods of Instances) will not change.
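To illustrate with a quick sketch (my own example built from the lines above, not an official compatibility list): only calling the Instance itself like a function is affected, while ordinary Lua call syntax keeps working.
local RunService = game:GetService("RunService")           -- colon call: unaffected
local AlsoRunService = game.GetService(game, "RunService") -- same call with an explicit self, plain Lua: unaffected
-- game("GetService", "RunService")                        -- calling the Instance itself: the form that now errors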
So both local function Make(ClassType, Properties) and
return Modify(Instance.new(ClassType), Properties)
should work in the new VM?
Yeah? I mean, I can only guess what those functions do. Anything you made in pure Lua, using functions you wrote yourself rather than Instance methods, will work the same.
local function ModifyInstance(instance, properties)
    for i, v in next, properties do
        instance[i] = v
    end
    return instance -- return the instance so MakeInstance can hand it back
end
local function MakeInstance(class, properties)
    return ModifyInstance(Instance.new(class), properties or {})
end
No methods called, so nothing different.
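To make that concrete, a hypothetical use of the helpers above (the property values are mine) involves nothing but plain Lua calls and property assignments:
local part = MakeInstance("Part", {
    Anchored = true,
    Parent = workspace,
})
print(part.Anchored) --> true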
On a similar note to @Elttob, I tested my current voxel based project, and saw a HUGE increase in speed!
My 200 x 50 x 200 maps which took 2.5-3 seconds to load with the old VM are generating in 0.3-0.35 seconds now. I was just about to start some optimizations on that, but with this new VM I don’t think I’ll need to. The performance impact of this is insane!
I also tested with a new 250 x 80 x 250 map, and the new VM reduced times from ~10s to under 2s. I can’t wait for this feature to go live.
I am wondering if Roblox’s new VM can make my chassis run smoother while I improve it as well.
Here is a screenshot:
As you can see, ClientPhysics is the one that causes the lag
Here are the specs of my computer (I am an AMD Boy for those wondering :P)
Here is a link to the chassis:
I’d like it if next were as fast as ipairs over both arrays and hash maps:
local t={}
for i=1,1e6 do
    t[i]=math.random()
end

local s=tick()
for _,v in ipairs(t) do
end
print(tick()-s)

local s=tick()
for _,v in next,t do
end
print(tick()-s)

local t={}
for i=1,1e6 do
    t[math.random()]=true
end

local s=tick()
for v in next,t do
end
print(tick()-s)
Right now I get:
0.0029196224212646
0.019444704055786
0.058927059173584
That is because the iterator function ipairs returns is much simpler than next.
ipairs is roughly equivalent to this:
local tab = {1,2,3,[4]=nil,[5]=5}
local i = 1
while true do
    local v = tab[i]
    if v == nil then break end -- stops at the first nil, so [5]=5 is never reached
    i = i + 1
end
next will go over every entry the table holds internally (both the array part and the hash part) on the C side, while ipairs simply counts up from 1 until it hits the first nil value, ignoring all non-array keys.
In fact, it’s even better to use a for i=1,#tab loop instead of ipairs as it is faster and gives almost the same result.
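A quick sketch of the “almost the same result” caveat (the example table is mine): both approaches stop caring about the table once a nil appears, just in slightly different ways.
local tab = {1, 2, 3, nil, 5}

for i, v in ipairs(tab) do
    print(i, v) -- prints indices 1 to 3, then stops at the first nil
end

for i = 1, #tab do
    print(i, tab[i]) -- #tab may legally be 3 or 5 here, because the array has a hole
end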
After some experimenting it looks like pairs and ipairs are just much faster than anything you can produce in Lua, even using equivalent code. I’d imagine this is some inline or C-side optimization that probably can’t be helped.
Code below:
local t = {}
for i = 1, 1e6 do
    t[i] = i
end

local mk1 = tick()
for i, v in ipairs(t) do
end
print("ipairs:", tick()-mk1)

local mk2 = tick()
for k, v in pairs(t) do
end
print("pairs:", tick()-mk2)

local mk3 = tick()
for k, v in next, t do
end
print("next:", tick()-mk3)

local mk4 = tick()
for i = 1, #t do
    local v = t[i]
end
print("numeric:", tick()-mk4)

local function custom(tbl)
    return next, tbl, nil
end

local mk5 = tick()
for k, v in custom(t) do
end
print("custom pairs:", tick()-mk5)

local function iter(a, i)
    i = i + 1
    local v = a[i]
    if v then
        return i, v
    end
end

function custom2(a)
    return iter, a, 0
end

local mk6 = tick()
for k, v in custom2(t) do
end
print("custom ipairs:", tick()-mk6)
The code above netted these times:
ipairs: 0.0066218376159668
pairs: 0.0081737041473389
next: 0.038437843322754
numeric: 0.012376546859741
custom pairs: 0.036245346069336
custom ipairs: 0.056918144226074
While this isn’t the best example for pairs/next/custom pairs, it does demonstrate the speed differences well enough. So, the takeaway here is that next is terrible and you shouldn’t use it as an iterator, except if it’s a choice between next and a custom implementation of ipairs. An interesting piece of data is that a custom iterator that uses next is somehow faster than just inlining it. I’d be interested to know how that ends up happening.
For some additional data, I wanted to see how it performs when simply called (which is how you should be using it), so I went ahead and called it 1,000,000 times:
local t = {}
for i = 1, 1e6 do
    t[i] = i
end

local mk1 = tick()
for i = 1, 1e6 do
    next(t)
end
print("next calls:", tick()-mk1)
Which gave:
next calls: 0.030120611190796
That’s still faster than iterating with it, though not by much. It really makes you wonder.
The optimization taking place is probably looking specifically for the style of
for stuff in unpack_iterator(t) do
end
It’s not to say next is any slower than ipairs or pairs in the sense of the functions and iterators themselves; it’s that the way it’s used (for stuff in next, t, i) is not idiomatic and probably doesn’t get an optimization pass applied, so it ends up calling the function over and over as the old VM did. The optimization is probably purely looking for the simplistic Lua generic iterator format.
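A small sketch of the two shapes being contrasted here (this is my reading of the post, not confirmed VM internals):
local t = {1, 2, 3}

-- iterator produced by a single factory call: the shape that appears to get the fast path
for k, v in pairs(t) do
end

-- explicit iterator triplet: valid Lua, but apparently not recognized by that optimization
for k, v in next, t, nil do
end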
The global variable optimizations aren’t really present yet, so we might have to wait for that
Would global scripts and global variables work in the new VM?
Global variables should work fine in the new VM, although it’s recommended to use local variables unless you have no choice (which is almost never the case).
Not sure what you mean by global scripts though.
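As a tiny sketch of that recommendation (my example, not from the post): a global is looked up by name in the environment on every access, while a local is resolved directly, which is part of why locals are preferred.
score = 0            -- global: resolved through the environment each time it is touched
local localScore = 0 -- local: lives in a register, so access is cheaper

for i = 1, 100 do
    score = score + i           -- repeated global lookups
    localScore = localScore + i -- repeated register accesses
end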
Maybe they were talking about shared/_G? Don’t see any reason why they’d remove either of those tho.