Why does Random behave this way?

Edit: this seems to happen for all custom Roblox types, as buildthomas pointed out

The methods in Random have different memory addresses but behave the exact same way?

Old, faulty test:

local max = 2^53
local r1 = Random.new(1)
local get1 = r1.NextInteger

local r2 = Random.new(2)
local get2 = r2.NextInteger

print((get1 == get2 and "same memory address for functions") or "different memory address for functions") -- false

local flag
for i = 1,1000 do
	if get1(r1,-max,max) ~= get2(r1,-max,max) then
		flag = true
	end
end

print((flag and "different behavior for functions") or "functions behave the same")

Updated test:

local max = 2^53
local count = 1000

local r1 = Random.new(1)
local get1 = r1.NextInteger

local order = {}
for i = 1,count do
	order[i] = get1(r1,-max,max)
end
r1 = Random.new(1)
get1 = r1.NextInteger

local r2 = Random.new(2)
local get2 = r2.NextInteger

print((get1 == get2 and "same memory address for functions") or "different memory address for functions") -- false

local flag
for i = 1,count do
	if order[i] ~= get2(r1,-max,max) then
		flag = true
	end
end

print((flag and "different behavior for functions") or "functions behave the same")

Is that supposed to be like that? I’m assuming you meant get2(r2 rather than get2(r1


It’s intentional. I’m talking about the methods themselves, not the actual objects.

Normally when you make an object, it inherits and reuses shared methods, so you don’t create separate copies per object. This has the same observable behavior, it just makes new functions per object, I think.

like for example Roblox instances preserve the methods:

print(Instance.new("Part").Destroy == Instance.new("Part").Destroy) -- outputs true
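For comparison, the usual plain-Lua OOP pattern shares one method table through __index, so every lookup returns the very same function value. A minimal sketch (my own illustration, not Roblox internals):

```lua
-- Minimal plain-Lua sketch: one shared method table for all instances.
local MyRandom = {}
MyRandom.__index = MyRandom

function MyRandom.new(seed)
	return setmetatable({ seed = seed }, MyRandom)
end

function MyRandom:NextInteger(min, max)
	-- stub body; what matters here is the function's identity, not the RNG
	return min
end

local a = MyRandom.new(1)
local b = MyRandom.new(2)
print(a.NextInteger == b.NextInteger) -- true: both lookups return one shared function
```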

The memory addresses may be different, I wouldn’t know why that is. But I would be quite concerned if they produced different results or behaved in different ways.


This holds for all methods of basic data types (not methods on Instances), e.g. Vector3, CFrame, etc., not just Random.


This sounds like it could be a bug. The whole point of : method calls is to have one method shared by all objects of a class, so why is it not handled the same way here?

My guess is that the basic types are optimized for temporary values, so temporaries are generated on the Lua side whenever you create one of these values. The basic types are constantly changing so caching doesn’t make as much sense here, and for one reason or another they may have decided that regenerating the metatable would be faster than caching it and doing a lookup for an existing one each time one of these is created. Instances are more “permanent” in that they tend to stick around for longer and are created less often, so the overhead of generating a new metatable is worse than looking up and reusing one metatable for all objects.

In short, it all comes down to cache lookup cost vs temporary creation cost. C functions in Lua don’t need anything beyond the C function pointer, so C functions should be pretty lightweight which helps immensely with creating them. I have messed around with the Lua C API a bit, but I haven’t dived into the Lua source code so I’m not too familiar with the exact costs of operations on the C side and I may be wrong.
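The two strategies being weighed can be sketched from the Lua side (a hypothetical illustration, not Roblox source):

```lua
-- Strategy A: cache one metatable and reuse it for every new value.
local cached = { __index = { Method = function() end } }
local function newCached()
	return setmetatable({}, cached)
end

-- Strategy B: regenerate the metatable (and its functions) for each value.
local function newFresh()
	return setmetatable({}, { __index = { Method = function() end } })
end

print(getmetatable(newCached()) == getmetatable(newCached())) -- true: shared
print(newFresh().Method == newFresh().Method) -- false: fresh function per value
```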


Just to clarify, are the C functions being cached, or are they being recreated along with the Lua ones?

The C functions are generated at compile time and are either statically linked as the application is built, or dynamically loaded at run time from a library. They are only created once, and the pointers to those functions are reused when linked to Lua. It is faster to cache them on the C side because the addresses can be baked right into the executable's instructions, but on the Lua side you need some sort of metadata lookup to find Lua values, since Lua code is dynamically compiled and run as bytecode in a VM.


This doesn’t seem to be the case.

Vector3 is most certainly not a normal table, so they’re going through a metatable anyway. There is no actual benefit to doing this as opposed to just providing the static pointer, which may be faster because it’s not creating a new metatable or anything of the sort.

The thing is that the metatables are stored on the Lua side, so they would need to dig through the Lua environment to find the metatables to use, due to how it handles its data. You need to put values on the Lua stack to use them, and that includes metatables: they would need to be stored somewhere in Lua’s environment to exist, and pulled back out to put onto the stack. It is probably cheaper to create a lightweight temporary on the stack than to perform a lookup to save memory, and since these are temporaries that will probably die quickly, memory isn’t as big an issue. I didn’t say that it wasn’t creating new metatables; I was saying that creating a new metatable is cheaper than looking up an old one.

Lowering “max” to 2^39 fixed it for me; at 2^40 or higher it bugs out and always returns the same value, so as long as you stay within a reasonable number range you should be fine.
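For context, Lua numbers are IEEE 754 doubles, so consecutive integers stop being representable at 2^53, and NextInteger's usable range has to stay well inside that (the exact cutoff Roblox enforces internally is an assumption on my part). A quick standard-Lua check:

```lua
-- Doubles represent every integer exactly only up to 2^53.
print(2^53 == 2^53 + 1) -- true: adjacent integers collapse at the limit
print(2^39 == 2^39 + 1) -- false: still exact far below the limit

-- A range of -2^53 to 2^53 also spans about 2^54 values, which itself
-- can't be held exactly, so staying near 2^39 keeps everything safe.
```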


Sorry, the code should work as described now (“functions behave the same”) if you try 2^39

The state is stored in the Random object, not the function, so each call was changing the state of that object regardless of which function you used; that’s why it was outputting “different behavior for functions”.
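To spell that out with the Roblox Random API (a sketch; the final comparison relies on seeded generators being deterministic, which Random guarantees):

```lua
-- The generator state lives in the Random object, not in the function value.
local r1 = Random.new(1)
local get1 = r1.NextInteger
local get2 = Random.new(2).NextInteger

-- Both calls below use r1's state, so they draw two successive numbers from
-- the same sequence, no matter which function value does the calling.
local a = get1(r1, 1, 100)
local b = get2(r1, 1, 100)

-- A fresh generator with the same seed replays the sequence from the start:
local r2 = Random.new(1)
print(r2:NextInteger(1, 100) == a) -- true: same seed, same first draw
```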

Err, not quite. That’s not exactly how Lua works regarding metatables, either. Whenever Lua gets/sets to a table, the checking for metatable is actually done right in the C code. So the fact that they currently have a metatable means that they are already calling the metamethods in that, which means making new functions or metatables in this case has 0 reasoning behind it from the perspective of optimizing memory or efficiency as far as I can tell.

Also, metatables are not actually ever held in the stack.
lvm.c, luaV_gettable in Lua 5.1 source


I understand that the metatables have an under the hood implementation that doesn’t work through the stack, and that the stack is only part of an API for the end user integrating Lua into their code base. What I am saying is that in order to reuse metatables they need to reference them somehow. I’m not sure what the exact cost of referencing is vs creating when creating a new temporary, which is the big deciding factor for a decision like this. It makes sense to reuse metatables no matter what for Instances because they have more permanence. Temporaries on the other hand are going to be created and destroyed all the time in things like vector adds, so if performance can be bought by sacrificing memory, then it makes sense to create duplicates. Do you happen to know the performance cost of referring to an existing metatable vs creating one? I’m curious about how they are both done under the hood.

Whatever the case is, creating Vector3s, Color3s, BrickColors, and Vector2s is all considerably faster per object than creating Folders. Creating 1 million of the temporaries took about 0.3 seconds on average on my machine, while 100k Folders took around 0.1 seconds on average.
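A plain-Lua micro-benchmark along the same lines is only suggestive, since Roblox's types are implemented in C, but it gives a feel for the two costs (os.clock and the loop counts are my choices):

```lua
-- Rough comparison: reusing one cached metatable vs building a fresh one.
local shared = { __index = {} }

local t0 = os.clock()
for i = 1, 1000000 do
	setmetatable({}, shared)
end
local reuseTime = os.clock() - t0

t0 = os.clock()
for i = 1, 1000000 do
	setmetatable({}, { __index = {} })
end
local freshTime = os.clock() - t0

print(("reuse: %.3fs  fresh: %.3fs"):format(reuseTime, freshTime))
```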

The short answer: in this case, it’s not any faster.
The long answer: in this case, we are already doing a lookup through a metatable to a function, which is a pointer lookup. Doing this lookup would be faster than allocating new memory every time, and since we’re already doing it it’s already implemented. Unless they’re doing some very specific allocations to maximise every tiny microsecond, this does seem rather redundant. Again I’m not saying it’s handled this way for no reason, but I just don’t see one looking at Lua 5.1.

I’m not saying a lookup through a metatable, I’m saying a lookup for a metatable to attach it when an object is initialized. Yes, I already know that metatables are created and attached, and that there is a metatable lookup behind the scenes whenever we access a member. I never said that there wasn’t a metatable attached.

On another note, doing a little bit of further testing, I found this:

> v=Vector3.new(0, 0, 0) print(v.lerp) print(v.lerp)
function: 1B226CD4
function: 1B226D74
>  print(v.lerp) print(v.lerp)
function: 12542074
function: 12542024
>  print(v.lerp) print(v.lerp)
function: 12541F0C
function: 1B226AF4
>  print(v.lerp) print(v.lerp)
function: 19AA443C
function: 19AA41E4

So the functions are definitely changing on a lookup-to-lookup basis, but that doesn’t say anything about the metatable itself. The metatable could very well be shared across all temporaries, but since we can’t see it because it’s locked, that can’t be verified without looking at the raw memory. Looking up methods on Instances seems slightly faster than on temporaries.

> t=tick() for i=1,100000 do local t=game.Remove end print(tick()-t)
> t=tick() for i=1,100000 do local t=game.Remove end print(tick()-t)
> t=tick() for i=1,100000 do local t=game.Remove end print(tick()-t)
> t=tick() for i=1,100000 do local t=game.Remove end print(tick()-t)
> t=tick() for i=1,100000 do local t=v.lerp end print(tick()-t)
> t=tick() for i=1,100000 do local t=v.lerp end print(tick()-t)
> t=tick() for i=1,100000 do local t=v.lerp end print(tick()-t)
> t=tick() for i=1,100000 do local t=v.lerp end print(tick()-t)

I did more tests and game.Remove always hovered around 0.022ish, but v.lerp was always around 0.025. Looking up lerp in a Color3 was around 0.023 consistently. Here is local variable creation for just a number as a control to compare to:

> t=tick() for i=1,100000 do local t=6 end print(tick()-t)

These are some very flawed diagnostics. You’re not taking into account that what you’re actually doing is invoking a metamethod every time, which then returns a function; it’s essentially a call to a C function. (Not to mention GETTABLE, SELF, and LOADK take different amounts of time, so the last comparison isn’t accurate either.) Given they’re all doing a get, it only makes sense they’d be calling a metamethod shared by all instances of the same class, and the individual checks may take longer depending on how many methods the C side has to look through.

I’m most certain the metatable is shared, which we’ve more or less established given it’d be pretty expensive to do otherwise. As for the functions changing, I have no idea what’s going on there but doesn’t seem like it should be happening.

Now this is getting a little off topic, but my theory is just it’s something not strictly related to performance, since there wouldn’t actually be any gain.

The point of the diagnostics was to check for differences between the overhead in looking up a temporary’s method vs an instance’s. Yes, there is going to be overhead in Lua -> C -> Lua, but that should remain pretty consistent across all calls/lookups. I wanted to check for overhead with specifically the function creation on top of the call overhead, but yeah, you are right in that I forgot about complexity from the total number of methods an object has. DataModel has the fastest lookup time despite having the most methods, probably because the temporaries are constantly creating functions.

Yeah, now that I know that the functions change per lookup and not per instance, this does make a lot less sense.

Also, sorry about the miscommunication. We were arguing on completely different notes this entire time. Did you know that it changed per index rather than per instance? I was thinking per instance since I didn’t see anything saying otherwise, but you were always talking about behavior per index, which is where the disconnect happened.

From the looks of it there seems to be a change “per thread”? Since two consecutive prints output very similar results. Or no, that may just be the differences between the executions.