Weak Tables don't work with Instances

Might be a feature request, but weak tables don’t work with Roblox Instances :confused:

local t = {}
setmetatable(t, {__mode = "k"}) -- weak keys

local a = Instance.new("Part")
a.Name = "Weak"
a.Anchored = true
a.Parent = workspace
t[a] = true -- the Part is still parented to the Workspace
a = nil -- drop the only strong Lua reference

while wait() do
	print(next(t))
end

After a while (or sometimes immediately), it will start outputting nil, even though the Part is still parented to the Workspace.

Ideally it shouldn’t garbage collect instances if they aren’t parented to nil / destroyed xd

5 Likes

You’ve unintentionally leaked an implementation detail of the engine. The way the engine keeps track of Ref[Instance]s so that it can give you back the same reference every time is by storing them in a weak table. Thus, if the only reference you have to a Ref[Instance] is itself in a weak table, then all of the references to it are weak and it will get collected.

There’s no easy way to fix that either. That registry of Ref[Instance]s needs to be weak so that the C++ side can tell when the Lua side no longer holds any references to an object that isn’t in the object hierarchy, and destroy it.
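The interaction can be sketched in plain Lua. Here `registry` is only a stand-in for the engine’s internal Ref registry, not a real API:

```lua
-- Stand-in for the engine's internal registry: values are weak, so an
-- entry here does not keep the userdata alive by itself.
local registry = setmetatable({}, {__mode = "v"})

-- The user's weak-keyed table, as in the original repro.
local userTable = setmetatable({}, {__mode = "k"})

local obj = {} -- stand-in for the Lua-side userdata wrapping an Instance
registry[1] = obj
userTable[obj] = true

-- Drop the only strong reference; now every reference to obj is weak.
obj = nil
collectgarbage("collect")

print(registry[1], next(userTable)) --> nil	nil
```

Both entries disappear after the collection cycle, which is exactly what the repro above observes with Instances.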

What you’re effectively asking for is that an extra reference be held onto the object for as long as it’s parented to the object hierarchy… but you can already do that yourself by listening for the Parent property changing:

local holdOntoReference = {}
local part = Instance.new('Part', workspace)
holdOntoReference[part] = true
local cn;
cn = part:GetPropertyChangedSignal('Parent'):Connect(function()
    if not part.Parent then
        holdOntoReference[part] = nil
        cn:Disconnect() --> Don't leak a reference to the part through the connection
    end
end)

yourWeakTable[part] = true --> This will work as intended now
21 Likes

Whoa, thanks! You should make a thread with all of this super secret stuff ;3

2 Likes

An easier way to do it is to listen on the entire game tree.

local refs = {}
game.DescendantAdded:Connect(function(object)
	refs[object] = true
end)
game.DescendantRemoving:Connect(function(object)
	refs[object] = nil
end)

Note: you can put this anywhere. As long as an object has at least one strong reference somewhere, it won’t be collected.

5 Likes

game.DescendantAdded has always seemed pretty frightening to me performance-wise, though. Have you actually used it in a real place?

1 Like

According to the profiler, more time is spent on other things:

[MicroProfiler screenshot; circled: the DescendantAdded call]

2 Likes

If you add a lot of parts in quick succession, it becomes roughly twice as slow, or maybe more (comment out the refs code to compare):

local refs = {}
game.DescendantAdded:Connect(function(object)
	refs[object] = true
end)
game.DescendantRemoving:Connect(function(object)
	refs[object] = nil
end)

local start = tick()
local new = Instance.new
local replicatedStorage = game.ReplicatedStorage
for i = 1,50000 do
	new("Part",replicatedStorage)
end
print(tick()-start)
wait(1)
start = tick()
replicatedStorage:ClearAllChildren()
print(tick()-start)

Adding 50000 parts at once will be slow no matter what. In most situations, the difference seems pretty negligible.

2 Likes

On second thought, maybe it’s okay. There aren’t really that many operations that result in a whole bunch of instances being added to the hierarchy at once; most of the really large instance-count operations are moving big models between ReplicatedStorage and the Workspace, which wouldn’t trigger it.

I would still be worried about using it for mobile players though, since that extra script execution is a lot less free there. If you do something like copy a 100-instance GUI into your character, that’s 100 extra times the engine has to create a new thread and call over into Lua on it if you have that event hooked up.

1 Like

For event listeners, Roblox reuses threads when it can.

Even so, I worry about mobile users only during optimization phases.

1 Like

That’s good to know. Has it always been that way? I’m pretty sure it didn’t do that yet ages ago when I was figuring out how the threading / scheduling worked. (EDIT: It actually can’t be that recent, since it must already have been in place when it would have caused the TweenPosition/Size completion function issues in 2013. Maybe I just didn’t notice it originally.)

Seems way less bad for perf with that in mind. And I imagine there are already some global listeners on added-to / removed-from hierarchy events internally in the engine, so it’s not like you’re making it do extra work by making the connection.

2 Likes

There’s still some additional overhead resuming the Lua thread.

However, that signal is already being used in other cases. For example, I think ChangeService uses it when running in Studio, but not while testing. That code is in C++, though.

My guess is that if you’re only tracking a few instances, CollectionService can help you keep those specific instances alive easily while still maintaining performance.
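A sketch of that idea, assuming you tag the instances you care about yourself (the tag name and `somePart` are made up for illustration):

```lua
local CollectionService = game:GetService("CollectionService")
local TAG = "KeepAlive" -- hypothetical tag name

local refs = {}
CollectionService:GetInstanceAddedSignal(TAG):Connect(function(object)
	refs[object] = true -- strong reference keeps the userdata alive
end)
CollectionService:GetInstanceRemovedSignal(TAG):Connect(function(object)
	refs[object] = nil
end)

-- Only the instances you explicitly tag are tracked:
CollectionService:AddTag(somePart, TAG)
```

Unlike the game.DescendantAdded approach, this only pays the Lua-callback cost for tagged instances rather than for every instance entering the tree.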

IIRC the thread reuse happens as long as you don’t yield in the connected function.

2 Likes

What do you mean? CollectionService tags are removed from objects when they’re destroyed. (That’s useful, by the way: you can create a makeshift Destroyed signal with a CollectionService tag.)
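A hedged sketch of that makeshift Destroyed signal (the tag name is made up; note that the removed signal also fires if the tag is removed manually or the instance leaves the DataModel, not only on destruction):

```lua
local CollectionService = game:GetService("CollectionService")
local TAG = "WatchForDestroy" -- hypothetical tag name

local function onDestroyed(instance, callback)
	-- Destroying an instance strips its tags, which fires the
	-- GetInstanceRemovedSignal for that tag.
	CollectionService:AddTag(instance, TAG)
	local conn
	conn = CollectionService:GetInstanceRemovedSignal(TAG):Connect(function(object)
		if object == instance then
			conn:Disconnect()
			callback(instance)
		end
	end)
end
```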

1 Like

Instead of listening to .DescendantAdded or .DescendantRemoving, you can bind to a tag and only track references for the specific instances you’re interested in (tagged via CollectionService).

3 Likes

It’s not really about destruction, but about object identity. When pushing instances into the VM, the engine makes sure that the same instance object is represented by the same Lua userdata value.

1 Like

Do you know if it’s actually mainly about identity?

I would have thought that it was mainly about performance: not having to create a new Ref object / modify the refcount for every single indexing “.” operation, with proper identity just being a convenient side effect. You get some quasi-identity from overriding the __eq metamethod, after all (except for not being able to use the instances as table keys).

2 Likes

That particular table you’re referring to is there to preserve object identity, mainly for cases when you bounce an instance through C++. Otherwise, the userdata value that comes back from a C++ function would refer to the same object but be a different value, which would break e.g. table lookups using instances as keys.
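A quick sketch of what that identity guarantee buys you (assuming `workspace.Part` resolves to this same part):

```lua
local t = {}
local part = Instance.new("Part")
part.Parent = workspace
t[part] = "some data"

-- Each indexing operation round-trips through C++. Without the identity
-- registry, the value returned here could be a *different* userdata
-- wrapping the same C++ object, and this lookup would return nil:
print(t[workspace.Part]) --> some data
```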

In fact, the BrickColor-as-table-key issue is exactly this: a lack of object identity for value types bound via userdata.

3 Likes

Bumping this thread because this is incredibly annoying and shouldn’t be intended behavior. Relying on DescendantAdded/DescendantRemoving to cache instances is stupid and this should be fixed properly.

1 Like

Based on the conversation above, it seems this is an implementation detail of the engine and not necessarily a bug. I recommend filing a feature request if this is blocking your development work; it will be more likely to be considered that way.

More importantly, as noted above, there’s not a great way to fix this. The best the engine could do is give you reference stability for instances that are parented to the DataModel, in which case:

  1. There would be a non-negligible performance overhead.
  2. You’d still have to worry about getting different behavior depending on the particular circumstance: whether or not your instance is parented to the DataModel.