If you add a lot of parts in succession it becomes roughly twice as slow, or possibly more (comment out the refs code below to compare):
-- Hold a reference to every instance added under the DataModel
local refs = {}
game.DescendantAdded:Connect(function(object)
    refs[object] = true
end)
game.DescendantRemoving:Connect(function(object)
    refs[object] = nil
end)

-- Time creating 50,000 parts...
local start = tick()
local new = Instance.new
local replicatedStorage = game.ReplicatedStorage
for i = 1, 50000 do
    new("Part", replicatedStorage)
end
print(tick() - start)

wait(1)

-- ...and then clearing them all out again
start = tick()
replicatedStorage:ClearAllChildren()
print(tick() - start)
On second thought, maybe it’s okay. There aren’t really that many operations that result in a whole bunch of instances being added to the hierarchy at once; most of the really large instance operations are moving big models between ReplicatedStorage and the Workspace, which wouldn’t trigger it.
I would still be worried about using it for mobile players though, since that extra script execution is a lot less free there. If you do something like copy a 100-instance GUI into your character, that’s 100 extra times the engine has to create a new thread and call over into Lua if you have that event hooked up.
That’s good to know. Has it always been that way? I’m pretty sure it didn’t do that ages ago when I was figuring out how the threading / scheduling worked. (EDIT: It actually can’t be that recent, since it must already have been in place when it would have caused the TweenPosition/Size completion function issues in 2013. Maybe I just didn’t notice it originally.)
Seems way less bad for perf with that in mind. And I imagine there are already some global listeners for added-to / removed-from hierarchy internally in the engine, so it’s not like you’re making it do extra work by making the connection.
There’s still some additional overhead resuming the Lua thread.
However, that signal is already being used in other cases. For example, I think ChangeHistoryService uses it in Studio, but not while testing. That usage is in C++, though.
My guess is that if you’re only using this to track a few instances, CollectionService can help you keep those instances alive easily while still maintaining performance.
iirc this happens as long as you don’t yield in the connected function.
What do you mean? CollectionService tags are removed from objects when they’re destroyed. (This is useful, by the way: you can create a makeshift Destroyed signal with a CollectionService tag.)
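A minimal sketch of that makeshift Destroyed signal idea (the tag name and the onDestroyed helper are purely illustrative, not an official API):

local CollectionService = game:GetService("CollectionService")
local TAG = "DestroyedSignal"

-- Fires the callback once when the tagged instance is destroyed
-- (destroying it removes it from the DataModel, which fires the tag's removed signal).
local function onDestroyed(instance, callback)
    CollectionService:AddTag(instance, TAG)
    local connection
    connection = CollectionService:GetInstanceRemovedSignal(TAG):Connect(function(removed)
        if removed == instance then
            connection:Disconnect()
            callback()
        end
    end)
end

local part = Instance.new("Part")
part.Parent = workspace
onDestroyed(part, function()
    print("part destroyed")
end)
part:Destroy()

Note that the removed signal also fires if the instance is merely reparented out of the DataModel, so strictly speaking this is a "destroyed or removed" signal.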
Instead of listening to DescendantAdded or DescendantRemoving, you can bind to a tag and only track references for the specific instances you’re interested in (tagged via CollectionService).
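Roughly, that approach looks like this (the tag name "Tracked" is just a placeholder):

local CollectionService = game:GetService("CollectionService")
local TAG = "Tracked"

-- Hold strong references only for tagged instances instead of
-- every descendant of the DataModel.
local refs = {}

for _, instance in ipairs(CollectionService:GetTagged(TAG)) do
    refs[instance] = true
end
CollectionService:GetInstanceAddedSignal(TAG):Connect(function(instance)
    refs[instance] = true
end)
CollectionService:GetInstanceRemovedSignal(TAG):Connect(function(instance)
    refs[instance] = nil
end)

-- Tag the instances you care about, e.g. CollectionService:AddTag(someModel, TAG)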
It’s not really about destruction, but about object identity. When pushing instances into the VM, the engine makes sure that the same instance object is represented by the same Lua userdata value.
Do you know if it’s actually mainly about identity?
I would have thought that it was mainly about performance and not having to create a new Ref object / modify the ref count for every single indexing “.” operation, and that getting proper identity was just a convenient side effect. You get some quasi-identity from overriding the eq metamethod after all (except for not being able to use the instances as table keys).
That particular table you’re referring to is there to preserve object identity, mainly for cases when you bounce an instance via C++. Otherwise, the userdata value that comes back from a C++ function would refer to the same object but be a different value, which would prevent e.g. table lookups using instances as keys.
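A small illustration of why that matters (just a throwaway Part in the Workspace):

local part = Instance.new("Part")
part.Name = "IdentityTest"
part.Parent = workspace

local seen = {}
seen[part] = true

-- "Bouncing" the instance through C++: FindFirstChild has to hand back the
-- very same userdata value as `part` for the raw table lookup to succeed.
local bounced = workspace:FindFirstChild("IdentityTest")
print(bounced == part) --> true
print(seen[bounced]) --> true, because object identity is preserved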
In fact, BrickColor as a table key is exactly this: a lack of object identity for value types bound via userdata.
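For contrast, the BrickColor case (at least as of this discussion): equality works through the __eq metamethod, but each call produces a distinct userdata, so table keys don’t collapse.

local colors = {}
colors[BrickColor.new("Bright red")] = true

print(BrickColor.new("Bright red") == BrickColor.new("Bright red")) --> true (__eq metamethod)
print(colors[BrickColor.new("Bright red")]) --> nil: equal values, but different userdata keys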
Bumping this thread because this is incredibly annoying and shouldn’t be intended behavior. Relying on DescendantAdded/DescendantRemoving to cache instances is stupid and this should be fixed properly.
Based on the conversation above, it seems this is an implementation detail of the engine and not necessarily a bug. I recommend filing a feature request if this is blocking your development work; it will be more likely to be considered that way.
More importantly, as noted above, there’s not a great way to fix this. The best the engine could do is give you reference stability for instances that are parented to the DataModel, in which case:
There would be a non-negligible performance overhead.
You’d still have to worry about what behavior you get depending on the particular circumstances (whether your thing is parented to the DataModel).
Regarding “There would be a non-negligible performance overhead”: yes, but keeping track of instances under the DataModel with DescendantAdded/Removing or other connections most likely carries an even larger overhead.
Is this fix really so complicated and performance-damaging to add? From my perspective, it just seems like adding a special case for collecting instances (checking if parent ~= nil).
Unfortunately yes. The key thing to recognize is that when you write print(game.a.b.c.d), you’re allocating 4 different object references, and doing the additional tracking to hold permanent references while stuff is in the DataModel would have to happen on every single one of those.
You also add a bunch of memory pressure, because as soon as you’ve gotten an object reference, that reference would have to be held until the object is removed from the DataModel in order to ensure reference stability, whereas right now it can be collected right away if you didn’t end up explicitly holding onto it.
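To make the cost concrete: by that description, every “.” access that yields an instance allocates an object reference, which is why code touching instances in a hot loop usually caches them in a local (Workspace/Terrain here are just stand-in names):

local terrain = game.Workspace.Terrain -- two object references allocated, once
for i = 1, 1000 do
    -- reading a plain property off the cached local allocates no new instance references
    local name = terrain.Name
end
-- versus writing game.Workspace.Terrain.Name inside the loop, which would allocate
-- two object references on every iteration, each one needing the proposed tracking too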
I guess this is best kept in its current state then. Any ideas for alternatives that don’t require connections or caching instances in a messy way? At the moment it seems like the only ways to fix this issue come with performance costs (either in the engine in C++ or on the user’s side in Lua).
Every time a character is added, a model (‘Appearance’, filled with custom clothing) is added to it, and I add that model to an ignore list used by raycasting so that rays don’t collide with the clothing parts. The ignore list is a weak table, so I don’t have to worry about connections or checking whether the characters have been removed.
Edit: Just remembered the CanQuery property now exists so I’ll probably switch to that, but this was my original use case.
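For reference, a rough sketch of that original setup (the “Appearance” model name comes from the post; FindPartOnRayWithIgnoreList is the older raycasting API, used here since the setup predates CanQuery):

local ignoreList = setmetatable({}, { __mode = "k" }) -- weak keys, so no cleanup connections are needed

game.Players.PlayerAdded:Connect(function(player)
    player.CharacterAdded:Connect(function(character)
        local appearance = character:WaitForChild("Appearance")
        ignoreList[appearance] = true
    end)
end)

-- Collect the surviving entries into an array before each raycast:
local function buildIgnoreList()
    local list = {}
    for model in pairs(ignoreList) do
        table.insert(list, model)
    end
    return list
end

-- e.g. workspace:FindPartOnRayWithIgnoreList(ray, buildIgnoreList())

The caveat, per this whole thread, is that instance userdata in a weak table can be collected while the model is still in the game, so entries may silently disappear unless something else holds a strong reference to them.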