As a Roblox developer, it is impossible to know when your table/“object” is garbage collected and “destroyed” in the general case.
Having the `__gc` metamethod would also help prevent bugs in cases where developers forget to call their Destroy method every time their objects need to be destroyed.
Also, I claim it would be faster than calling the method from the Lua side, although please correct me if this is false (my reasoning is that calling C functions from Lua is much faster than calling Lua functions).
Yeah, what Kampfkarren said. Because the table is no longer in memory, it isn’t tied to any script object, and thus there is no way to identify the security context level of the thread. This would allow users to break out of the Lua sandbox.
Either way, Roblox is usually pretty smart about handling the garbage collection of Instances on the Lua side of things. If a table is garbage collected because the engine is confident that no other Lua code can reach it, the InstanceBridge userdata references in the table will be collected along with it. If you deliberately keep connections alive in other threads, it is your responsibility to track and dispose of them appropriately.
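To illustrate the “track and dispose” responsibility, here’s a minimal sketch of the pattern the community usually calls a Maid. The `Maid` name and its methods are conventions I’m assuming for illustration, not a built-in Roblox API:

```lua
-- Minimal Maid sketch: collects connections/cleanup callbacks so they can
-- all be disposed of in one place when the owning object is destroyed.
local Maid = {}
Maid.__index = Maid

function Maid.new()
	return setmetatable({ _tasks = {} }, Maid)
end

function Maid:GiveTask(job)
	-- accepts anything with a Disconnect method (e.g. an event connection),
	-- or a plain cleanup function
	table.insert(self._tasks, job)
end

function Maid:Destroy()
	for _, job in ipairs(self._tasks) do
		if type(job) == "function" then
			job()
		else
			job:Disconnect()
		end
	end
	self._tasks = {}
end
```

In a Roblox script you would hand every `event:Connect(...)` result to the maid and call `maid:Destroy()` exactly once when the object’s lifetime ends.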
@Maximum_ADHD can’t Roblox assume the context level based on the initial thread? Or can’t they just assume normal-script context level for tables (since, as of now, I assume `__gc` doesn’t exist for Roblox CoreScripts either, so it wouldn’t matter to Roblox)?
That’s fair I suppose, but I don’t think it’s unreasonable to use external tracking for the removal of an object. If you’re storing references to the object across multiple scripts, then you could perhaps write something similar to a shared_ptr to figure out when it should be freed from memory.
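A shared_ptr-style wrapper in Lua could look something like the sketch below. `SharedRef` and its method names are my own invention for illustration; the point is just that the last `Release` triggers the destructor:

```lua
-- Hypothetical reference-counted wrapper, loosely modeled on C++'s shared_ptr.
local SharedRef = {}
SharedRef.__index = SharedRef

function SharedRef.new(object, destructor)
	return setmetatable({
		_object = object,
		_destructor = destructor, -- called once the last holder releases
		_count = 0,
	}, SharedRef)
end

function SharedRef:Acquire()
	self._count = self._count + 1
	return self._object
end

function SharedRef:Release()
	self._count = self._count - 1
	if self._count <= 0 and self._object ~= nil then
		self._destructor(self._object)
		self._object = nil
	end
end
```

Each script that wants the object calls `Acquire` and later `Release`; the object is destroyed exactly once, when the count hits zero.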
Wouldn’t you still need to do something like `shared_ptr:Release()` or `shared_ptr:Destroy()`, so you still have the same problem of having to manually destroy something?
Yes, but the point is that there would be a single source of truth for what is holding the object in memory, and you can track down what isn’t freeing it more directly by using debug.traceback.
This isn’t documented anywhere yet, but debug.traceback actually has multiple overloads:
`string debug.traceback()`
Returns a stack trace of the calling thread.

`string debug.traceback(string msg)`
Returns `msg` followed by `\n`, followed by a stack trace of the calling thread.

`string debug.traceback(string msg, int level)`
Returns `msg` followed by `\n`, followed by a stack trace of the calling thread starting at the specified call stack level.

`string debug.traceback(thread co)`
Returns a stack trace of the specified coroutine thread.

`string debug.traceback(thread co, string msg)`
Returns `msg` followed by `\n`, followed by a stack trace of the specified coroutine thread.

`string debug.traceback(thread co, string msg, int level)`
Returns `msg` followed by `\n`, followed by a stack trace of the specified coroutine thread starting at the specified call stack level.
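A quick demonstration of a few of those overloads (this is plain Lua 5.1 behavior; the same shapes apply in Roblox):

```lua
local function inner()
	-- no arguments: trace of the calling thread, starting here
	print(debug.traceback())
	-- message + level 2: prefix the trace with "oops" and skip inner() itself
	print(debug.traceback("oops", 2))
end
inner()

-- thread overload: inspect a suspended coroutine from outside it
local co = coroutine.create(function()
	coroutine.yield()
end)
coroutine.resume(co)
print(debug.traceback(co, "suspended at:"))
```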
That’s cool, I never knew about debug.traceback in the first place xd
But how would it work with do/end blocks or other scope-creating constructs besides functions (i.e. ones that don’t create a call stack level)?
```lua
local t
coroutine.wrap(function()
	do
		t = coroutine.running()
	end
	coroutine.yield()
end)()
print(debug.traceback(t))
```
(the output of that is the same as if the yield were inside the do/end block, except for the line number)
Also, there’s no way to check when the traceback changes.
So wouldn’t it just be better to embed a weak table into each shared-ptr object and check at intervals for when the key for the corresponding table/“object” is removed (at which point I’d call my own `__gc` ‘metamethod’)?
But a `while true do wait(sampleInterval) checkAllSharedPtrs() end` loop is a little messy.
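The weak-table idea above can be sketched roughly like this. All the names here (`watch`, `sweep`, the registry layout) are assumptions of mine, and the `onCollected` callbacks merely stand in for a real `__gc` metamethod:

```lua
-- Poll-based finalization sketch: a weak-keyed table lets the GC remove
-- entries, and a periodic sweep fires callbacks for objects that vanished.
local registry = {}                                  -- [id] = callback
local watched = setmetatable({}, { __mode = "k" })   -- weak keys: doesn't keep objects alive

local nextId = 0
local function watch(object, onCollected)
	nextId = nextId + 1
	watched[object] = nextId
	registry[nextId] = onCollected
end

local function sweep()
	-- collect the ids of everything still alive
	local alive = {}
	for _, id in pairs(watched) do
		alive[id] = true
	end
	-- anything registered but no longer alive was collected since the last sweep
	for id, callback in pairs(registry) do
		if not alive[id] then
			registry[id] = nil
			callback()
		end
	end
end
```

In a Roblox script, driving `sweep()` from a `RunService.Heartbeat` connection would likely be tidier than a `while true do wait() end` loop, though the fundamental limitation remains: collection is detected only at the next sweep, not at the moment it happens.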