Using too many wait()s simultaneously will cause a significant performance dip, or in some cases will cause your code to yield indefinitely. After testing game.Debris:AddItem(), I realised it waits in intervals similar to wait(), so my concern is: if I use too many game.Debris:AddItem() calls simultaneously, will that cause some throttling and thus kill my performance?
This is the test that led me to that conclusion:
local lastTime = os.clock()
local p = Instance.new("Part", workspace)
p.AncestryChanged:Connect(function()
    print("Destroyed after", os.clock() - lastTime) -- This prints around 0.03
end)
game.Debris:AddItem(p, 0.01)
Should I instead opt for something like this?
local function Debris(item, period)
    task.spawn(function()
        task.wait(period)
        if item and item.Parent ~= nil then
            item:Destroy()
        end
    end)
end
The function above will never throttle because task.wait() runs on Heartbeat. If you have any suggestions to make it better, please share.
I’m not convinced that it’s worth abandoning Debris Service. I just ran a benchmark in Studio with this code:
local debris = game:GetService("Debris")

task.wait(5)

local folder = Instance.new("Folder")
folder.Parent = workspace

local tasktimes = table.create(100)
for test = 1, 100 do
    local start = os.clock()
    for i = 1, 1000 do
        local part = Instance.new("Part")
        part.Position = Vector3.new(math.random(-500, 500), math.random(-500, 500), math.random(-500, 500))
        part.Parent = folder
        task.delay(1, function() part:Destroy() end)
    end
    task.wait(2)
    local stop = os.clock()
    tasktimes[test] = (stop - start)
end

local tasktotal = 0
for i = 1, 100 do
    tasktotal += tasktimes[i]
end

local debristimes = table.create(100)
for test = 1, 100 do
    local start = os.clock()
    for i = 1, 1000 do
        local part = Instance.new("Part")
        part.Position = Vector3.new(math.random(-500, 500), math.random(-500, 500), math.random(-500, 500))
        part.Parent = folder
        debris:AddItem(part, 1)
    end
    task.wait(2)
    local stop = os.clock()
    debristimes[test] = (stop - start)
end

local debristotal = 0
for i = 1, 100 do
    debristotal += debristimes[i]
end

print("Duration Average for task.delay: "..(tasktotal/100))
print("Duration Average for debris service: "..(debristotal/100))
And got these results:
So it really seems to me like they take about the same amount of time either way.
While this is a good benchmark, it’s running on your presumably powerful machine with no external load on the environment. The real test will happen on a server full of players, with other variables coming into play. According to your benchmark, task.delay will help ease a lot of load on the server, yielding a better experience.
I think I remembered the issue wrong. The actual problem is that if the object has already been destroyed (by another script, or by falling into the void), the code will error.
I agree that Debris should not be abandoned, but not for the reason you mentioned. For one, Debris service doesn’t throttle, and I don’t even know if throttling exists anymore; they might have gotten rid of it.
And the if-statement check will always pass, because the instance variable won’t just be gone after the delay has passed; that’s not how the GC works. I suspect you meant to do something like this:
local function destroy(instance: Instance)
    if instance.Parent then
        instance:Destroy()
    end
end

local function delayedDestroy(instance: Instance, seconds: number)
    task.delay(seconds, destroy, instance)
end
But even that would be redundant, because :Destroy() already handles it: calling it on an already destroyed instance doesn’t error.
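For illustration, here is a quick check you can run in Studio’s command bar (a minimal sketch; the part name and lifetime are arbitrary):

```lua
local p = Instance.new("Part")
p.Parent = workspace

p:Destroy()
p:Destroy() -- no error: Destroy() on an already destroyed instance is a no-op
```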
The real reason to keep using Debris is that task.delay will create a thread from the function, which might be slower than whatever Debris does internally. Another reason is that it might destroy instances faster, since it’s internal. Either way, task.delay remains a nice convenience when used sparingly, but for many instances, definitely use Debris.
I’m not sure why you would need a whole module instead of just doing:
task.delay(2, part.Destroy, part)
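This works because `part:Destroy()` is just sugar for `part.Destroy(part)`, so passing the function reference plus the instance as an argument avoids allocating a closure. The version below does the same thing but creates a new function for every call:

```lua
-- equivalent, but allocates a fresh closure each time:
task.delay(2, function()
    part:Destroy()
end)
```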
Also, some of the posts you linked are outdated, especially the one named “Purpose of Debris Service”. The benchmark they used now shows that Debris is on par with task.delay.
And here I was, thinking Debris was still maintained… Why didn’t Roblox deprecate it sooner? There isn’t even a warning in the documentation.
P.S. this code does create a new thread, though. I think the best course of action that Roblox should take is just update the implementation of Debris. For now, though, Wisp has provided this workaround:
Don’t be aggressive now. He recommends it because it’s better than Debris, not because it’s the best alternative. Creating a function and a thread just to destroy a single instance can definitely be avoided, and doing it properly will result in major performance gains.
The module is supposed to be an updated 1:1 substitution for DebrisService, which doesn’t need to spawn new threads since everything is clumped together and scheduled while avoiding the legacy roblox scheduler.
You don’t need to use DebrisGobbler or any other module for that matter; it’s just that I’d prefer not to reinvent the wheel… unless the wheel is broken, like DebrisService is.
In fact, legacy DebrisService is so bad to the point where creating a new thread is more efficient. Wow…
There is no reason to create a new thread to asynchronously destroy one single item when you can just chuck thousands of instances into a queue to be destroyed synchronously.
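As a rough illustration of that idea (this is only a sketch of a Heartbeat-driven destruction queue, not DebrisGobbler’s actual implementation; the names `queue` and `addItem` are made up here):

```lua
local RunService = game:GetService("RunService")

local queue = {} -- each entry: { expiry = number, item = Instance }

local function addItem(item: Instance, lifetime: number)
    table.insert(queue, { expiry = os.clock() + lifetime, item = item })
end

-- One connection services every queued item; no per-item threads are created.
RunService.Heartbeat:Connect(function()
    local now = os.clock()
    for i = #queue, 1, -1 do
        if queue[i].expiry <= now then
            queue[i].item:Destroy()
            table.remove(queue, i)
        end
    end
end)
```

A real implementation would use a priority queue instead of a linear scan so that each frame only touches the items that are actually due, but the point stands: one scheduler loop replaces thousands of coroutines.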
local DebrisGobbler = require(game:GetService("ReplicatedStorage").DebrisGobbler.DebrisGobbler)

local start = os.clock()
for i = 1, 50000 do
    local part = Instance.new("Part")
    DebrisGobbler:AddItem(part, 5)
end
print("DebrisGobbler: " .. os.clock() - start)

task.wait(5)

start = os.clock()
for i = 1, 50000 do
    local part = Instance.new("Part")
    task.delay(5, part.Destroy, part)
end
print("task.delay: " .. os.clock() - start)
You only benchmarked the difference between DebrisGobbler’s insertion and task.delay, and there’s a fundamental difference between the two: with task.delay you created 50k coroutines, which is completely unnecessary. DebrisGobbler doesn’t do that, and that’s what makes it a lot better when dealing with many instances.
Also, a simple benchmark like that doesn’t account for outliers and the like.
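For example, running each benchmark several times and reporting the median rather than a single measurement makes outliers much less of a problem. A sketch (here `benchmark` is a stand-in for either of the two insertion loops above):

```lua
local function median(samples)
    table.sort(samples)
    local n = #samples
    if n % 2 == 1 then
        return samples[(n + 1) // 2]
    end
    return (samples[n // 2] + samples[n // 2 + 1]) / 2
end

local samples = {}
for run = 1, 25 do
    local start = os.clock()
    benchmark() -- hypothetical: one of the two insertion loops above
    samples[run] = os.clock() - start
end
print("median:", median(samples))
```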