I’m trying to make a system where you can sell your whole inventory at once, but currently it freezes Studio for 5-10 seconds while selling the inventory. I currently have the max inventory size set to 1000, after which no more items can go into your inventory.
My function:
RS.RemoteFunctions.SellInventory.OnServerInvoke = function(plr: Player)
	local priceOf = itemPrice
	local totalPrice = 0
	local containers = {plr.Character, plr.Backpack}

	for _, container in containers do
		for _, tool in container:GetChildren() do
			if tool:HasTag("Sellable") then
				totalPrice += priceOf(tool)
				tool:Destroy()
			end
		end
	end

	plr.leaderstats.Credits.Value += totalPrice
	return math.round(totalPrice * 100) / 100
end
If there is any way to even slightly improve performance please let me know ASAP!
local Destroy = game.Destroy
local GetChildren = game.GetChildren

RS.RemoteFunctions.SellInventory.OnServerInvoke = function(plr: Player): number
	local totalPrice: number = 0

	for _, tool in GetChildren(plr.Character) do
		if tool:HasTag("Sellable") then
			totalPrice += itemPrice(tool)
			Destroy(tool)
		end
	end

	for _, tool in GetChildren(plr.Backpack) do
		if tool:HasTag("Sellable") then
			totalPrice += itemPrice(tool)
			Destroy(tool)
		end
	end

	plr.leaderstats.Credits.Value += totalPrice
	return math.round(totalPrice * 100) / 100
end
If I did task.wait() for each one, it would take 1000 frames to sell a full inventory. 1000 frames / 60 fps ≈ 16.66 seconds you would have to wait to get your money.
task.spawn doesn’t create a “thread” in the multi-threading sense; it can still cause every other script and coroutine to hang, since all code runs serially unless you’re using Parallel Luau and running in a parallel execution phase.
The client shouldn’t be affected by server-side lag, but that invocation might take a while to return.
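One way to avoid the client sitting in a yielded InvokeServer call is to use RemoteEvents in both directions and deliver the result whenever the sale actually finishes. A minimal sketch, assuming you add SellInventory and SellResult RemoteEvents under RS.RemoteEvents, and that sellAllItems is a hypothetical server-side helper containing your selling loop:

```lua
-- CLIENT: fire and forget, then listen for the result.
RS.RemoteEvents.SellResult.OnClientEvent:Connect(function(totalPrice)
	print("Sold inventory for", totalPrice)
end)
RS.RemoteEvents.SellInventory:FireServer()

-- SERVER: do the (possibly slow) work, then report back.
RS.RemoteEvents.SellInventory.OnServerEvent:Connect(function(plr)
	local totalPrice = sellAllItems(plr) -- may yield internally; client isn't blocked
	RS.RemoteEvents.SellResult:FireClient(plr, totalPrice)
end)
```

This way the client UI stays responsive even if the server spreads the sale across many frames.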
I believe this method would be best for keeping performance high, but it would be very hard for a beginner to build. And although it would be a lot more performant, it does come with its own drawbacks.
It can still have the same issue if you somehow end up with a table of insane length, so it may still be in your best interest to implement some sort of anti-lag delay for it, maybe around one task.wait() per 1000 items.
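The anti-lag delay suggested above could look something like this for a table-based inventory. A hedged sketch: the inventory shape ({string} of item ids), the priceOf callback, and the sellTableInventory name are all assumptions for illustration:

```lua
-- Hypothetical: sell a table-based inventory, yielding once per 1000 entries
-- so a single huge sale can't stall the whole frame.
local YIELD_EVERY = 1000

local function sellTableInventory(inventory: {string}, priceOf: (string) -> number): number
	local total = 0
	for i, itemId in inventory do
		total += priceOf(itemId)
		if i % YIELD_EVERY == 0 then
			task.wait() -- spread the work across frames
		end
	end
	table.clear(inventory)
	return total
end
```

Since iterating a plain table is far cheaper than destroying Instances, a much higher yield interval than the 50 used for Tools is usually fine here.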
Replication is indeed the biggest problem with tables; I even struggle with table replication myself. I haven’t really experimented with table limits, but it would still be a good idea to keep a measure like that in case it does somehow happen, maybe with a higher number. It’s best to just experiment to see what works best for the scenario.
This can easily be fixed by yielding every x iterations, for example:
local x = 50 -- yield every 50 iterations
local counter = 0

for _, container in containers do
	for _, tool in container:GetChildren() do
		if tool:HasTag("Sellable") then
			totalPrice += priceOf(tool)
			tool:Destroy()
		end

		counter += 1
		-- if counter is a multiple of x (50), wait for a single frame
		if counter % x == 0 then task.wait() end
	end
end
This basically tells the engine to run 50 iterations, wait a frame, run 50 more, wait, and so on until it ends. The function no longer finishes instantly, but it reduces lag by a lot. If you want to decrease lag further you make x smaller, and if you want it to finish faster you increase it (you must find the sweet spot).
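One caveat worth noting: because task.wait() yields inside OnServerInvoke, the same player could invoke the remote again while a sale is still running. A per-player debounce guards against that. A sketch, assuming the chunked loop above is wrapped in a hypothetical sellEverything(plr) helper that returns the total:

```lua
-- Hypothetical: prevent overlapping sales while the chunked loop yields.
local selling: {[Player]: boolean} = {}

RS.RemoteFunctions.SellInventory.OnServerInvoke = function(plr: Player)
	if selling[plr] then
		return 0 -- a sale is already in progress for this player
	end
	selling[plr] = true

	local total = sellEverything(plr) -- yields every x items internally

	selling[plr] = nil
	plr.leaderstats.Credits.Value += total
	return math.round(total * 100) / 100
end
```

Without the guard, items tagged Sellable could be counted by two overlapping invocations before being destroyed, effectively double-paying the player.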