Something character-related doesn’t seem to be cleaning up memory. I found two ways to reproduce it: one is to keep respawning the character, the other is to equip and destroy hats on the character. Everything added to the character seems to be cached somewhere in memory forever. This happens on an empty baseplate as well.
Steps to reproduce:
Add a hat to character
Destroy the hat
MemoryLeak.rbxl (19.8 KB)
There are 2 scripts in ServerScriptService. LoadCharacter works but is much slower.
Yeah, I’m guessing it’s because characters get removed using Remove() rather than Destroy(). Try connecting CharacterRemoving and destroying the character when it fires (do this in a server Script).
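A minimal sketch of that suggestion, assuming a regular Script in ServerScriptService (this is illustrative, not code from the original post):

local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
	player.CharacterRemoving:Connect(function(character)
		-- Explicitly destroy the old character model so its references are released,
		-- instead of letting it just be removed (parented to nil).
		character:Destroy()
	end)
end)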
Yeah, this seems to be happening to me too when I delete objects from player characters server-side. I think this is an issue only Roblox can fix, and it seems serious.
Does deleting the character both server-side and client-side cause this issue? Perhaps the character is deleted on the server, but because the client has authority over the character it still stays somewhere in memory due to it bypassing Filtering Enabled? Just speculation here. Maybe connecting CharacterRemoving on the server and firing an event to the client to delete the character will work?
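A rough sketch of that idea, assuming a RemoteEvent named DestroyCharacter placed in ReplicatedStorage (the event name and layout are made up for illustration, and this is only speculation like the post above):

-- Server Script: also ask the owning client to destroy its copy of the character.
local Players = game:GetService("Players")
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local destroyCharacterEvent = ReplicatedStorage:WaitForChild("DestroyCharacter")

Players.PlayerAdded:Connect(function(player)
	player.CharacterRemoving:Connect(function(character)
		destroyCharacterEvent:FireClient(player, character)
		character:Destroy()
	end)
end)

-- LocalScript (e.g. in StarterPlayerScripts): destroy the local copy as well.
local destroyCharacterEvent = game:GetService("ReplicatedStorage"):WaitForChild("DestroyCharacter")

destroyCharacterEvent.OnClientEvent:Connect(function(character)
	if character then
		character:Destroy()
	end
end)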
I’m not sure exactly how it works, but I do know that clients can delete anything in their character, and that only deleting things from their characters seems to be causing issues. It’s a mysterious little bug. I’m hoping a Roblox engineer can reply soon.
I’ve been noticing something similar in Royale High ever since I started tracking memory a few months ago. It slowly creeps up and it seems to be related to old characters not being removed from memory. After going through every single one of the scripts in our game, I couldn’t find any player or character references that weren’t flushed, so I think this is probably a bug.
I ran another test this morning based on this thread, and it does look like calling LoadCharacter in a loop does indeed leak memory. I’ve graphed the results and attached a repro.
The first screenshot shows when the LoadCharacter loop is not running.
Observe the graph while toggling the “Load character loop” button: it remains stable when the LoadCharacter loop is off, and leaks memory when it is on.
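For reference, a sketch of the kind of loop being toggled, assuming a server Script and a fixed respawn interval (the toggle button and graphing from the attached repro are omitted here):

-- Server Script: respawn every player in a loop to reproduce the leak.
local Players = game:GetService("Players")

while true do
	wait(1) -- respawn interval is an assumption; shorter intervals leak faster
	for _, player in ipairs(Players:GetPlayers()) do
		player:LoadCharacter()
	end
end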
@brinkokevin For the hat repro, I think the issue is actually network queueing. If you let the scripts run for a bit and then disable the script on the server, you will still see hats coming in on the client for several seconds. The reason is that the volume of data you are generating from these hats is overwhelming the available bandwidth, so the server ends up building a queue of hats to be sent out to the client. After disabling the script, some of the memory is reclaimed.
As a performance strategy, some of these queues retain the memory used for buffering these items as an optimization (if you have a lot of queueing for a user, you may continue to have queueing for that user). If you do Start Server + Start Player, you can see that the memory increase is on the server, and you can see that server memory usage drop sharply if you close the player Studio window.
@Ice7 The issue did not repro for me; I suspect it might be related to specific gear items (or some other difference between our avatars). If you explicitly :Destroy() the old avatar before loading a new one, does it help?
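A minimal sketch of that suggestion (illustrative only, server-side):

local Players = game:GetService("Players")

local function respawn(player)
	local oldCharacter = player.Character
	if oldCharacter then
		oldCharacter:Destroy() -- explicitly release the old avatar first
	end
	player:LoadCharacter()
end

-- e.g. call respawn(player) wherever LoadCharacter was being called directly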
Yeah, that makes sense and explains why it goes up so much faster when adding hats quickly. However, it still seems to increase slowly when adding hats slowly or when respawning the character over a longer period. I just tested it with 1 hat per wait() and I can see that network/replicator and network/raknet are not increasing like they did before. Instances, Signals and PhysicsParts do still increase, which seems to be the same as with LoadCharacter.
Here is an edited version of the repro: I let it run for about 30 seconds and then print memory every 30 seconds after it’s done. Memory does decrease as the queued hats come through, but it stops decreasing after some time. In this case network/replicator still seems to stay really high for me.
wait(10) -- give the place time to settle before taking the first measurement
local hat = script.Hat
print("Before", game.Stats:GetTotalMemoryUsageMb())

-- Repeatedly destroy every Accessory on each character and re-add 100 hats
for i = 1, 1000 do
	wait()
	for _, player in ipairs(game.Players:GetPlayers()) do
		if player.Character then
			for _, x in ipairs(player.Character:GetChildren()) do
				if x:IsA("Accessory") then
					x:Destroy()
				end
			end
			for i = 1, 100 do
				hat:Clone().Parent = player.Character
			end
		end
	end
end

-- After the churn stops, print total memory every 30 seconds to watch it drain
local i = 0
while wait(30) do
	i = i + 1
	print("After" .. i, game.Stats:GetTotalMemoryUsageMb())
end
Edit:
Tested this again with more players and closing the player instances. network/replicator clears up after disconnecting the players, but PhysicsParts, Instances and Signals don’t seem to go down.
Hello, the character memory leak problem is happening to me too. I traced the problem to destroying LocalScripts in players’ Characters client-side, while also having a server-side Debris:AddItem call as a fallback to delete the script in case it hasn’t been deleted within one second. I still do not understand why this issue occurs, but I can send some code I use if that’s necessary. The memory leak ends up negatively affecting a client’s memory usage, but not the server’s.
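Something along these lines, simplified (the RemoteEvent name is just for illustration):

-- Server Script: remove a LocalScript from a character, with a Debris fallback.
local Debris = game:GetService("Debris")
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local removeScriptEvent = ReplicatedStorage:WaitForChild("RemoveCharacterScript") -- RemoteEvent

local function removeCharacterScript(player, localScript)
	-- Ask the client to destroy it immediately...
	removeScriptEvent:FireClient(player, localScript)
	-- ...and make sure the server deletes it anyway after one second.
	Debris:AddItem(localScript, 1)
end

-- LocalScript: destroy the script on the client as soon as the server asks.
local removeScriptEvent = game:GetService("ReplicatedStorage"):WaitForChild("RemoveCharacterScript")
removeScriptEvent.OnClientEvent:Connect(function(localScript)
	if localScript then
		localScript:Destroy()
	end
end)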
Hi @brinkokevin! Thanks so much for your report, it helped us to identify an actual bug. Reports like this help us make the Roblox platform better for everyone. We now have a fix, and it will be delivered in one of the upcoming releases.
If you are interested, I will be happy to share the details after the fix has been rolled out.
Please do. This bug had a detrimental effect on client performance and would cause clients to perform horribly over time. Knowing why this happened would satisfy my curiosity.
Sorry for taking so long with the answer, but it takes some time for the code to make it to production. We at Roblox are doing our best to ensure every change is carefully tested before reaching our users. And getting to production is not the end of the journey, as you may be aware: we then gradually turn the changes on for groups of users. When the rollout is finally complete, I promise to get back to you.
Happy to announce that the bug fix is deployed and works well.
Here is the story of what happened in the demo. At first glance it was hard to tell whether the leak was really there, because running at full power the demo depleted the server CPU quota allocated to it. Most game systems do not free memory immediately after it is no longer used. Instead they save it for future use, since reallocating memory may not be cheap, and they only actually free memory during maintenance cycles, once usage reaches certain thresholds (which may differ across game systems, environments, situations and other factors). These procedures can be called garbage collection, or GC, though it should be noted that in some contexts only some of them are meant by this term.
However, when the CPU is overloaded, the job scheduler may decide not to schedule GC, since there are not enough CPU resources left for the game’s actual work. That is why, if you see increasing memory consumption under heavy CPU load, it does not necessarily mean a memory leak. How can you tell that the server is experiencing CPU overload? There are several ways. One of them (maybe the easiest) is to check the ping. If you have a stable network connection but the ping is high and growing, that probably means the server is overwhelmed. That is exactly what was happening with the demo in this topic.
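As an aside, one rough way to watch ping from a script is Player:GetNetworkPing(); here is a small sketch (the sampling interval is an arbitrary choice, not a recommendation):

-- Server Script sketch: periodically log each player's ping to spot a growing trend.
local Players = game:GetService("Players")

while true do
	wait(5) -- sample interval, chosen arbitrarily
	for _, player in ipairs(Players:GetPlayers()) do
		local pingMs = player:GetNetworkPing() * 1000 -- GetNetworkPing returns seconds
		print(player.Name .. " ping: " .. math.floor(pingMs) .. " ms")
	end
end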
So we needed to dig deeper. I modified the initial demo a little bit to make it possible to adjust the rate of hat creation/destruction (the CPU load). The higher the rate, the easier it is to detect a leak (if there is one), but too high a rate leads to CPU overload and the problems I mentioned above. Making the rate adjustable made it easy to find the optimal rate on the fly.
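An adjustable rate can be as simple as reading a NumberValue that you edit in the Explorer while the game is running; here is a sketch (the value name and placement are illustrative, not the actual modified demo):

-- Server Script sketch: hat create/destroy loop whose rate can be tuned live.
local Players = game:GetService("Players")
local hatsPerCycle = script:WaitForChild("HatsPerCycle") -- NumberValue, editable on the fly
local hatTemplate = script:WaitForChild("Hat") -- Accessory template

while true do
	wait()
	for _, player in ipairs(Players:GetPlayers()) do
		local character = player.Character
		if character then
			for _, child in ipairs(character:GetChildren()) do
				if child:IsA("Accessory") then
					child:Destroy()
				end
			end
			for i = 1, hatsPerCycle.Value do
				hatTemplate:Clone().Parent = character
			end
		end
	end
end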
In the end it turned out that there was a bug that caused any accessory that was put on a character and then later destroyed not to release all of the memory it was using. That chunk of memory was held for the entire lifetime of the character. The chunk was not very big, so it was hard to notice. Thanks to the scenario @brinkokevin provided, we were able to detect and fix the bug.