Protected calls (pcall) on ROBLOX cause a memory leak, leading to eventual memory exhaustion.
Repro code:
local X = function() end
for i = 1, 1000000 do
	pcall(X)
end
This leads to excessive memory allocation with no garbage collection.
Doing this in Roblox Studio:
for i = 1, 1_000_000_000_000 do
	pcall(function() end)
end
caused my memory usage to go to ~9 GB.
So yes, I too vouch for pcall leaking memory
Using a slightly modified version (a continuous loop), my memory usage kept climbing until it genuinely couldn't anymore.
This needs to be fixed ASAP; my computer is crying and can't be put through this again.
That would explain my laptop running terribly when I use like 2-3 pcalls in a RenderStepped …
Curious, the repro in OP doesn’t appear to consume my memory significantly (~700MB). However, @TacoBellSaucePackets’s repro does consistently consume ~2GB on my laptop. I guess I’m fortunate in the sense that this hasn’t crashed my Studio even once so far.
Oh god. I use a solid amount of them for security and safety purposes in my work.
I haven't reached the point of severe or sudden consumption, but it's slowly eating memory, I suppose.
Edit:
Clarified by zeuxcg in a later reply. This bug shouldn’t affect regular usage, since it really only hits when you use thousands in a loop like this.
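For scale, a handful of protected calls per frame is nothing like the repro: at ~60 fps that is a few hundred calls per second, versus a burst of a million allocations at once. A minimal sketch of the ordinary frame-bound pattern (the per-frame work here is a hypothetical placeholder):
local RunService = game:GetService("RunService")

RunService.RenderStepped:Connect(function(deltaTime)
	-- Two or three protected calls per frame is roughly 120-180 calls
	-- per second at 60 fps, orders of magnitude below the repro's burst.
	for i = 1, 3 do
		pcall(function()
			-- hypothetical per-frame work goes here
		end)
	end
end)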
Yeah please fix this
I was able to reproduce this on a Mac using the latest macOS and Studio. Using Taco's repro, my Studio gradually climbed to 100 GB and then finally crashed.
Send help
I ended up using so much memory (I memed it a lot more, cc. @Ultimate_Table) that my PC's video output died. Not epic.
Just a note before people start freaking out - this is not quite what it sounds like. It's not exactly a memory leak; I think we're hitting the behavior of incremental GC that we tuned to be a bit too incremental?
Try doing this:
- for i=1,1_000_000 do pcall(function() end) end - watch the Lua heap go up to 1.5 GB and stay there
- for i=1,100_000 do pcall(function() end) end multiple times - wait a few seconds after each attempt. After a few tries you should see the Lua heap start steadily and quickly going down.
You can use the dev console or, preferably, the microprofiler "Counters" view (Ctrl+F6 => choose Mode->Counters in the profiler menu).
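If you'd rather watch the heap from a script than the profiler, a sketch like this should work (assuming gcinfo(), which reports the Lua heap size in kilobytes, is available in your context):
-- Trigger a burst of protected calls, then poll the heap to
-- watch the incremental GC catch up over time.
for i = 1, 100_000 do
	pcall(function() end)
end

for step = 1, 30 do
	print(("Lua heap: %.1f MB"):format(gcinfo() / 1024))
	task.wait(1)
end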
So I strongly suspect that the culprit here isn't pcall per se - it's that the code from the repro triggers an instantaneous spike where we allocate a lot of objects, possibly related to these objects being coroutines and/or specifically coroutines allocated through pcall. As a result, due to some heuristics in how our incremental GC works right now, we end up not deallocating this memory immediately.
This definitely is a bug in that it shouldn't happen, but it also shouldn't come up in normal use and is somewhat specific to how the repro is set up.
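One way to probe that hypothesis is to compare the same burst of work with and without per-call object allocation. This is a diagnostic sketch under the assumption that gcinfo() readings roughly track the spike, not an official test:
local N = 1_000_000
local X = function() end

-- Plain calls: no per-iteration object allocation expected.
for i = 1, N do
	X()
end
print("heap after plain calls:", gcinfo(), "KB")

-- Protected calls: the repro's pattern.
for i = 1, N do
	pcall(X)
end
print("heap after pcall burst:", gcinfo(), "KB")

-- Bare coroutines, to see whether coroutine allocation alone
-- produces a similar spike.
for i = 1, N do
	coroutine.create(X)
end
print("heap after coroutine burst:", gcinfo(), "KB")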
Well, it's good that it's not as bad as everyone thought, haha.
(I was determined to push my Roblox Studio memory usage as high as I could with this, and the results are not very pretty, as you can see. My PC did not like it.)
Hope this gets fixed soon
Shouldn't this bug also call into question why Studio is allowed to use that much memory by default? I think it would be wiser to cap the memory usage of Lua under Studio unless otherwise changed in the Lua tab of the Studio settings.
Why wouldn't it be allowed? A program uses as much memory as it needs based on your workflow.
Limiting RAM usage does almost nothing and would only hide memory usage issues.
I don’t think Studio needs 100 GB of memory
It doesn't, but it doesn't warrant putting in the development time to cap it when, practically, it would never reach those levels unless there was some actual issue. (In which case the lack of a cap serves to make sure nothing is behaving weirdly.) It's not really standard practice to worry about RAM usage like that, because the program is only going to use what it needs.
The only time I have seen caps is in production workloads where it actually makes sense, when usage can scale practically infinitely (e.g. loading a video into memory for editing).
In this case a cap solves nothing and is virtually useless. It's a roundabout solution to a deeper problem. If you feel it's higher than it should be, report it.
If my workflow needs 100 GB of memory, then I will want to use 100 GB for Studio. My primary system is a Threadripper 2990WX machine with 256 GB of internal memory. Don't impose useless limits; it's not warranted here in a bug report.
This issue directly affects me and others. My workflow has been directly affected in production; I make a large number of scripts and objects, so having this affect me so drastically is very annoying, and I hope this can be resolved ASAP.
This specific issue only affects users during high-intensity memory allocation with pcalls. As zeuxcg stated, it is due to how ROBLOX has currently tuned their incremental GC. If your workflow is affected by this issue, I would do a code review, as your game really shouldn't be calling thousands of pcalls in a loop. If your 100 GB+ super workflow really is having a memory leak, I would suspect it to be a different memory leak that is specific to massive operations.
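If a workflow genuinely does need a very large number of protected calls, one mitigation while the GC tuning is addressed is to yield periodically so the incremental collector gets a chance to reclaim the short-lived objects. A hedged sketch (the batch size of 1,000 is an arbitrary choice):
local BATCH = 1_000

for i = 1, 1_000_000 do
	pcall(function() end)
	-- Yield every BATCH iterations so the incremental GC can run
	-- between bursts instead of facing one instantaneous spike.
	if i % BATCH == 0 then
		task.wait()
	end
end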
That's not what I meant; I mentioned being able to change/disable the cap. I know people who do rendering, like @ScriptOn, may need lots of memory to perform tasks, but bugs/memory leaks that cause Studio to crash, or cause one's PC to lag due to an unwarranted memory hog, would be a good instance of where a cap would be useful for users who don't want to lose work due to Studio or their PC crashing.
What do you think the memory cap is going to do? Cause a crash earlier than before.
Just dropping Lua operations is really bad behavior and would require significant time investment for something that's just not worth it. This is not how you handle memory issues.