Release Notes for 626

I don’t know how to feel about this. What is the change intended to address? I wouldn’t otherwise have anticipated this behavior of :Destroy() had I not stumbled across this release note.

8 Likes

No clue if it’s a known issue, but it started recently: turning my camera left and right has a sort of delay on the higher FPS settings with the new client. Is this on purpose, or a bug?

If a video is needed I am willing to provide.

6 Likes

Originally, calling Destroy didn’t clean up whatever the module returned(?) I suspect this change actually gives that returned value a chance to be cleaned up if it’s no longer in use.

For example, when a car model with modules under it gets destroyed, everything those modules returned would still linger in memory(?) Now with this change, there’s a chance (which I presume means if the returned values aren’t being used by other scripts) for them to be garbage collected.

The (?)'s are there because I can’t remember where I got that info from, but I believe it to be true.

7 Likes

It’s been my understanding that ModuleScript results have always been collected (and I can support this with observations). The way I interpret the change is that :Destroy() will now try to stop the ModuleScript from persisting its result. So, despite keeping a reference to the instance, scripters will no longer be able to require the destroyed module if its result was not kept alive elsewhere.
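
For concreteness, a minimal sketch of that scenario; the instance path is hypothetical, and the post-Destroy behavior described in the comments is my reading of the change, not confirmed behavior:

local moduleInstance = workspace.SomeModule -- hypothetical ModuleScript
local result = require(moduleInstance)      -- cached while the module persists its result

moduleInstance:Destroy()
result = nil -- drop the last external reference to the returned table

-- Under the interpretation above, the module no longer persists its cached result,
-- so a later require(moduleInstance) can't count on getting the old table back.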

5 Likes

My assumption is that this was changed so that the cyclic references caused by a ModuleScript returning a table that references the ModuleScript instance itself can be broken. But when would this happen in practice?
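
A minimal sketch of the kind of module that would create such a cycle, assuming the returned table stores the instance for later use (all names here are hypothetical):

-- Hypothetical ModuleScript placed under a car model
local Car = {}

Car.Instance = script        -- returned table -> ModuleScript instance
Car.Model = script.Parent    -- e.g. the car model this module lives under

function Car.honk()
	print("Beep from", Car.Model and Car.Model.Name or "nowhere")
end

return Car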

4 Likes

627 was published before 626? Two in one day? Marvellous!

6 Likes

When will EditableMesh support collisions?

5 Likes

Developers often have issues with ModuleScript memory leaks, like in this topic: Memory leak: Modulescripts never Garbage Collected after Destroy()'d and all references are removed.
We receive other reports similar to that, and many people are genuinely surprised that Destroy doesn’t free memory.

In practice, not a lot of developers require destroyed modules, and those who do (for example, via HDAdmin) require them after calling Destroy, which this change doesn’t address since that’s not the common memory leak situation.

The common case is that any ModuleScript result containing functions is never garbage collected, even if the ModuleScript is destroyed.

local module = {}

module.t = table.create(100000, 1) -- some big data

-- every function holds a reference to 'script' because getfenv can be called on it
function module.foo() return 1 end

return module

Since modules are often used to return functions, this is an issue for experiences cloning modules or just having modules in StarterPlayerScripts.
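
As a hedged illustration of the cloning pattern mentioned above (the template name is hypothetical): before this change, each destroyed clone’s returned table, functions, and data stayed alive; after it, they become eligible for collection once nothing else references them.

local template = script.ModuleTemplate -- hypothetical ModuleScript like the one above

for i = 1, 50 do
	local clone = template:Clone()
	clone.Parent = workspace
	local result = require(clone) -- each clone gets its own cached result
	result.foo()
	clone:Destroy() -- previously this never freed result (or result.t)
end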

11 Likes

This is substantially more problematic than I’d hoped.

The reason I was a bit put off is that integrating obscure edge-case treatments like these into :Destroy() seems like more of a patch than a fix. We can’t always expect that instances leaving the world will even reliably be destroyed to begin with. So what we have is a growing number of design issues that can evidently only be solved by this ever-expanding generic cleanup method, which is only mostly called when it should be.

Even after this change goes through, one prominent exception would be fallen models. Currently, contrary to what the property ‘FallenPartsDestroyHeight’ might suggest, fallen parts do not appear to be destroyed by the engine. This would imply that physically simulated models containing ModuleScripts wouldn’t be addressed by this measure at all.
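
For anyone wanting to close that particular gap today, a hedged workaround sketch: explicitly destroy tracked models once they fall below the configured height, so their ModuleScripts actually go through the :Destroy() cleanup. The ‘TrackedModels’ folder is hypothetical.

local RunService = game:GetService("RunService")

local destroyHeight = workspace.FallenPartsDestroyHeight

RunService.Heartbeat:Connect(function()
	for _, model in workspace.TrackedModels:GetChildren() do
		if model:GetPivot().Position.Y < destroyHeight then
			model:Destroy() -- ensure the engine's Destroy cleanup actually runs
		end
	end
end)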

4 Likes

Regarding InsertService.CreateMeshPartAsync, I just want to clarify whether there are any rate limits on this request. Would it be safe to use this method extensively in a game?

3 Likes

TL;DR: Fire away, you’re probably not going to hit any limits with sane usage patterns.

I’m sure there is some rate limit on the server requesting assets, but you’ll almost certainly run into computational limits first, especially if you’re using CollisionFidelity = Precise, because it has to compute the collision geometry for each loaded mesh. It’s an async API that does the work in the background, but if you try to load too many meshes all at once, the requests to generate collision geometry will eventually start backing up and taking longer to return.

So if you’re trying to load hundreds of meshes right at startup, you may want to reconsider that for performance reasons: if the meshes represent static geometry, it would be better to generate the MeshParts for them ahead of time rather than slowing down startup by dynamically generating all that collision geometry.

(IIRC the asset request limit is something in the many thousands of unique assets per minute)
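
For reference, a hedged sketch of a basic call using the cheaper collision option discussed here; the asset id is a placeholder, and only the options relevant to this thread are shown:

local InsertService = game:GetService("InsertService")

local ok, meshPart = pcall(function()
	return InsertService:CreateMeshPartAsync("rbxassetid://0000000", {
		CollisionFidelity = Enum.CollisionFidelity.Box,
	})
end)

if ok then
	meshPart.Anchored = true
	meshPart.Parent = workspace
else
	warn("CreateMeshPartAsync failed:", meshPart)
end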

9 Likes

I would be using CollisionFidelity = box.

My use case is actually a custom streaming system. In my game I have “props” which are duplicates of some reference object (e.g. a hundred copies of a tree point to a single reference tree); when I stream the world and a reference object is needed, I send over a serialized version of the tree. The only two things I can’t serialize are MeshPart (can’t set MeshId at runtime) and SurfaceAppearance (same issue).

So I would be sending this serialized data to the client, reconstructing the object client-side, and hopefully I’ll be able to call InsertService.CreateMeshPartAsync on the client too.

Currently, I have this streaming working, but without serialization, and the bandwidth cost of sending the instances, as well as the client-side and server-side CPU cost of cloning the reference object to be sent (using the parent-to-PlayerGui hack for selective replication), is a big issue. By sending serialized data that’s compressed to my needs, I can avoid both costs; on the client I still have to clone the objects, but that can be amortized over several frames plus instance recycling.
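
A hedged sketch of what the client-side reconstruction step could look like; the field names and serialization format are hypothetical, and only the MeshPart half of the problem is shown:

local InsertService = game:GetService("InsertService")

local function buildProp(data)
	-- data is assumed to look like { MeshId = "rbxassetid://...", Size = Vector3, CFrame = CFrame }
	local part = InsertService:CreateMeshPartAsync(data.MeshId, {
		CollisionFidelity = Enum.CollisionFidelity.Box,
	})
	part.Size = data.Size
	part.CFrame = data.CFrame
	part.Anchored = true
	part.CanCollide = false
	part.CanQuery = false
	part.CanTouch = false
	part.Parent = workspace
	return part
end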

6 Likes

Ah, yeah. In that case you’re good to go. You would likely run out of memory to store the loaded assets before hitting the raw asset request limit.

8 Likes

@tnavarts Lately, ever since this update, I have been having a weird issue on macOS.

When I play dusty trip, after I reach around 13000m or so, everything just bugs out and becomes fully black. You would think this is a dusty trip problem, but my entire client freezes and no longer responds, and the app doesn’t even close; I just have to force quit it.

This has happened to me around 4 times now, and it has happened every single time I reach around 10000m or so. It seems to have an issue when I get far away from spawn or something???

And how on earth am I even supposed to debug this and provide a client log to really figure out what the issue is? I have already tried fully deleting and reinstalling the Roblox app.

4 Likes

I am very happy to hear that module scripts will be garbage collected now. Thank you team, this will help A LOT.

(image from 2 months ago, from a server that had been running for ~10 hours)

3 Likes

This isn’t quite the takeaway. I should clarify that ModuleScripts were always garbage collected. Unfortunately, there’s an edge case in which unsolvable cyclic references are formed by modules that return a reference to themselves. While this is dangerous by itself, it generally wouldn’t be too problematic if not for the major oversight that any functions the module returns hold a reference back to the module instance, due to the script variable being present in the global environment.

A more generally effective solution would be to simply undefine this variable. By doing this, and taking care not to reference the module instance anywhere it might (directly or indirectly) be caught in its return value, you’d effectively avoid this risk of memory leakage entirely, regardless of whether the module is ever destroyed.
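
A hedged sketch of that suggestion, assuming nothing in the module actually needs the script global after it loads (this is a community workaround, not an official recommendation):

-- At the very top of the ModuleScript:
script = nil -- remove 'script' from this module's environment so its functions no longer pin the instance

local module = {}

module.t = table.create(100000, 1) -- same big data as in the earlier example

function module.foo()
	return 1
end

return module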

The change covered in this release note is really at most a patch, which will only address the issue in most cases: those where Destroy will reliably be called after the module is required, and where no other references to the result persist at that time. Even if scripters plan to rely on this behavior, they should still be aware of the potential pitfall.

8 Likes

I’ve been fiddling around with CreateMeshPartAsync from the client in a live game (by enabling the fflag), and I’ve noticed that a CreateMeshPartAsync call is quite expensive (give or take an order of magnitude more expensive than creating Instances or cloning and parenting existing MeshParts).

Is this something that could be optimized, or is it just the consequence of having to parse the received mesh data? My meshes are using CollisionFidelity.Box and RenderFidelity.Precise (and CanCollide, CanQuery, CanTouch are set to false and Anchored is set to true…)
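
In case it helps reproduce, a hedged sketch of the kind of comparison I’m describing; the asset id is a placeholder, the numbers obviously vary by mesh and device, and since the first call yields this measures wall time rather than pure CPU cost:

local InsertService = game:GetService("InsertService")

local t0 = os.clock()
local meshPart = InsertService:CreateMeshPartAsync("rbxassetid://0000000", {
	CollisionFidelity = Enum.CollisionFidelity.Box,
	RenderFidelity = Enum.RenderFidelity.Precise,
})
print(("CreateMeshPartAsync took %.3f ms"):format((os.clock() - t0) * 1000))

local t1 = os.clock()
local clone = meshPart:Clone()
clone.Parent = workspace
print(("Clone + parent took %.3f ms"):format((os.clock() - t1) * 1000))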

2 Likes

It is a little bit surprising that it has that much synchronous work to do; I wouldn’t have expected that much of a difference. It is doing the really expensive work on a background thread, otherwise you would see actual frame drops. I’ll see if I can profile it when I have time.

1 Like

I like the updates here, but I don’t see mouse side buttons as inputs :sob:

1 Like

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.