Hello! Last week we had an internal release, but we still have release notes to share!
InsertService.CreateMeshPartAsync
can now be used by regular scripts, as shown by @weakroblox35:
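A minimal sketch of what such a call might look like from an ordinary server Script, assuming the documented InsertService:CreateMeshPartAsync(meshId, collisionFidelity, renderFidelity) signature (the asset ID is a placeholder):

local InsertService = game:GetService("InsertService")

-- Placeholder mesh asset ID; substitute a real one.
local MESH_ID = "rbxassetid://0"

-- CreateMeshPartAsync yields, so wrap it in pcall in case the asset fails to load.
local ok, result = pcall(function()
	return InsertService:CreateMeshPartAsync(MESH_ID, Enum.CollisionFidelity.Box, Enum.RenderFidelity.Automatic)
end)

if ok then
	result.Parent = workspace
else
	warn("CreateMeshPartAsync failed:", result)
end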
However, when trying to use this function at runtime in either Studio or in-game, it says it cannot run yet. It doesn't matter whether it's run on the client or the server; it gives the same error.
Is the function just currently disabled or is there still engine limitations that will be addressed with a future beta or update?
It says Pending
in the patch notes; that's why it's not working yet, it hasn't been added yet.
Not bad updates! Great job, guys! When are we going to get a big one, though? Like the addition of some instances that the community has wanted for years now and yet we still don't have.
I don't know how to feel about this. What is the change intended to address? I wouldn't otherwise have anticipated this behavior of :Destroy(), had I not stumbled across this release note.
No clue if it's a known issue, but it started recently: turning my camera left and right has a sort of delay on the higher FPS settings with the new client. Is this on purpose or a bug?
If a video is needed, I am willing to provide one.
Originally, calling Destroy didn't clean up whatever the module returned(?) I suspect this change actually gives a chance for it to be cleaned up if what it returns is no longer in use.
For example, if a car model with modules under it gets destroyed, all of its returned stuff would still linger in memory(?) Now with this, there's a chance (which I presume means if the returned stuff isn't being used by other scripts) for it to be garbage collected.
The (?)'s are there because I can't remember where I got that info from, but I believe it to be true.
It's been my understanding that ModuleScript results have always been collected (and I can observably support this). How I interpret the change is that :Destroy() will now try to stop the ModuleScript from persisting its result. And so, despite keeping a reference to the instance, scripters will no longer be able to require the destroyed module if its result was not maintained elsewhere.
Why I assume this was changed is so that the cyclic references caused by a ModuleScript returning a table that references back to the ModuleScript instance itself can be broken. But when would this happen in practice?
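For illustration, here's a hedged guess at the kind of cycle being described, where the table a ModuleScript returns holds a reference back to the script instance itself:

-- Hypothetical ModuleScript illustrating the cycle described above.
local module = {}

-- Storing the ModuleScript instance in the returned table creates a cycle:
-- the engine keeps the result alive so future require() calls can reuse it,
-- and the result keeps the instance alive through this field.
module.owner = script

function module.getOwnerName()
	return module.owner.Name
end

return module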
627 was published before 626? Two in one day? Marvellous!
When will EditableMesh with collisions be available?
Developers often have issues with ModuleScript memory leaks, like in this topic: Memory leak: Modulescripts never Garbage Collected after Destroy()'d and all references are removed.
We receive other reports similar to that, and many people are actually surprised that Destroy
doesn't free memory.
In practice, not a lot of developers require destroyed modules, and those who do (for example by using HDAdmin) require them after calling Destroy, which this change doesn't address as it's not the common memory leak situation.
The common case is that any ModuleScript result containing any functions is never garbage collected, even if the ModuleScript is destroyed.
local module = {}
module.t = table.create(100000, 1) -- some big data
-- every function holds a reference to 'script' because getfenv can be called on it
function module.foo() return 1 end
return module
Since modules are often used to return functions, this is an issue for experiences cloning modules or just having modules in StarterPlayerScripts.
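A minimal sketch of that cloning pattern (the module name and count are placeholders); before this change, each cloned copy's result could linger in memory even after Destroy:

-- Server Script; assumes a ModuleScript named "BigDataModule" like the example above.
local template = script.Parent.BigDataModule

for i = 1, 50 do
	local copy = template:Clone()
	copy.Parent = workspace

	local result = require(copy) -- holds the big table plus module.foo

	-- ... use result briefly ...

	copy:Destroy()
	-- Without the change, this copy's result could stay alive even with no
	-- remaining references, because module.foo captures 'script'.
end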
This is substantially more problematic than Iâd hoped.
Why I suppose I was a bit put off is because integrating obscure edge-case treatments like these into :Destroy()
seems like more of a patch than a fix. We just can't always expect that instances leaving the world will even reliably be destroyed to begin with. So what we have is a growing number of design issues that can evidently only be solved by this ever-expanding generic cleanup method, which is only mostly called when it should be.
Even after this change goes through, one prominent exception would be fallen models. Currently, contrary to what the property "FallenPartsDestroyHeight" might suggest, fallen parts do not appear to be destroyed by the engine. This would imply that any physically simulated models containing ModuleScripts wouldn't be addressed at all by this measure.
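If that is indeed the case, one workaround sketch (my own assumption, not something from the release notes) would be to periodically sweep for models that have fallen below the configured height and destroy them explicitly, so the new cleanup actually runs on their ModuleScripts:

-- Server Script; DESTROY_HEIGHT should mirror the place's FallenPartsDestroyHeight setting.
local DESTROY_HEIGHT = -500

task.spawn(function()
	while true do
		task.wait(5)
		for _, model in workspace:GetChildren() do
			if model:IsA("Model") and model.PrimaryPart
				and model.PrimaryPart.Position.Y < DESTROY_HEIGHT then
				model:Destroy()
			end
		end
	end
end)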
Regarding InsertService.CreateMeshPartAsync, I just want to clarify whether or not there are any rate limits with this request. Would it be safe to use this method extensively in game?
TL;DR: Fire away, you're probably not going to hit any limits with sane usage patterns.
I'm sure there is some rate limit as far as the server requesting assets goes, but you'll almost certainly run into computational limits first, especially if you're using CollisionFidelity = precise, because it has to compute the collision geometry for each loaded mesh. It's an async API which does the work in the background, but the requests to generate collision geometry will start backing up and taking longer to return eventually if you try to load too many meshes all at once.
So if you're trying to load hundreds of meshes right at startup, you may want to reconsider that for performance reasons: if the meshes represent static geometry, it would be better to generate the MeshParts for them ahead of time rather than slowing down startup by dynamically generating all that collision geometry.
(IIRC the asset request limit is something in the many thousands of unique assets per minute)
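As a rough sketch of the "don't load everything at once" advice, one could batch the CreateMeshPartAsync calls and yield between batches so collision generation has time to catch up (the mesh list and batch size are placeholders):

local InsertService = game:GetService("InsertService")

local MESH_IDS = {} -- placeholder list of "rbxassetid://..." strings
local BATCH_SIZE = 8

local parts = {}
for i, meshId in MESH_IDS do
	local ok, part = pcall(function()
		return InsertService:CreateMeshPartAsync(meshId, Enum.CollisionFidelity.Box, Enum.RenderFidelity.Automatic)
	end)
	if ok then
		table.insert(parts, part)
	else
		warn("Failed to create MeshPart for", meshId, part)
	end
	-- Yield every BATCH_SIZE requests so collision generation can keep up.
	if i % BATCH_SIZE == 0 then
		task.wait()
	end
end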
I would be using CollisionFidelity = box.
My use case is actually a custom streaming system. In my game I have "props" which are duplicates of some reference object (i.e. a hundred copies of a tree point to a single reference tree); when I stream the world and a reference object is needed, I send over a serialized version of the tree. The only two things I can't serialize are MeshPart (can't set MeshId at runtime) and SurfaceAppearance (same thing).
So I would be sending this serialized data to the client, reconstructing the object client-side, and hopefully I'll be able to call InsertService.CreateMeshPartAsync on the client too.
Currently, I have this streaming stuff working but without serialization, and the bandwidth cost of sending the instances, as well as the client-side and server-side CPU cost of cloning the reference object to be sent (using the parent-to-PlayerGui hack for selective replication), are a big issue. By sending serialized data that's compressed to my needs, I can avoid both costs; on the client, I still have to clone the objects, but that can be amortized over several frames plus instance recycling.
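To make the reconstruction step concrete, here's a rough sketch under my own assumptions: the serialized field names are invented, and whether CreateMeshPartAsync is callable from a LocalScript is exactly the open question above:

-- LocalScript; 'propData' stands in for one deserialized reference object.
local InsertService = game:GetService("InsertService")

local function buildProp(propData)
	-- Assumed fields: meshId, size, cframe.
	local part = InsertService:CreateMeshPartAsync(
		propData.meshId,
		Enum.CollisionFidelity.Box,
		Enum.RenderFidelity.Automatic
	)
	part.Size = propData.size
	part.CFrame = propData.cframe
	part.Parent = workspace
	return part
end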
Ah, yeah. In that case you're good to go. You would likely run out of memory to store the loaded assets before hitting the raw asset request limit.