I made an effects module that I use in a lot of my games, and decided to release it to the public because why not. It’s not very optimized and a bit CPU-expensive, since I’m not that good at keeping that stuff low, but that doesn’t mean you can’t use it for emphasis at certain times.
Here’s an example of what it can do:
Oh and here’s the module too.
I have some API stuff listed on the inside of the script, you’ll want to read it for warnings and help, but yeah.
Take note it’s not finished, and again, not perfect, but it’s an optional thing so it’s up to you whether you want to use it or not. Though, if you do use it, credit would be appreciated.
Oh, and one final note. When using this module, be aware that I might accidentally break it from time to time, so use it with caution, lol.
Creating and destroying so many parts creates a lot of overhead.
It would be ideal if we could do this from an actual engine feature.
It does give a good sense of how this could look for certain styles, though.
Neat module regardless.
Not sure if you do this yet, but you appear to be using blocks that spawn/despawn over and over.
To optimize this, you could create a 1x1x1 part, put a BlockMesh inside it, and set the mesh’s Scale to your part’s intended size, which reduces the amount of processing being done. After that, you should recycle parts: when a part would be deleted, don’t actually destroy it; just resize it and put it back at the start of the animation cycle.
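A rough sketch of the BlockMesh trick (assuming a standard Roblox environment): the Part stays at 1x1x1 and only the mesh’s Scale changes, so the visual size animates without the engine recomputing the part’s physical size each frame.

```lua
-- Keep the Part itself at 1x1x1 and animate the BlockMesh's Scale instead.
local part = Instance.new("Part")
part.Size = Vector3.new(1, 1, 1)
part.Anchored = true
part.CanCollide = false

local mesh = Instance.new("BlockMesh")
mesh.Parent = part

-- "Resize" the particle by scaling the mesh; this is purely visual.
local function setParticleSize(size)
	mesh.Scale = size
end

setParticleSize(Vector3.new(3, 3, 3))
part.Parent = workspace
```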
Yes. Whenever you’ve got thousands of things, pooling should be the first thing you consider. Memory allocations and object initializations are some of the most expensive operations in most systems. If you know you will need X number of objects in a session, or if they need to appear all at once in a burst spawn, create them up front. If it’s not certain how many you will need, or whether the effect will be used at all, and it’s OK to generate them over some amount of time, it probably makes sense to create them lazily and add them to a table (the pool) for reuse.
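The lazy pool described above can be sketched in a few lines (function names here are hypothetical, not part of the module):

```lua
-- Minimal lazy object pool: parts are created only when the pool is
-- empty, and "released" parts go back into the pool instead of being
-- destroyed.
local pool = {}

local function acquireParticle()
	local part = table.remove(pool) -- reuse a pooled part if one exists
	if not part then
		part = Instance.new("Part") -- lazily create when the pool is empty
		part.Anchored = true
		part.CanCollide = false
	end
	return part
end

local function releaseParticle(part)
	part.Parent = nil        -- remove from the Workspace instead of :Destroy()
	table.insert(pool, part) -- keep it for the next animation cycle
end
```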
I’ve also found that resizing a part in Roblox has a lot of overhead. So does changing transparency (no transparency is the best, when possible). If you can make something work with a fixed part size, and fixed transparency, you will be able to have many more live instances.
Why do you need a toggle?
By local I don’t mean like using it for an individual client.
Just localizing frequently used functions and reducing table accesses to speed up function calls. The actual functionality of someone using the module shouldn’t change.
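For example, a classic Lua micro-optimization along those lines (a sketch, not taken from the module) is hoisting frequently used globals into locals, since a local is a register access while `math.random` costs two table lookups per call:

```lua
-- Hoist hot functions into locals once, outside the loop.
local random = math.random
local floor = math.floor

local samples = {}
for i = 1, 100000 do
	-- Inside the hot loop, the local aliases avoid repeated
	-- table indexing on math.* every iteration.
	samples[i] = floor(random() * 10)
end
```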
Ohhh, that makes a lot more sense! Thought you meant clientside because that’s what Patrickblox (the person I originally created the 3D particles for) was hoping for.
Will take a look at optimizations now (since I’m on Windows now) and see what I can do. Thanks!
Could you elaborate on that a little? I’ve never really heard of this class before and I’m confused about how to use it. I can’t seem to insert any of its inherited classes into the editor either.
Oddly enough, part recycling left the CPU cost practically the same, just with a wider range of about a percentage point in each direction. My minimized particle settings ranged from 0.7% to 2.5% instead of the previous 1.2% to 2%. The same happened with the regular example I provide in the module (default settings): it used to range from 4.7% to 6%, but now ranges from 4.3% to 7%.
If I make the cache large enough that it can never go empty and force another part creation, the variation range drops, but the average CPU cost is actually about 0.5% higher than before. I’m not sure whether I’m doing too much work to recycle the particles or whether it simply isn’t helping, but it’s definitely not improving performance.
Currently I’m storing the particles in a folder inside the 3D Emitter “class” (which is literally just another folder), meaning they do stay in the Workspace in case the emitter is disabled. But I still want to cache particles in case the emitter is only disabled temporarily.
Most likely, the setting of part properties and the actual rendering dominate the time to update. Even with little or no performance improvement, I would expect pooling and re-use to have some benefits in terms of memory usage, especially on lower-end machines and after the emitter has been going a while, but I’d need to look closely at how instances are allocated and cleaned up to say for sure.
It saved about 0.2% to 0.3% on the two individual particles I use, which is definitely noticeable, and probably even more on large groups of particles. I imagine it helped even more with the lightning generation, because that was really heavy on math.random. Thanks so much for the help!
This is especially relevant to network traffic only if you’re creating them on the server with the intention of replicating them (though you may still want to do that anyway for this use case, since you’re likely creating a bunch of particles).
From my experience, the savings from moving from Part instances to Adorns (in terms of local performance) dwarf any savings from Lua assignment order or other micro-optimizations.
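As a hedged sketch of what that switch looks like: a BoxHandleAdornment is render-only (no physics or collision work per particle), and it can be attached to a single invisible anchor part with its own CFrame offset. The setup below is an illustration, not the module’s actual code:

```lua
-- One invisible anchor part that adornments attach to.
local anchor = Instance.new("Part")
anchor.Anchored = true
anchor.CanCollide = false
anchor.Transparency = 1
anchor.Parent = workspace

-- A render-only "particle": no Part per particle, just an adornment.
local adorn = Instance.new("BoxHandleAdornment")
adorn.Adornee = anchor
adorn.Size = Vector3.new(2, 2, 2)
adorn.CFrame = CFrame.new(0, 5, 0) -- offset relative to the adornee
adorn.Color3 = Color3.fromRGB(255, 170, 0)
adorn.Parent = anchor
```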