This line of code is really breaking my computer; make sure to use Task Manager to close Studio or it will show a purple screen randomly lol
I'm sure it will be fine, I'll just reboot my computer

This is the biggest update since haptic feedback; Roblox Studio is finally becoming bigger and better.
Looks really cool but please please please backport this (and other things such as directional emission) to the legacy Sound instance. I find it frustrating to have to use at least three instances when just one could do the trick.
We need this for the old Sound instance; the new Audio API is too complicated, and it takes just too much time to play one sound (see the sketch below).
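For comparison, here's roughly what it takes to play one positional sound with each approach. This is a simplified sketch with a placeholder asset id, and the new API additionally needs an AudioListener wired to an AudioDeviceOutput somewhere to actually be audible:

```lua
-- Legacy: one Sound instance does everything
local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0" -- placeholder asset id
sound.Parent = workspace.Part
sound:Play()

-- New Audio API: player + emitter + wire just on the emitting side
local player = Instance.new("AudioPlayer")
player.Asset = "rbxassetid://0" -- placeholder asset id
player.Parent = workspace.Part

local emitter = Instance.new("AudioEmitter")
emitter.Parent = workspace.Part

local wire = Instance.new("Wire")
wire.SourceInstance = player
wire.TargetInstance = emitter
wire.Parent = emitter

player:Play()
```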
Wow, that's a great update! Can't wait to try it. Huge respect to the people who made this masterpiece possible 💪
Oof; does this happen in any placefile on your end? Haven’t seen this behavior, but we definitely want to get that fixed
If you have an rbxl that chronically encounters the freeze, please send it our way
OK, this is peak, but can y'all please enable AudioAnalyzer:GetSpectrum on AudioDeviceInput-based streams, or at least clarify why it's disabled?
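For reference, something like the sketch below is presumably the use case in question; it assumes GetSpectrum() were permitted on an analyzer fed by an AudioDeviceInput (the local player's microphone is used here as a placeholder source):

```lua
-- Hypothetical sketch: analyzing a voice/device-input stream.
-- GetSpectrum() is currently disabled for streams that originate from an AudioDeviceInput.
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local micInput = Instance.new("AudioDeviceInput")
micInput.Player = Players.LocalPlayer -- capture the local player's voice
micInput.Parent = workspace

local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = workspace

local wire = Instance.new("Wire")
wire.SourceInstance = micInput
wire.TargetInstance = analyzer
wire.Parent = analyzer

RunService.Heartbeat:Connect(function()
	local spectrum = analyzer:GetSpectrum() -- array of frequency-band magnitudes
	-- drive visualizers, lip sync, etc. from `spectrum`
end)
```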
This only seems to be happening in one place (a testing place for a gun system). What might be causing it is that a ton of AudioEmitters are created and then destroyed after a couple-second-long sound plays (hit sounds, shoot sounds), roughly like the pattern sketched below.
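In case it helps with reproducing it, the pattern is roughly this (a minimal sketch with a placeholder asset id, not the actual gun code):

```lua
-- Short-lived emitters created per hit/shot, then torn down a couple of seconds later
local function playHitSound(part: BasePart)
	local player = Instance.new("AudioPlayer")
	player.Asset = "rbxassetid://0" -- placeholder hit sound
	player.Parent = part

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = part

	local wire = Instance.new("Wire")
	wire.SourceInstance = player
	wire.TargetInstance = emitter
	wire.Parent = emitter

	player:Play()

	-- destroy everything shortly after the sound finishes
	task.delay(2, function()
		wire:Destroy()
		emitter:Destroy()
		player:Destroy()
	end)
end
```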
Makes me want to switch from Sound to Audio… it's too close to what I just heard from triple-A games.
Anyway, unrelated to this update, but how performant is Audio actually? My game clones Sounds a lot for gun firing, especially for player weapons, because I've been getting reports that sometimes a Sound doesn't play at the beginning or just cuts out after being repeated a number of times. Roughly the kind of thing I mean is sketched below.
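Simplified sketch of that clone-per-shot pattern (placeholder template Sound under the tool handle, not the actual code):

```lua
-- Clone-per-shot pattern: each shot clones a template Sound and plays it
local Debris = game:GetService("Debris")
local template = script.Parent.Handle.FireSound -- assumed template Sound

local function fire()
	local shot = template:Clone()
	shot.Parent = template.Parent
	shot:Play()
	Debris:AddItem(shot, shot.TimeLength + 0.5) -- clean up after it finishes
end
```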
Audio is quite a bit more performant than Sound (plus you can have as many of them playing as your CPU can handle, unlike Sound, which caps out somewhere past the 300 mark, I believe?)
Been playing around with this. It's cool, though I don't think the current state of acoustic simulation is exactly amazing. I sense potential here, but I'm not sure how much of it will be realized, given Roblox's current tendency to place major restrictions on developers alongside major limitations in the tech itself that make it barely usable in anything but the most basic cases imaginable. Even in those cases the cut corners are blatantly obvious.
Sadly, right now it's insanely easy to break the "illusion" of relatively accurate acoustic simulation. It's something, I guess, but right now the only "good enough" scenario is an audio emitter that stays permanently behind a wall. The moment the player can interact with that audio beyond walking past a single wall, the whole simulation falls apart.
Secondly, the performance. 4 audio emitters are enough to cost my 14700K around 4 ms on all 8 threads Studio offers. I'm really not sure how much I could do with acoustic simulation when its overhead is this high on such a CPU. I believe I'd be pushing my luck even having one simulated sound on mobile, let alone multiple. I'm really not sure if I could even use simulated audio at all with this sort of overhead… Perhaps fine-tuning our maps to squeeze a bit more performance out of this could help, but eh, most Roblox maps are not that dense in instances, and ESPECIALLY not dense in instances around sound sources.
So with all of this, here are a couple ideas. I hope they are not entirely irrelevant to the people working on this.
Expand the simulation fidelity options. Instead of only having two or three "low" and "high" fidelity presets, there should be more advanced options that let us fine-tune most parameters. That would let developers compromise on the qualities and features of an audio source they don't actually need, without sacrificing the ones they DO need. Perhaps also add options for update rates; I can see scenarios where recalculating audio sources every frame isn't really necessary.
Add acoustic simulation baking. Most Roblox maps are static, and so are their audio sources. Yes, you can technically already approximate this with a couple of audio effects (roughly like the sketch below), but I believe real acoustic simulation would end up more accurate overall, no? Correct me if I'm wrong. Perhaps baking could offer us much higher audio fidelity options?
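By "a couple audio effects" I mean something roughly like this sketch: statically wiring an effect between a player and its emitter to fake a fixed acoustic environment (placeholder asset id and part; reverb settings left at defaults):

```lua
-- Faking a static acoustic environment with a wired effect chain: player -> reverb -> emitter
local part = workspace.CavePart -- assumed part

local player = Instance.new("AudioPlayer")
player.Asset = "rbxassetid://0" -- placeholder ambience
player.Looping = true
player.Parent = part

local reverb = Instance.new("AudioReverb") -- tune its properties to match the space
reverb.Parent = part

local emitter = Instance.new("AudioEmitter")
emitter.Parent = part

local wireA = Instance.new("Wire")
wireA.SourceInstance = player
wireA.TargetInstance = reverb
wireA.Parent = reverb

local wireB = Instance.new("Wire")
wireB.SourceInstance = reverb
wireB.TargetInstance = emitter
wireB.Parent = emitter

player:Play()
```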
4 audio emitters are enough to cost my 14700K around 4 ms on all 8 threads Studio offers
We are running acoustic simulation on background threads, opportunistically
These chunks of ultra-wide parallel computation happen pretty much regardless of the number of emitters, and shouldn’t affect your framerate; e.g. 35 emitters take roughly the same amount of time as a small handful
If you are finding that this does hit your framerate, then that’s a bug we want to address
Kinda hard to test in Studio considering the massive overhead, but here's what I gathered.
I see. Personally I'm seeing that higher numbers of emitters do add up. Here are a couple of Microprofiler logs showcasing that.
0 Audio Emitters:
microprofile-20250502-150625.html (3.1 MB)
1 Audio Emitter:
microprofile-20250502-150803.html (5.3 MB)
2 Audio Emitters:
microprofile-20250502-151017.html (6.0 MB)
4 Audio Emitters:
microprofile-20250502-151131.html (7.8 MB)
And yes, my framerate is actually affected too. One simulated audio emitter is capable of dropping my framerate from around 200 (the most I can hit in that village map in Studio) to around 150-160.
More emitters make the performance problem worse, of course. Furthermore, fast camera movements near emitters can cause large performance swings that drop my framerate to below 100. This is, again, with just one emitter. Here's a Microprofiler log of that:
microprofile-20250502-151539.html (4.4 MB)
Something else I have noticed is that acoustic simulation runs even if no audio is playing. There just has to be an AudioEmitter instance capable of doing those simulations.
Personally I'm seeing that higher numbers of emitters do add up
Yes, of course more emitters means more computation – but the algorithm is supposed to cap out and start deferring some work (so far-away emitters start getting less accurate, or more latent – instead of hitting your framerate)
Thanks for attaching these microprofiles! We’ll dig into it
Ya ya! Though I'm not sure about the "less accurate when farther away" part. Personally, I've noticed a pretty hard quality shift when far enough away: the simulation seems to just stop once your camera is far enough from an audio source, regardless of how clearly you can hear that audio…
Also, even with proper fixes to the current performance issues, I'm not sure how much you could get out of them on mobile… Considering the standard client is capped at 3 threads, and mobile devices are worlds weaker than any desktop CPU, it's hard to imagine something like this functioning properly without MAJOR drawbacks that would make acoustic simulation as good as disabled.
Ok yikes; just found out one of our performance-improvement flags got undone
Getting that re-enabled now, should improve things
I mean, so far the performance is exactly the same. Do these FFlags take time to apply?
Yes, possibly – they also only kick in after restarting Studio
Please do consider this; not every game has a super dynamic, constantly changing world.
So like, what about the regular Sound instances though? Are they just left in the dust now for all these new features?