Interesting updates across the board today. Future is looking bright!
I’m still holding out hope that we’ll get that skybox overhaul soon plsplsplsplspls ROBLOX, you would conjure so much goodwill with us devs if you did that
I’m curious what kind of overhead this all brings. Something like this certainly can’t be cheap considering the pathfinding aspect of it.
What sort of distance limitations does it have?
We’ve tried to make the algorithm minimally intrusive – there aren’t any limitations on diffraction/occlusion distance, but you can expect lower-quality pathing from far-away emitters.
On the other hand, if you use custom DistanceAttenuation or AngleAttenuation curves that shorten the distance at which AudioEmitters are audible… you’d be doing us (as in, the engine) a solid.
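For anyone wondering what that looks like in practice, here’s a minimal sketch, assuming the beta’s AudioEmitter:SetDistanceAttenuation method (the part name and distance values are made up):

```lua
-- Shorten an emitter's audible range so the simulation can skip
-- pathing for listeners beyond it entirely.
local emitter = workspace.Radio:FindFirstChildOfClass("AudioEmitter") -- hypothetical part

-- The curve maps distance (studs) -> volume multiplier; reaching 0 at
-- 50 studs means this emitter never needs paths longer than that.
emitter:SetDistanceAttenuation({
	[0] = 1,
	[30] = 0.5,
	[50] = 0,
})
```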
Since quality will undoubtedly decrease with scale, and some people don’t need real-time simulation, are there any plans for tools that can bake simulation paths for a scene rather than doing it all at runtime? Many people might be using this on static environments and could benefit greatly from that.
My system for this does the same thing with pathfinding around geometry, but since it’s a static environment and all the paths are pre-baked, there’s basically no limit on quality as scale increases besides memory. I kinda feel like I’ll have to keep using my system since it’s pre-baked and will run better because of that. It also means I can use multiple paths and blend them to get the final diffracted audio position, so there aren’t sudden drastic shifts from small camera movements (as outlined in the limitations in the post).
I think this is really neat and will probably be well within performance budgets with the right scenes and control, but a toolkit to precompute things for people who don’t need real-time simulation seems like it’d make this way more accessible for a variety of devices and scenes.
Please introduce a way to make it more diverse and usable, like with PlayerControls where you can easily change keybinds.
It’s getting tiresome to wait for a sound to load in HumanoidRootPart just to destroy it. For example, when making a custom footstep system, you have to destroy the default Running sound before doing anything, otherwise it’ll overlap.
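For context, the workaround being described looks roughly like this in a LocalScript (the sound names are the character defaults):

```lua
-- Remove the default "Running" sound on spawn so a custom footstep
-- system doesn't overlap with it.
local Players = game:GetService("Players")

local function stripDefaultFootsteps(character)
	local root = character:WaitForChild("HumanoidRootPart")
	-- Default character sounds (Running, Jumping, ...) live under
	-- HumanoidRootPart; wait for Running to exist, then destroy it.
	root:WaitForChild("Running"):Destroy()
end

Players.LocalPlayer.CharacterAdded:Connect(stripDefaultFootsteps)
if Players.LocalPlayer.Character then
	stripDefaultFootsteps(Players.LocalPlayer.Character)
end
```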
We do cache some information about the world, but we don’t want you to have to tag Parts as “static” vs. “dynamic” in order to get solid performance.
The best-case scenario (for us) would be that none of the parts or terrain with AudioCanCollide = true ever move, nor do any of the emitters or listeners – in that case, our caches can fill to the brim with useful info (quick sketch after this reply).
I think we’d be open to adding opt-in precomputing, but we’d have to discuss!
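If you want to lean into that advice, here’s a small sketch: exclude geometry you know will move from the acoustic world so the caches stay warm. The “Moving” tag is hypothetical; AudioCanCollide is the property mentioned above.

```lua
-- Turn off AudioCanCollide on parts that are expected to move, so the
-- engine's world caches aren't repeatedly invalidated by them.
local CollectionService = game:GetService("CollectionService")

for _, instance in CollectionService:GetTagged("Moving") do
	if instance:IsA("BasePart") then
		instance.AudioCanCollide = false
	end
end
```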
This is awesome to see! I’d love it if we could get an ‘under the hood’ look at how this works. I currently have a game with this type of audio using rays, and it would be helpful to see whether it would be a good idea to switch over to this beta in the future.
Some specific questions I have:
Does this technology use raycasts? If so, is the acoustic simulation done listener → source or source → listener?
What opportunities will there be for configuring how it calculates sound? I imagine sound would behave differently in interesting environments, like on an alien planet for example; or if your game has underwater sections (maybe not using terrain water) and you want the audio to react accordingly.
What are the current performance impacts when using high-fidelity sound simulation?
We have tested extensively in stone castles and marble caves. I’m slightly claustrophobic at this point.
One caveat is that there are distance limits. We don’t currently support echoes from the Grand Canyon, but I’d expect that if you’re using predefined materials in environments like the one shown, things should work pretty well. Let us know when you give it a shot!
And if you’re using custom materials, make sure you adjust the absorption of those materials!
Hey, could we get a way to keep physics and sound settings separate?
I modify part physical properties to achieve a very specific effect with the physics engine, and I wouldn’t want that to affect the sound.
It’s extremely based to add acoustics without scripting. I haven’t touched the beta feature yet, but do visualizations exist for it, or will they exist in the future (much like light sources having visualizations)?
What about the option of us developers limiting who can play our game based on their specs? I want to be able to say, if you can’t run Future Lighting, then you can’t play my game. I feel developers should have that choice, that say.
It seems Roblox wants to “try all other options before turning the feature off” instead of just letting us, the developers, say we don’t want it off. Signals aren’t always the answer. I feel that Roblox doesn’t see that in really any implementation of its game engine features.
This is awesome. Sounds that adapt (are heard differently) based on where they are and the environment are immersive, and I think that’s an underappreciated aspect of games.
Will there be any other sound-related features in the future?
I agree, it’s tedious having to learn how to play sounds with the audio API thing. It’s pretty weird having to use multiple instances and wire them together (see the sketch below this post)… and apparently you need a script to trigger :Play()?? It’s needlessly complicated (I genuinely can’t find a tutorial that just shows how to play a single sound with no scripts). The Sound instance just being something you could plop in and slap an ID into worked just fine.
This actual update is pretty exciting though. Roblox’s audio was long overdue for a quality update; games always used to sound like a Roblox game in the same way that they used to look like a Roblox game, if that makes any sense.
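For reference, the “multiple instances, wired together” setup being described looks roughly like this (a sketch; the part name and asset id are placeholders):

```lua
-- Minimal playback with the new Audio API: an AudioPlayer produces a
-- stream, a Wire routes it, and an AudioEmitter plays it in the world.
local part = workspace:WaitForChild("Speaker") -- hypothetical part

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder id
player.Parent = part

local emitter = Instance.new("AudioEmitter")
emitter.Parent = part

local wire = Instance.new("Wire")
wire.SourceInstance = player -- the player's output...
wire.TargetInstance = emitter -- ...feeds into the emitter
wire.Parent = part

player:Play() -- and yes, this still takes a script
```

(You’d also need an AudioListener wired to an AudioDeviceOutput on the client to actually hear it, which is part of the complaint.)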
Hey @C_Corpze – part of the reason we’re doing a Studio-only beta first is to collect feedback like this – we’re under no impression that the API is final, and we want to hear exactly what’s wrong with it, so that we have something dependable when it exits beta.