Dynamic audio reverb / environmental acoustics sampler system

I’ve been hard at work for a few days implementing a system that allows sounds within the game to change their properties based on their physical surroundings. I previously designed a system like this for my earlier game Vampire Hunters 3, but that one required a ton of manual labour to set up the reverb factor for each area of every in-game map and was not truly ‘dynamic’.

This new system makes use of the recently released Audio API, allowing for more intricate control of audio within Roblox. I’m aware that it was mentioned recently on the public API release thread that Roblox engineers already have plans for this as an official feature, but I can’t count on it shipping for at least another year.


The setup

I have a data set of materials to reference, where each material has attributes that control how audio is affected by it: hardness, absorption and reflectance.
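As a rough illustration, the data set boils down to something like this (the values here are placeholders, not my final numbers):

```lua
-- Rough shape of the material acoustics data (placeholder values)
local MaterialAcoustics = {
	Default  = { Hardness = 0.5,  Absorption = 0.5,  Reflectance = 0.5  },
	Concrete = { Hardness = 0.9,  Absorption = 0.3,  Reflectance = 0.8  },
	Metal    = { Hardness = 0.95, Absorption = 0.05, Reflectance = 0.95 },
	Grass    = { Hardness = 0.2,  Absorption = 0.85, Reflectance = 0.1  },
	Fabric   = { Hardness = 0.1,  Absorption = 0.9,  Reflectance = 0.05 },
}

-- Look up a material's acoustic profile, falling back to Default
local function getAcoustics(material: Enum.Material)
	return MaterialAcoustics[material.Name] or MaterialAcoustics.Default
end
```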

Each sound marked as ‘spatial’ will have this template fader wired into it and chained correctly:
[Image: pseudo-code flow chart of the wiring and how this system would work]
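In code, the wiring boils down to chaining Wire instances between the stages. This is a simplified sketch of one possible chain, not the full 15-instance template:

```lua
-- Simplified version of the per-sound chain:
-- AudioPlayer -> AudioFader (muffle) -> AudioReverb -> AudioEmitter
local function buildSpatialChain(part: BasePart, assetId: string)
	local player = Instance.new("AudioPlayer")
	player.Asset = assetId
	player.Parent = part

	local fader = Instance.new("AudioFader")     -- occlusion/muffle volume
	fader.Parent = part

	local reverb = Instance.new("AudioReverb")   -- environmental reverb
	reverb.Parent = part

	local emitter = Instance.new("AudioEmitter") -- emits into 3D space
	emitter.Parent = part

	-- Each Wire carries the stream from one stage into the next
	local function wire(source: Instance, target: Instance)
		local w = Instance.new("Wire")
		w.SourceInstance = source
		w.TargetInstance = target
		w.Parent = target
	end
	wire(player, fader)
	wire(fader, reverb)
	wire(reverb, emitter)

	return player, fader, reverb, emitter
end
```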

When the AudioEmitter source point is obscured from the camera behind a wall, the system then tries to work out how closely connected it is to the player using PathfindingService*

*Built-in pathfinding is NOT a perfect solution for this, as it is built for entities that abide by gravity.

Therefore, obscured audio is unnaturally more muffled if the sound is behind a wall that is across an unreachable gap, or if the sound is below a player blocked by an open platform with no easy way up. I debated creating my own 3D-space pathfinder using a volumetric grid, but I am stupid and only got as far as creating an occupancy grid. Plus, any pathfinding algorithm I could implement would be far less optimised than the inbuilt pathfinding designed by professional programmers.
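For reference, the occlusion check plus pathfinding fallback is roughly this (simplified sketch; filter lists, caching and smoothing are omitted):

```lua
local PathfindingService = game:GetService("PathfindingService")

-- Is anything solid between the camera and the emitter?
local function isOccluded(camera: Camera, emitterPosition: Vector3): boolean
	local origin = camera.CFrame.Position
	local result = workspace:Raycast(origin, emitterPosition - origin)
	return result ~= nil
end

-- Walkable distance from listener to sound; nil means unreachable
local function pathDistance(fromPosition: Vector3, toPosition: Vector3): number?
	local path = PathfindingService:CreatePath()
	path:ComputeAsync(fromPosition, toPosition)
	if path.Status ~= Enum.PathStatus.Success then
		return nil -- unreachable gap: this is where the over-muffling happens
	end
	local total = 0
	local waypoints = path:GetWaypoints()
	for i = 2, #waypoints do
		total += (waypoints[i].Position - waypoints[i - 1].Position).Magnitude
	end
	return total
end
```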

This system has to be refined and performance costs have to be taken into account, as each spatial sound adds AT LEAST 15 more instances (faders, effect chains, wires etc.) for what would have been ONE single Sound instance. It also uses a lot of raycasting, so I have set it up so that some sound effects have a ‘fast’ mode that uses fewer raycasts and no muffle pathfinding.

Despite these flaws, I am pleased with the resulting system. It adds a new depth of polish to my game world by making the player feel more immersed in their surroundings!


The demo

Demo video 1:
Material acoustics

Demo video 2:
Showing off how a streaming source of music is affected by its environment and its relation to you


Looks (sounds) amazing!

Only 3 questions about it:

  • Would you ever make this open source / a plug-and-play module?
  • How many ms of frame time do these calculations eat up?
  • Will you use Parallel Luau for raycasting optimizations?

I’ll consider releasing something to the public once I’ve ironed things out, after this game goes public - I’ll likely have thought of a better solution by then. For now I have just shared my workflow 🙂 It could be made plug-and-play by listening for when a new Sound instance is inserted into the workspace, creating a duplicate with an AudioEmitter and silencing the original sound so that the new AudioEmitter can play whenever the sound is played.
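Something along these lines (an untested sketch of the idea, not the code from my game):

```lua
-- Replace each Sound with a silent original + AudioPlayer/AudioEmitter pair
local function adoptSound(instance: Instance)
	if not instance:IsA("Sound") then return end
	local part = instance.Parent
	if not (part and part:IsA("BasePart")) then return end

	instance.Volume = 0 -- silence the original; the emitter takes over

	local player = Instance.new("AudioPlayer")
	player.Asset = instance.SoundId
	player.Parent = part

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = part

	local wire = Instance.new("Wire")
	wire.SourceInstance = player
	wire.TargetInstance = emitter
	wire.Parent = emitter

	-- Mirror the original Sound's playback onto the new chain
	instance.Played:Connect(function() player:Play() end)
	instance.Stopped:Connect(function() player:Stop() end)
end

workspace.DescendantAdded:Connect(adoptSound)
for _, descendant in workspace:GetDescendants() do
	adoptSound(descendant)
end
```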

In my testing I found that when an object is visible to the camera, each audio emitter takes roughly 0.25ms (0.00025s) to calculate its environmental acoustics on my i7 7700K (alongside all of the many other RenderStepped calculations happening in my flavour of the game engine). So I could have a hundred of these and it theoretically wouldn’t impact performance much in this best-case scenario.


However, when the object becomes obscured from the camera and enters the ‘pathfinding’ mode, this increases to 50ms per calculation, with the first one or two calculations being much higher (presumably because the pathfinding algorithm does the heavy lifting first and then ‘caches’ pathing results?).


This is why I provided myself the ability to have a ‘fast’ calculation mode for sound effects that are less important or spawned in large numbers, such as fire globules, reserving the more costly calculations for more important sounds such as explosions and player movement SFX.
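The split is as simple as a per-sound flag (hypothetical names; the helpers stand in for the real calculations):

```lua
-- Stand-ins for the real work described above
local function probeReflections(emitter: AudioEmitter, rayCount: number)
	-- cast `rayCount` rays outward and sample the materials they hit
end

local function updateMuffleViaPathfinding(emitter: AudioEmitter)
	-- the costly PathfindingService check from earlier
end

local function updateAcoustics(emitter: AudioEmitter)
	local fast = emitter:GetAttribute("FastAcoustics") == true
	probeReflections(emitter, fast and 4 or 16) -- fewer rays in fast mode
	if not fast then
		updateMuffleViaPathfinding(emitter) -- skipped entirely in fast mode
	end
end
```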

I am merely a lowly technical designer rather than a professional programmer, so I haven’t considered Parallel Luau (in fact I didn’t know what that was until you mentioned it) - I’ll try and learn more about it!


I have a script that does a pretty similar job to this, but how did you calculate all the values for the reverb? Do you have a spreadsheet of some sort that I could use? And if not, could you tell me how you learned?

I just have two data sets: one contains every material within the Roblox engine, with a string link used to look up a particular acoustic setting in the other set. They are made up of empty folders that hold several attributes. I should really transfer over to a purely code-based data set using a custom class or a simple table, but this is how I started, as it visually works best with how I think.
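Reading it back out looks roughly like this (folder and attribute names are from memory, so treat them as approximate):

```lua
-- Two folder-based data sets: Materials links into Acoustics by name
local materialsFolder = script.Materials -- one empty folder per Enum.Material
local acousticsFolder = script.Acoustics -- one empty folder per acoustic profile

local function getAcousticProfile(material: Enum.Material)
	local entry = materialsFolder:FindFirstChild(material.Name)
	local link = entry and entry:GetAttribute("AcousticSetting")
	local profile = (link and acousticsFolder:FindFirstChild(link))
		or acousticsFolder.Default -- fallback, see below
	return {
		Hardness = profile:GetAttribute("Hardness"),
		Absorption = profile:GetAttribute("Absorption"),
		Reflectance = profile:GetAttribute("Reflectance"),
	}
end
```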

For each material I just made up the statistics based on what I can infer from personal experience and assumptions. Stone-based materials have high hardness, moderate absorption and high reflectance, making caves & alleyways echo quite strongly. Metals have very high hardness, very low absorption and very high reflectance - giving that signature metallic reverb effect.

On the other hand, softer materials with rougher surfaces such as grass, snow, sand and fabric have high absorption with low reflectance and hardness.

I’m curious, but what does the Default folder in Acoustics do? What would it be used for?

It’s a fallback in case there is no material assigned or the link is broken on the raycast part, and also for miscellaneous materials such as ForceField and Neon 😀
