After working with the new audio API, I have a few feature ideas that would make it possible to create more dynamic and immersive environments.
Feature 1: Directional Sensitivity
It would be awesome to have a property on AudioListener that controls its directional sensitivity to sound, similar to how different microphones capture sound. The same idea could also be applied to AudioEmitter, but for the direction of sound emission.
While AudioInteractionGroup can sometimes provide a workaround, a built-in directional sensitivity feature would be more accurate, more flexible, and easier for developers to use when building dynamic environments.
Example
In a stage or presentation setting, a directional mic could prevent audience voices from being picked up, focusing only on the speaker in front of it. Unfortunately, AudioListeners currently hear in all directions, so a manual system is needed.
The first technique that comes to mind is to give each audience member’s voice AudioEmitter a different interaction group than the stage mic’s AudioListener. This achieves the desired result but isn’t very flexible: imagine we wanted anyone to be able to walk on stage and be picked up by the mic, with their voice heard based on its direction and distance to the mic.
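A minimal sketch of that group-based setup, assuming a StageMic part in the workspace that holds the mic’s AudioListener:

```lua
local Players = game:GetService("Players")

-- A listener only interacts with emitters in the same interaction group.
-- Keeping audience voices in a different group means the mic never hears them.
local stageListener = workspace:WaitForChild("StageMic"):WaitForChild("AudioListener")
stageListener.AudioInteractionGroup = "Stage"

for _, player in Players:GetPlayers() do
	local character = player.Character
	local voiceEmitter = character and character:FindFirstChildWhichIsA("AudioEmitter")
	if voiceEmitter then
		voiceEmitter.AudioInteractionGroup = "Audience" -- not "Stage", so the mic ignores it
	end
end
```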
The current workaround would require tracking each player’s direction and position relative to the listener.
Additionally, simulating accurate distance attenuation would require a mic listener for each player/input, because DistanceAttenuation is only a property of AudioEmitter. Having it on AudioListener as well would be useful.
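Here is a rough sketch of what that manual system looks like in practice. It assumes each character’s voice is routed through an AudioFader named “VoiceFader” (an assumption about how the audio graph is wired) and that a StageMic part holds the listener:

```lua
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local micPart = workspace:WaitForChild("StageMic") -- part holding the mic's AudioListener
local MAX_RANGE = 50 -- studs; arbitrary cutoff for the manual attenuation

RunService.Heartbeat:Connect(function()
	for _, player in Players:GetPlayers() do
		local character = player.Character
		local root = character and character:FindFirstChild("HumanoidRootPart")
		-- "VoiceFader" is an assumed AudioFader on each player's voice chain
		local fader = character and character:FindFirstChild("VoiceFader")
		if root and fader then
			local offset = root.Position - micPart.Position
			local distance = offset.Magnitude
			if distance > 0 then
				-- Cosine falloff: full volume straight ahead, silent at 90° and behind
				local facing = math.max(micPart.CFrame.LookVector:Dot(offset.Unit), 0)
				-- Manual distance attenuation, since AudioListener has none built in
				local attenuation = math.clamp(1 - distance / MAX_RANGE, 0, 1)
				fader.Volume = facing * attenuation
			end
		end
	end
end)
```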
With a directional property, the AudioListener could be defined as most sensitive to sounds in front of it. This would ignore audience voices while still being flexible enough to automatically pick up anyone who walks on stage, with their voice heard accurately.
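With a built-in property, all of the tracking above could collapse into a couple of lines. To be clear, the property and enum here are hypothetical names for illustration, not existing API:

```lua
-- Hypothetical API: neither `PolarPattern` nor this enum exist today.
local listener = workspace.StageMic.AudioListener
listener.PolarPattern = Enum.AudioPolarPattern.Cardioid -- most sensitive in front, rejects sound from behind
```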
Visualization
Polar patterns are a great way to visualize what this property might allow. Control over defining custom polar patterns would be amazing; at the very least, an enum would cover most use cases. Currently, both listeners and emitters are omnidirectional.
In Roblox, a visualization feature would be necessary to see how the directional sensitivity is oriented relative to the parent.
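For custom patterns, one imaginable shape for the API is an angle-to-sensitivity curve, mirroring how distance attenuation curves work for emitters. Everything below is hypothetical:

```lua
-- Hypothetical: a curve mapping angle from "forward" (in degrees) to sensitivity.
local listener = workspace.StageMic.AudioListener
listener:SetDirectionalSensitivity({
	[0] = 1.0, -- full sensitivity straight ahead
	[90] = 0.3, -- quieter from the sides
	[180] = 0.0, -- deaf to sound from behind
})
```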
Feature 2: Parent-Independent Alignment
AudioListener and AudioEmitter must be parented to something with an orientation, so the “forward” direction of these audio instances is locked to the orientation of the parent. I propose more control over how they are oriented.
Example
Let’s say I make a top-down experience with the camera locked to a fixed orientation. I want spatial audio, so I put an AudioListener in the character. While facing one direction, a sound is heard from the right; if my character does a 180° turn, that sound is now heard from the left, which contradicts how my camera is oriented.
To work around this, I would have to create a part or attachment that is updated every frame to match my character’s position, but not its rotation, and parent the AudioListener to this extra instance.
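A minimal sketch of that workaround as a client-side script (wiring the listener into the rest of the audio graph is omitted):

```lua
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local player = Players.LocalPlayer

-- Invisible anchor that carries the listener: it follows the character's
-- position every frame but keeps a fixed world orientation.
local anchor = Instance.new("Part")
anchor.Name = "ListenerAnchor"
anchor.Anchored = true
anchor.CanCollide = false
anchor.Transparency = 1
anchor.Parent = workspace

local listener = Instance.new("AudioListener")
listener.Parent = anchor

RunService.Heartbeat:Connect(function()
	local character = player.Character
	local root = character and character:FindFirstChild("HumanoidRootPart")
	if root then
		-- Copy position only; the identity rotation keeps "forward" fixed globally
		anchor.CFrame = CFrame.new(root.Position)
	end
end)
```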
Rough Idea
Instead, a property could be introduced, say “AlignmentMode”, inspired by the similar property on constraints. If AlignmentMode is “Parent”, it matches the parent’s orientation, as it does currently. If AlignmentMode is “Global”, the alignment could be locked to a specific global direction. There could also be modes for per-axis control.
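As a sketch, the top-down example could then be handled declaratively; the property and enum names below are hypothetical:

```lua
local Players = game:GetService("Players")

local character = Players.LocalPlayer.Character or Players.LocalPlayer.CharacterAdded:Wait()

-- Hypothetical API: `AlignmentMode`, `GlobalForward`, and this enum do not exist today.
local listener = Instance.new("AudioListener")
listener.AlignmentMode = Enum.AudioAlignmentMode.Global -- ignore the parent's rotation
listener.GlobalForward = Vector3.new(0, 0, -1) -- lock "forward" to a fixed world direction
listener.Parent = character:WaitForChild("HumanoidRootPart")
```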
If directional sensitivity is added for AudioEmitter, then this feature could also work for emitters.
Recap
- AudioListener needs a property for controlling how sensitively it hears sound from different directions.
- AudioEmitter needs a property for controlling how strongly it emits sound in different directions.
- AudioListener needs a DistanceAttenuation property.
- AudioListener needs properties to control its alignment when hearing spatial emitters.
- AudioEmitter needs properties to control its alignment when emitting sound, if directional sensitivity is added for emitters.