New Audio API Features: Directional Audio, AudioLimiter and More

Hi Creators!

Today, we’re excited to close out the year with some more updates to the new Audio API:

  • AngleAttenuation (Beta)
  • AudioLimiter
  • AudioPlayer:GetWaveformAsync
  • AudioEcho.RampTime
  • SoundService.DefaultListenerLocation

A big thanks to everyone who is experimenting and giving us feedback on it - your contributions help drive our feature development!

AngleAttenuation (Beta)

As promised, we’re excited to deliver our new AngleAttenuation API, which controls how loudly a sound is emitted or heard based on direction, rather than distance. This takes the custom rolloff curves from our previous distance attenuation API and extends them to work for listening angles as well. It allows you to simulate highly directional sound sources, like loudspeakers, megaphones, and sonar, or directionally sensitive listeners, emulating realistic microphone patterns (e.g. shotgun, figure-8, etc), or any other custom shape you can imagine.

With this update, both AudioEmitter and AudioListener include new GetAngleAttenuation() and SetAngleAttenuation() methods, which let you adjust the volume of AudioEmitters or the sensitivity of AudioListeners depending on the angle. You can use these tools to improve realism or invent new, innovative gameplay mechanics.
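As a quick sketch of how this might look in a script — the curve-table shape here mirrors SetDistanceAttenuation and is an assumption, so check the documentation for the exact parameter format:

```lua
-- Make an emitter behave like a loudspeaker: full volume on-axis,
-- tapering to silence behind it. Keys are assumed to be angles in
-- degrees and values gain multipliers (shape assumed to mirror
-- SetDistanceAttenuation).
local emitter = script.Parent -- an AudioEmitter

emitter:SetAngleAttenuation({
	[0] = 1,    -- directly in front: full volume
	[45] = 0.8, -- slightly off-axis: mostly audible
	[90] = 0.3, -- to the side: quiet
	[180] = 0,  -- directly behind: silent
})
```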

Similar to DistanceAttenuation, we’re providing a visual editor to help you easily create and adjust AngleAttenuation curves.

More detailed information can be found in our documentation.

AudioLimiter

We’ve heard your requests for a way to constrain audio stream volumes that’s stronger and more immediate than our AudioCompressor. For this, we’ve added AudioLimiter, a new instance that sets a hard volume cap on a sound. Unlike an AudioCompressor, the AudioLimiter responds instantly, ensuring an audio stream is always quieter than the set ‘MaxLevel’.
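As a rough sketch of how an AudioLimiter might be wired into a chain — the wiring pattern follows the rest of the new Audio API; the dB unit for MaxLevel and the placeholder asset id are assumptions:

```lua
local SoundService = game:GetService("SoundService")

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id

local limiter = Instance.new("AudioLimiter")
limiter.MaxLevel = -6 -- hard volume ceiling (assumed to be in dB)

local output = Instance.new("AudioDeviceOutput")

-- Wire the chain: player -> limiter -> output
local wireA = Instance.new("Wire")
wireA.SourceInstance = player
wireA.TargetInstance = limiter
wireA.Parent = limiter

local wireB = Instance.new("Wire")
wireB.SourceInstance = limiter
wireB.TargetInstance = output
wireB.Parent = output

player.Parent = SoundService
limiter.Parent = SoundService
output.Parent = SoundService
```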

AudioPlayer:GetWaveformAsync

Have you ever wanted to visualize or preview the waveform of an audio asset before it plays? Now you can with the new GetWaveformAsync() method in AudioPlayer. This way, you can now access a visual representation of the waveform to enhance your audio tools and visualizations.
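A minimal sketch of what a call could look like; the signature below (a NumberRange plus a sample count, returning an array of normalized amplitudes) is an assumption, so check the AudioPlayer documentation for the real parameters:

```lua
local player = script.Parent -- an AudioPlayer with an asset loaded

-- Assumed signature: GetWaveformAsync(range: NumberRange, samples: number) -> {number}
local waveform = player:GetWaveformAsync(NumberRange.new(0, player.TimeLength), 100)

-- Crude text visualization: one bar per sample
for _, amplitude in waveform do
	print(string.rep("#", math.floor(amplitude * 40)))
end
```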

AudioEcho.RampTime

We’re also enhancing AudioEcho with a new RampTime property. AudioEcho is now an interpolating delay line, meaning that when RampTime is greater than 0, any changes to AudioEcho.DelayTime or AudioEcho.Feedback are smoothly updated over a specified number of seconds instead of instantaneously changing.

This means that adjusting DelayTime will temporarily and cleanly alter pitch instead of creating unpleasant crackling. You can use this to simulate acoustic travel time with just a few lines of code:

Sample code for acoustic travel time:

```lua
local RunService = game:GetService("RunService")

-- Assumes this Script is parented to an AudioEcho, with "Distance" and
-- "SpeedOfSound" attributes set on the Script
local echo = script.Parent
echo.RampTime = 0.1

-- Update the delay every frame; RampTime smooths the changes
RunService.Heartbeat:Connect(function()
	echo.DelayTime = script:GetAttribute("Distance") / script:GetAttribute("SpeedOfSound")
end)
```

And if RampTime is larger than 0, this comes with the Doppler effect for free.

SoundService.DefaultListenerLocation

Finally, SoundService now includes a DefaultListenerLocation property. This property automatically spawns an AudioListener and attaches it to the camera or the player’s head, without the need for additional scripting. This change should make your sound design workflow a bit easier!
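Switching it on should be a one-liner along these lines — the enum name and member are assumptions based on the property name, so check the SoundService documentation:

```lua
local SoundService = game:GetService("SoundService")

-- Automatically attach an AudioListener to the camera
-- (enum name assumed; check the SoundService documentation)
SoundService.DefaultListenerLocation = Enum.ListenerLocation.Camera
```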

We’re thrilled to wrap up 2024 with these exciting new features and are eager to bring you even more improvements early in the new year! A huge thanks to @Doctor_Sonar, @cognitivetest_306, @ReallyLongArms, and @therealmotorbikematt for their ongoing support in gathering feedback and helping make audio creation on Roblox easier and more accessible. Keep sharing your feedback and demos; we’re inspired by hearing what you create!

Sincerely,

The Roblox Audio Team

206 Likes


Because nobody will probably see my comment on the original Audio API post, I’ll copy-paste my message here:

There is currently a limit when playing more than ~300 audio sources (regardless of the number of audio emitters), which is a real bottleneck for my system. Adding more than 300 audio sources linked to emitters leads to some of them not playing at all.

I’ve tested in a number of ways, but for my application I use around 15-20 sounds per source, emitted through 6 speakers (each unit). The volume and other properties do get applied to the sources, but no sound comes from sources added after the ~300-ish limit, I suppose.

Edit: For context, all the emitters are not modulated at all, and there is only a reverb process applied after the general audio is captured by the audio listener.

42 Likes

in WHAT application do you need 300 active audio sources bro :sob::sob:

55 Likes

Train engine simulator :joy:. Each train modulates 15-25 sounds in real time

25 Likes

Any updates on whether we will ever be able to spectrum-analyze voice chat inputs?

12 Likes

would be cool if we could automate EQ with AngleAttenuation, could do pseudo HRTF effect with that

7 Likes

this is awesome stuff that you would never usually see with ROBLOX. keep it up!

8 Likes

Hey @cellork1; I replied to your original comment here

8 Likes

Are there any plans for an instance that generates a specified frequency? This would be useful for generating procedural engine sounds or other forms of complex sound without needing multiple different AudioEmitters.

9 Likes

if it’s really needed, just handle all sounds on the client and only create & play instances that are within a specific radius of the player, OR combine sounds that don’t need to be handled on their own. most game engines cap out at 32 sounds at once. the fact roblox is managing 300 alone should actually be complimented :sob:
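The radius-based culling described here could be sketched roughly like this in a LocalScript — all names, the 150-stud radius, and the assumption that each AudioPlayer is parented under its AudioEmitter are illustrative:

```lua
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local MAX_DISTANCE = 150 -- illustrative cutoff radius, in studs

RunService.Heartbeat:Connect(function()
	local character = Players.LocalPlayer.Character
	local root = character and character:FindFirstChild("HumanoidRootPart")
	if not root then
		return
	end

	-- Only keep AudioPlayers running when their emitter's part is nearby
	for _, emitter in workspace:GetDescendants() do
		if emitter:IsA("AudioEmitter") and emitter.Parent:IsA("BasePart") then
			local audioPlayer = emitter:FindFirstChildOfClass("AudioPlayer")
			if audioPlayer then
				local near = (emitter.Parent.Position - root.Position).Magnitude <= MAX_DISTANCE
				if near and not audioPlayer.IsPlaying then
					audioPlayer:Play()
				elseif not near and audioPlayer.IsPlaying then
					audioPlayer:Stop()
				end
			end
		end
	end
end)
```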

18 Likes

I LOVE THE AUDIO API, these new features will be awesome for a lot of games, thank you roblox

8 Likes

Hey there,
Getting direct access to voice spectra isn’t on the roadmap. However, there are a number of associated functions that we do have on our Creator Roadmap, including Speech to Text.

4 Likes

Hey @gizzygazzy360; I think you can already sort of do this, but it involves some math and scripting; there’s a similar code snippet in the AudioEqualizer docs

If you can think of any APIs that would make this more straightforward, we’re all ears!

4 Likes

yea I’m aware this is possible with scripting (I’ve done something similar already); I bring it up because I’d imagine a built-in solution would be more performant.

4 Likes

This is so cool!
Can we please also get an API for audio synthesis though?

I’ve always wanted to generate waveforms and make synthesizers in Roblox, it would be such a cool thing and is currently not really doable.

We got editable images and meshes, can sound be next please? :pray:

11 Likes

Now that’s very useful! Hoping for the equivalent of SoundGroups but for the new audio API (or if there’s already one, please tell me, I wanna make a sound slider setting :sob:)

2 Likes

Are there any plans to reintroduce the Volumetric Audio feature in the future?
This is really cool though, since there can be many use cases for angle attenuation.

2 Likes

Hey - SoundGroups are essentially just a chain of audio effects. You can accomplish the same things in the new API by wiring audio instances into each other one at a time.

For a sound slider setting, you can use an AudioFader, which has a Volume property. You can set up a chain like:

AudioPlayer -> various effects -> AudioFader -> AudioDeviceOutput
                                  ^
                                  your slider controls this

where each -> is a Wire instance.

Hope that helps!
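In script form, the chain above might look like this — the asset id is a placeholder and the "various effects" in the middle are omitted for brevity:

```lua
local SoundService = game:GetService("SoundService")

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id

local fader = Instance.new("AudioFader")
fader.Volume = 0.5 -- drive this from your settings slider

local output = Instance.new("AudioDeviceOutput")

-- Each "->" in the diagram is a Wire instance
local function connect(source, target)
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = target
end

connect(player, fader)
connect(fader, output)

player.Parent = SoundService
fader.Parent = SoundService
output.Parent = SoundService
```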

6 Likes

I would love to see a sister method to SoundService:PlayLocalSound() that would take these new features and put them together to play a 3D sound without having to create a new sound instance each time, making sound replication more performant and easier to do.

3 Likes