Roblox Audio API Exits Beta: Enhanced Sound Controls Now Available

I know this isn’t too relevant, but if anyone happens to know the soundtrack in the demo videos, please let me know.

@IdontPlayz343 in addition to what @DaDude_89 mentioned, I’d say the main scenarios where the new API is required involve branching signal flow.

In the Sound/SoundEffect/SoundGroup API (sketched in code after this list):

  1. you can assign Sound.SoundGroup to one SoundGroup – picking a different one re-routes the sound altogether
  2. you can parent a SoundGroup to one other SoundGroup – picking a different one re-routes the group
  3. SoundEffects can be parented to Sounds or SoundGroups; if there are multiple SoundEffect children, their SoundEffect.Priority property determines the order they are applied in sequence (one after another)
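
For illustration, that model looks roughly like this in Luau – a sketch, where the asset id is a placeholder and the specific effect classes are just examples:

```lua
-- Legacy routing: one SoundGroup per Sound, effects as ordered children
local SoundService = game:GetService("SoundService")

local music = Instance.new("SoundGroup")
music.Name = "Music"
music.Parent = SoundService

local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0" -- placeholder asset id
sound.SoundGroup = music -- exactly one group; reassigning re-routes the sound
sound.Parent = workspace

-- with multiple SoundEffect children, Priority controls the order
-- they run in, one after another
local eq = Instance.new("EqualizerSoundEffect")
eq.Priority = 0
eq.Parent = sound

local reverb = Instance.new("ReverbSoundEffect")
reverb.Priority = 1
reverb.Parent = sound
```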

In the new API, all connections are made with Wires instead of parent/child relationships or reference properties. Wires support both many-to-one connections (like SoundGroups) and one-to-many connections, which also means effects don’t need to be applied in sequence; you could set up something like

               - AudioReverb -
              /               \
AudioPlayer -                  - AudioFader
              \               /
               - AudioFilter -

(sorry about the ASCII art :sweat_smile:) to apply effects in parallel to one another.
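
In code, that graph might come out to something like the following sketch – the connect helper and the placeholder asset id are mine, not part of the API:

```lua
-- parallel effects with the new API: one Wire per edge in the diagram
local player = Instance.new("AudioPlayer")
local reverb = Instance.new("AudioReverb")
local filter = Instance.new("AudioFilter")
local fader = Instance.new("AudioFader")
local output = Instance.new("AudioDeviceOutput") -- so the result is audible

local function connect(source: Instance, target: Instance)
    local wire = Instance.new("Wire")
    wire.SourceInstance = source
    wire.TargetInstance = target
    wire.Parent = target
end

connect(player, reverb) -- player fans out to both effects...
connect(player, filter)
connect(reverb, fader) -- ...and fader mixes the branches back together
connect(filter, fader)
connect(fader, output)

for _, instance in {player, reverb, filter, fader, output} do
    instance.Parent = workspace
end

player.AssetId = "rbxassetid://0" -- placeholder
player:Play()
```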

Additionally, the new API supports microphone input via AudioDeviceInput, so you can use it to control voice chat in all the same ways as audio files!
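
For example – a sketch that assumes voice chat is enabled and that each player’s AudioDeviceInput is parented under their Player object:

```lua
-- mute or unmute one player's microphone, like any other audio source
local function setMicMuted(player: Player, muted: boolean)
    local input = player:FindFirstChildOfClass("AudioDeviceInput")
    if input then
        input.Muted = muted
    end
end
```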

I know this isn’t too relevant, but if anyone happens to know the soundtrack in the demo videos, please let me know.

@iihelloboy this was made by @YPT300 :grin:

4 Likes

:0

4 Likes

Will occlusion and whatnot be added to the Sound/SoundGroup instances, or will they be exclusive to this new API?

3 Likes

Whoops! Will get that fixed. Thanks for notifying us.

4 Likes

Sound design was always something ROBLOX was really lacking in… this will surely help with that problem

1 Like

I’m glad that this is finally out of beta; I enjoyed seeing the new features as they were being added. I hope we eventually get more of these extremely customizable, in-depth APIs.

On a side note, is anyone else unable to see the last 2 demo videos?

1 Like

Can someone explain the difference between “AudioFilter” and the normal EQ?

1 Like

Can sound/voicechat memory get added to Stats | Documentation - Roblox Creator Hub, so that it can be queried via GetMemoryUsageMbForTag with an Enum.DeveloperMemoryTag value?

I know you can get Sounds as a whole, but to my knowledge that figure includes other sound types, like SoundEffects, as well.

For my game, for example, I’d like to know how much voice chat lag is present, since sound/voicechat memory can have a huge impact, and I’d like to be able to share that “VC Lag” on screen with players so they know why they might be lagging.
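
For reference, the closest thing available today is the aggregate tag, which lumps everything together:

```lua
-- today: a single number covering all sound memory, effects included
local Stats = game:GetService("Stats")
local soundsMb = Stats:GetMemoryUsageMbForTag(Enum.DeveloperMemoryTag.Sounds)
print(string.format("Sound memory: %.1f MB", soundsMb))
```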

2 Likes

AudioFilter is just one band and has different options. Take a peek at the videos or, better yet, play with it in Studio :wink:
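
A quick sketch of the difference (the exact Enum.AudioFilterType value name below is from memory – check the FilterType dropdown in Studio):

```lua
-- EqualizerSoundEffect: three fixed bands, gain only
local eq = Instance.new("EqualizerSoundEffect")
eq.LowGain = -6
eq.MidGain = 0
eq.HighGain = 3

-- AudioFilter: a single band, but with selectable filter shapes
local filter = Instance.new("AudioFilter")
filter.FilterType = Enum.AudioFilterType.Lowpass12dB -- assumed value name
filter.Frequency = 1000 -- cutoff in Hz
```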

1 Like

my producer brain is going CRAZY

Anything that makes sound design more dynamic is a win for me, especially when current methods don’t have to be replaced with new ones (I always dislike when that happens).

I might have missed something though: are directional sounds now possible without a ton of obnoxious local scripting? Something like a sound emitter/instance that plays audio aimed in a specific direction, the way a speaker does (think one audio for the front of a jet engine and one for the rear, or a trumpet/horn blasting a certain way). I know it has been suggested before, but I have honestly lost track of most of this update and am unsure whether it got implemented.

Not sure if this is an Audio API bug or something else, but sometimes on initial join, everyone is muted (for you) by default even when they’re talking, and the only way to fix it is to press Esc, then manually Mute and Unmute everyone.

The green and red bars indicate that they’re talking, and the video showcases how I need to Mute and Unmute them to hear voices.

I might have missed something though: are directional sounds now possible without a ton of obnoxious local scripting? Something like a sound emitter/instance that plays audio aimed in a specific direction, the way a speaker does (think one audio for the front of a jet engine and one for the rear, or a trumpet/horn blasting a certain way). I know it has been suggested before, but I have honestly lost track of most of this update and am unsure whether it got implemented.

Hey @DieselElevatorsRBLX; this is being worked on currently – we’re aiming to add a directional-attenuation API (and accompanying visual-editor) that lets you specify how loud each AudioEmitter or AudioListener emits/hears in each direction.

sometimes on initial join, everyone is muted (for you) by default even when they’re talking, and the only way to fix it is to press Esc, then manually Mute and Unmute everyone.

Hey @Nogora22; we’re aware of this issue and working on a fix – sorry for the inconvenience.

As a very temporary measure, you might be able to work around this by running a LocalScript that loops through all AudioDeviceInputs and flips their .Muted property back and forth – but this shouldn’t be necessary for long.
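
Something along these lines – a rough sketch, where the startup delay is arbitrary and each AudioDeviceInput is assumed to be parented under its Player:

```lua
-- LocalScript: flip Muted off and on once, shortly after joining
local Players = game:GetService("Players")

task.wait(5) -- arbitrary delay to let voice connections settle

for _, player in Players:GetPlayers() do
    local input = player:FindFirstChildOfClass("AudioDeviceInput")
    if input then
        local wasMuted = input.Muted
        input.Muted = not wasMuted
        input.Muted = wasMuted
    end
end
```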

5 Likes

This is bugged: if you put the M on the highest level, H on the mid level, and L on the lowest, then drag it to the right, it breaks the editor completely, forcing me to restart Studio for it to work again.
Here is a video of me doing the bug:


(I didn’t have permission to post in the bug reports channel, so I put it here instead.)

Hey @nikos300; @cognitivetest_306 is working on a fix for this that should be coming soon.

1 Like

Any update on this implementation :sweat_smile:

Two things I’ve really wanted since day one of this audio overhaul: realistic sound travel time, and some sort of AudioDelay component.

Inspiration

In real life, sound is made up of pressure waves, and pressure waves propagate through the air at a finite speed – usually around 345 m/s (~770 mph). This means sound information takes considerable time to travel from A to B, unlike light (which, in comparison to sound, might as well be instantaneous). This is also why the beloved Doppler effect exists.

I’ve been playing a certain stormchaser title recently, and while the sound design can be really good at times, one glaring issue I’ve noticed is the lack of sound physics. When lightning strikes, thunder is heard immediately afterwards. This makes thunderstorms feel tiny and insubstantial.
While you can achieve the delay with scripting, it requires you to develop your own custom sound system, and it’s prone to jank. Rightfully, most people would rather focus on something else instead.

I’m proposing two different components: AudioDelay and AudioPhysics.

AudioDelay would be a dead-simple component that delays an audio signal by a fixed time. Simple, to the point.
Currently you can approximate this with an AudioEcho by disabling DryLevel and Feedback and using only DelayTime and WetLevel, but this causes a bunch of weirdness, like ear-piercing crackling and popping… clearly not the intended use case for that component.
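
For reference, the workaround looks like this – I’m assuming the level properties are in decibels, where a large negative value is effectively silent:

```lua
-- AudioEcho pressed into service as a fixed delay: one delayed copy, no repeats
local echo = Instance.new("AudioEcho")
echo.DelayTime = 2 -- seconds of delay
echo.Feedback = -80 -- assuming dB: effectively disables the feedback loop
echo.DryLevel = -80 -- assuming dB: mutes the un-delayed signal
echo.WetLevel = 0 -- the delayed copy at unity gain
```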

AudioPhysics would be a specialized AudioDelay that specifically simulates sound propagation delay, with an adjustable speed of sound. It would have the added benefit of being directly tied to the physics/sound engine, making it more efficient than any Luau-based solution.
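
For a sense of the math it would automate, here’s a Luau sketch (it assumes one stud is roughly one meter):

```lua
-- approximate propagation delay from emitter-to-listener distance
local SPEED_OF_SOUND = 345 -- m/s, matching the figure above

local function propagationDelay(emitter: Vector3, listener: Vector3): number
    return (emitter - listener).Magnitude / SPEED_OF_SOUND
end

-- e.g. thunder from a strike 3450 studs away arrives 10 seconds late
print(propagationDelay(Vector3.zero, Vector3.new(3450, 0, 0)))
```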


As always, absolutely loving what the Audio Team’s been cooking up so far. Audio tech in Roblox (and video games in general…) was always kinda lackluster. Huge props to everyone involved.

If you need this to be a dedicated feature request, just ask. This just seemed like the most appropriate place.

5 Likes

Hey @ee0w; AudioEcho is not an interpolating delay line, so you don’t get the nice pitch changes if you use it to simulate Doppler; that said, it definitely shouldn’t be producing ear-piercing crackling noises – can you submit a bug report for that?

I can definitely see value in adding an interpolating delay, along with an easy way to configure it from distance & speed-of-sound. Thanks for the feedback!

5 Likes

Hey @ee0w, we added an AudioEcho.RampTime property that can be used to smoothly interpolate changes to the DelayTime. Currently this is live in Studio, so you can play around with it – we plan to push it out to clients soon, but want to make some performance improvements before it gets enabled everywhere.
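
A minimal usage sketch:

```lua
local echo = Instance.new("AudioEcho")
echo.Parent = workspace

echo.RampTime = 0.5 -- DelayTime changes now glide over half a second
echo.DelayTime = 1.5 -- ramps smoothly instead of jumping, giving the pitch bend
```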

Be on the lookout for an announcement :eyes:

Edit: announced here

3 Likes