Help with new Audio API

A while back, I shifted my game's audio from sound instances to the new API.

This is all working fine and dandy, and I don't need help wiring them. However, my game has sound settings to alter the volumes of sounds.

Previously, my method of doing this was with SoundGroups, which to my knowledge do not work with the new Audio API.

I was thinking to just check when a descendant is added to workspace and use attributes to update the sounds accordingly, but surely there's a better solution, right?

i.e., this is something my code would resemble (of course I'd save the default volumes in attributes and so on):

local Players = game:GetService("Players")
local player = Players.LocalPlayer

workspace.DescendantAdded:Connect(function(desc)
    if desc:IsA("AudioPlayer") then
        -- Scale this sound's volume by the player's setting for its group
        desc.Volume *= player:GetAttribute(desc:GetAttribute("soundGroup"))
    end
end)

Is that what I’d have to do, or is there a better method?

You can use AudioFaders for this. If you create one for each “sound group” you have and wire your AudioPlayers to them, you should be able to adjust the volume of each by changing the Volume property on the fader.
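To make that concrete, here is a minimal sketch of the non-spatial version of this setup. It assumes each AudioPlayer stores its group name in a `soundGroup` attribute (that attribute name, and the `getFader`/`wireToGroup` helpers, are made up for illustration):

```lua
local SoundService = game:GetService("SoundService")

-- One AudioFader per "sound group", created lazily
local faders = {}
local function getFader(groupName: string): AudioFader
    local fader = faders[groupName]
    if not fader then
        fader = Instance.new("AudioFader")
        fader.Name = groupName
        fader.Parent = SoundService
        faders[groupName] = fader
    end
    return fader
end

-- Route an AudioPlayer into its group's fader with a Wire
local function wireToGroup(audioPlayer: AudioPlayer)
    local groupName = audioPlayer:GetAttribute("soundGroup")
    if not groupName then
        return
    end
    local wire = Instance.new("Wire")
    wire.SourceInstance = audioPlayer
    wire.TargetInstance = getFader(groupName)
    wire.Parent = audioPlayer
end
```

With this in place, applying a volume setting is a single property write, e.g. `getFader("SFX").Volume = 0.5`, rather than touching each AudioPlayer individually.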

Is this really the best solution the new API can offer? I'd have to create a new AudioFader for every single gun in the game, whereas with SoundGroups I could just have ONE for every single gun and still have their sounds emit correctly.

Using a single AudioFader caused the viewmodel sounds to play on every other player’s gun, which would also mean that the sound of everyone else firing would also play from the viewmodel.

I don’t want to have to create a million AudioFaders just for one setting I had working fine on the old API to function properly, it’d be an absolute nightmare to manage.

Can you elaborate on why using a single AudioFader as a group doesn’t work? It should be exactly equivalent to using a SoundGroup. Under the hood, a SoundGroup is just

Sound ->
Sound -> SoundGroup -> SoundGroup -> ...
Sound ->

and an AudioFader should behave the same

AudioPlayer ->
AudioPlayer -> AudioFader -> AudioFader -> ...
AudioPlayer ->

If you’re emitting sounds spatially, it’s a bit trickier but it should still be possible currently. SoundGroups apply after spatialization occurs:

Sound -> (internal panning) ->
Sound -> (internal panning) -> SoundGroup -> SoundGroup -> ...
Sound -> (internal panning) ->

so to get equivalent behavior in the wiring API with 3D sounds, you’ll want to make sure that the AudioFader occurs after an AudioListener:

AudioPlayer -> AudioEmitter 
AudioPlayer -> AudioEmitter / AudioListener -> AudioFader -> AudioFader -> ...
AudioPlayer -> AudioEmitter

By default, AudioListeners pick up all AudioEmitters, but you can use separate listeners with matching AudioInteractionGroup values to group them together.
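The post-spatialization routing above could look something like this sketch, where a dedicated listener only hears emitters tagged with the same interaction group (the `"Guns"` group name and the `wire`/`attachEmitter` helpers are assumptions for the example):

```lua
local SoundService = game:GetService("SoundService")

local GROUP = "Guns" -- hypothetical interaction group name

local function wire(src: Instance, dst: Instance)
    local w = Instance.new("Wire")
    w.SourceInstance = src
    w.TargetInstance = dst
    w.Parent = dst
end

-- A listener that only picks up emitters in the same interaction group
local listener = Instance.new("AudioListener")
listener.AudioInteractionGroup = GROUP
listener.Parent = workspace.CurrentCamera

-- The shared fader for this group, then out to the device
local gunFader = Instance.new("AudioFader")
gunFader.Parent = SoundService

local output = Instance.new("AudioDeviceOutput")
output.Parent = SoundService

wire(listener, gunFader)
wire(gunFader, output)

-- Per gun: AudioPlayer -> AudioEmitter tagged with the group
local function attachEmitter(part: BasePart, audioPlayer: AudioPlayer)
    local emitter = Instance.new("AudioEmitter")
    emitter.AudioInteractionGroup = GROUP
    emitter.Parent = part
    wire(audioPlayer, emitter)
end
```

The key point is that the fader sits after the listener, so it scales the already-spatialized mix for the whole group, mirroring how a SoundGroup applies after panning.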

Let me know what problems you encounter - this “SoundGroup”-type workflow is one that should absolutely be possible with the audio API.

Well, it works, but holy is it confusing, and oh do I yearn for an easier method.

I don't think I could have ever found this out myself; the documentation just isn't there. The only thing I read about interaction groups was a single sentence. The amount of hurdles to jump through to get the same behavior as one instance is just too much for most people to find out on their own, imo. -w-

If I want to set an equalizer to create a muffled effect, do I have to set it for each individual AudioPlayer/AudioEmitter? I'm trying to wire it after the AudioFader based on your wiring, but the AudioEqualizer isn't working:

  • From the Sound
    AudioPlayer → AudioEmitter

  • From the Listener
    AudioListener → AudioFader (Group 1) → AudioEqualizer (Muffler) → AudioFader (Master)
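For reference, a sketch of that listener-side chain in code might look like the following. The instance names are illustrative, the equalizer gain values are just assumed starting points for a muffled sound, and it assumes an AudioListener already exists on the camera:

```lua
local SoundService = game:GetService("SoundService")

local listener = workspace.CurrentCamera:FindFirstChildOfClass("AudioListener")

local groupFader = Instance.new("AudioFader")   -- "Group 1"
local muffler = Instance.new("AudioEqualizer")  -- the muffle effect
local masterFader = Instance.new("AudioFader")  -- "Master"
local output = Instance.new("AudioDeviceOutput")

-- Cut the high band to muffle the mix (values are assumptions)
muffler.HighGain = -40
muffler.MidGain = -10

for _, inst in {groupFader, muffler, masterFader, output} do
    inst.Parent = SoundService
end

local function wire(src: Instance, dst: Instance)
    local w = Instance.new("Wire")
    w.SourceInstance = src
    w.TargetInstance = dst
    w.Parent = dst
end

-- Listener → group fader → equalizer → master fader → device output
wire(listener, groupFader)
wire(groupFader, muffler)
wire(muffler, masterFader)
wire(masterFader, output)
```

Note the chain must terminate in an AudioDeviceOutput (or some other audible sink); if the last fader isn't wired to an output, the whole chain will be silent, which is a common reason an inserted AudioEqualizer appears to "not work."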

The wiring you’re describing should work. What specifically is not working when you’re trying this?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.