A while back, I shifted my game's audio from Sound instances to the new Audio API.
This is all working fine and dandy, and I don't need help wiring things up; however, my game has sound settings that let players alter the volumes of sounds.
Previously, my method of doing this was with SoundGroups, which to my knowledge do not work with the new Audio API.
I was thinking of just checking when a descendant is added to workspace and using Attributes to update the sounds accordingly, but surely there's a better solution, right?
i.e., this is roughly what my code would resemble (of course I'd save the default volumes in attributes, etc.):
workspace.DescendantAdded:Connect(function(desc)
	if desc:IsA("AudioPlayer") then
		-- scale the sound by the player's saved volume setting for its group
		desc.Volume *= player:GetAttribute(desc:GetAttribute("soundGroup"))
	end
end)
Is that what I’d have to do, or is there a better method?
You can use AudioFaders for this. If you create one for each “sound group” you have and wire your AudioPlayers to them, you should be able to adjust the volume of each by changing the Volume property on the fader.
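A minimal sketch of that suggestion, assuming non-spatial sounds for simplicity (the fader and group names here are placeholders, and the output is wired straight to an AudioDeviceOutput rather than per-emitter):

```lua
-- One AudioFader acts as the "sound group"; every AudioPlayer wired into it
-- is scaled by the fader's Volume property.
local SoundService = game:GetService("SoundService")

local gunFader = Instance.new("AudioFader")
gunFader.Name = "GunsGroup" -- placeholder group name
gunFader.Volume = 1
gunFader.Parent = SoundService

local deviceOut = Instance.new("AudioDeviceOutput")
deviceOut.Parent = SoundService

local function routeThroughGroup(audioPlayer)
	-- AudioPlayer -> fader
	local toFader = Instance.new("Wire")
	toFader.SourceInstance = audioPlayer
	toFader.TargetInstance = gunFader
	toFader.Parent = audioPlayer

	-- fader -> device output
	local toOutput = Instance.new("Wire")
	toOutput.SourceInstance = gunFader
	toOutput.TargetInstance = deviceOut
	toOutput.Parent = deviceOut
end

-- Later, a single property change adjusts every wired sound:
-- gunFader.Volume = 0.5
```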
Is this really the best solution the new API can offer? I'd have to create a new AudioFader for every single gun in the game, whereas with SoundGroups I could just have ONE for every single gun and still have their sounds emit correctly.
Using a single AudioFader caused the viewmodel sounds to play on every other player's gun, which also meant the sound of everyone else firing played from the viewmodel.
I don't want to have to create a million AudioFaders just for one setting that worked fine on the old API; it'd be an absolute nightmare to manage.
Can you elaborate on why using a single AudioFader as a group doesn't work? It should be exactly equivalent to using a SoundGroup. Under the hood, a SoundGroup is essentially just a fader that every Sound assigned to it routes through.
Well, it works, but wow is it confusing, and I do yearn for an easier method.
I don't think I could ever have found this out myself; the documentation just isn't there. The only thing I read about interaction groups was a single sentence. The number of hurdles to jump through to get the same behavior as one instance is just too much for most people to figure out on their own, imo. -w-
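For anyone else landing here: the viewmodel-bleed problem above can be isolated with the AudioInteractionGroup string property, which exists on both AudioEmitter and AudioListener. Emitters and listeners only interact when their group strings match. A rough sketch, with made-up group names:

```lua
-- Viewmodel sounds get their own interaction group so only the local
-- listener for that group can hear them.
local viewmodelEmitter = Instance.new("AudioEmitter")
viewmodelEmitter.AudioInteractionGroup = "LocalViewmodel" -- placeholder name
viewmodelEmitter.Parent = workspace.CurrentCamera

local viewmodelListener = Instance.new("AudioListener")
viewmodelListener.AudioInteractionGroup = "LocalViewmodel" -- hears only matching emitters
viewmodelListener.Parent = workspace.CurrentCamera

-- Emitters/listeners left on the default group ("Default") are unaffected,
-- so world sounds keep working as before.
```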
If I want to add an equalizer to create a muffled effect, do I have to set one up for each individual AudioPlayer/AudioEmitter? I'm trying to wire it after the AudioFader based on your wiring, but the AudioEqualizer isn't working:
From the Sound:
AudioPlayer → AudioEmitter

From the Listener:
AudioListener → AudioFader (Group 1) → AudioEqualizer (Muffler) → AudioFader (Master)
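The listener-side chain above can be built with Wire instances like so. This is a sketch under the assumption that everything ultimately feeds an AudioDeviceOutput; the instance names and the -40 dB HighGain value are illustrative, not from the thread:

```lua
-- Listener-side processing chain:
-- AudioListener -> group fader -> equalizer (muffle) -> master fader -> output
local SoundService = game:GetService("SoundService")

local listener = Instance.new("AudioListener")
listener.Parent = workspace.CurrentCamera

local groupFader = Instance.new("AudioFader")
groupFader.Name = "Group1" -- placeholder
groupFader.Parent = SoundService

local muffler = Instance.new("AudioEqualizer")
muffler.HighGain = -40 -- cut high frequencies for a muffled feel (assumed value)
muffler.Parent = SoundService

local masterFader = Instance.new("AudioFader")
masterFader.Name = "Master"
masterFader.Parent = SoundService

local deviceOut = Instance.new("AudioDeviceOutput")
deviceOut.Parent = SoundService

-- Small helper: connect one audio instance's output to another's input.
local function wire(src, dst)
	local w = Instance.new("Wire")
	w.SourceInstance = src
	w.TargetInstance = dst
	w.Parent = dst
end

wire(listener, groupFader)
wire(groupFader, muffler)
wire(muffler, masterFader)
wire(masterFader, deviceOut)
```

Because the equalizer sits on the listener side of the graph, it applies to everything that listener hears; there is no need to create one per AudioPlayer or AudioEmitter.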