I’m not sure if this is the right category but what is the SoundGroup replacement for the new Audio API?
I don’t know a lot about the new Audio API, but I think AudioInteractionGroup?
Hi, the SoundGroup instance doesn’t have a direct analogue in the Audio API. Instead, any effects that you place under a SoundGroup have analogous instances that you can string together using Wires.
For example, if you had a SoundGroup containing a ReverbSoundEffect followed by a CompressorSoundEffect, you can replace that with an AudioReverb, an AudioCompressor, and three Wires, like so:
Before:
```
Sound -> ReverbSoundEffect -> CompressorSoundEffect
```
After:
```
   Wire           Wire             Wire
    |              |                |
    V              V                V
AudioPlayer -> AudioReverb -> AudioCompressor -> AudioDeviceOutput
```
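As a rough sketch of how that graph could be built from a script (the `connect` helper and the placeholder asset id are mine, not from the original reply; treat the whole thing as illustrative, not canonical):

```lua
-- Hypothetical helper: connect two audio nodes with a Wire
local function connect(source: Instance, target: Instance)
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = source
end

local player = Instance.new("AudioPlayer")
player.Asset = "rbxassetid://0" -- placeholder: use your own asset id

local reverb = Instance.new("AudioReverb")
local compressor = Instance.new("AudioCompressor")
local output = Instance.new("AudioDeviceOutput")

-- The three Wires: player -> reverb -> compressor -> output
connect(player, reverb)
connect(reverb, compressor)
connect(compressor, output)

reverb.Parent = player
compressor.Parent = player
output.Parent = player
player.Parent = workspace

player:Play()
```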
Let me know if that helps clear things up!
There should likely be an equivalent to make the task easier. Most of us don’t really need the additional flexibility that using Wires gives us, and it adds complexity and a learning curve when converting old Sound + SoundGroup setups to the Audio API.
We should have a SoundGroup (or AudioGroup) equivalent for those of us who just use them for basic stuff (in my case, audio settings that let the client set a different volume for each type of sound). I’m sure it’s 100% doable because, iirc, SoundGroup already works like the Wires do, just internally.
If the only thing you need from a SoundGroup is the volume (for audio settings), you can use AudioFaders. For example, if you have this sort of setup:
Before:
```
Sound -> MusicSoundGroup -> MasterSoundGroup
```
You could do this in the Audio API:

After:
```
   Wire             Wire               Wire
    |                |                  |
    V                V                  V
AudioPlayer -> MusicAudioFader -> MasterAudioFader -> AudioDeviceOutput
```
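As a sketch, that fader chain could be assembled like this (the fader names and volume values are illustrative, and the `connect` helper is mine, not part of the API):

```lua
-- Hypothetical helper: connect two audio nodes with a Wire
local function connect(source: Instance, target: Instance)
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = source
end

local player = Instance.new("AudioPlayer")

local musicFader = Instance.new("AudioFader")
musicFader.Name = "MusicAudioFader"
musicFader.Volume = 0.8 -- plays the role of MusicSoundGroup.Volume

local masterFader = Instance.new("AudioFader")
masterFader.Name = "MasterAudioFader"
masterFader.Volume = 1 -- plays the role of MasterSoundGroup.Volume

local output = Instance.new("AudioDeviceOutput")

connect(player, musicFader)
connect(musicFader, masterFader)
connect(masterFader, output)
```

Any other AudioPlayer you wire into MusicAudioFader then shares the same music/master volume controls, which is the SoundGroup-like behaviour.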
I do agree that the new equivalent adds some additional complexity. We’re looking for ways to ease this process while maintaining the system’s flexibility, but if you have thoughts let us know!
Hey! Thanks for your answer, it will definitely help with the transition, but I have another question. For sounds that are played dynamically (for example, voices resulting from interactions), how would that work script-wise?
Currently in my game, the Sounds are cloned, parented to the character’s Head, played, and then removed, all from a server Script.
From my understanding of the new setup, if I want to play these sounds, I’d need to clone both the AudioPlayer* and the AudioDeviceOutput just to play a single sound, which seems extremely… inefficient.
* Because more than 1 player can play the sound at the same time
Writing this out, I feel like completely ignoring the new structure and dynamically creating AudioPlayers/outputs from Sounds using a module seems like the easiest onboarding path for existing experiences (or at least for my case). Here’s what I mean:
```lua
local function PlayAudioFromSound(sound: Sound, parent: Instance)
	-- Create an AudioPlayer for the same asset as the legacy Sound
	local audioPlayer = Instance.new("AudioPlayer")
	audioPlayer.Asset = sound.SoundId

	-- Each player gets its own output...
	local output = Instance.new("AudioDeviceOutput")
	output.Parent = audioPlayer

	-- ...connected by a Wire
	local outputWire = Instance.new("Wire")
	outputWire.SourceInstance = audioPlayer
	outputWire.TargetInstance = output
	outputWire.Parent = audioPlayer

	-- Approximate the SoundGroup's volume contribution
	local soundGroup = sound.SoundGroup
	if soundGroup then
		audioPlayer.Volume *= soundGroup.Volume
	end

	audioPlayer.Parent = parent
	audioPlayer:Play()

	-- Clean up when playback finishes (children are destroyed too)
	audioPlayer.Ended:Once(function()
		audioPlayer:Destroy()
	end)
end
```
IN MY OPINION:
- What brings the complexity is the flexibility of the new system, flexibility that not everyone needs.
- AudioDeviceOutputs feel redundant (why do they exist? Why can’t an AudioPlayer play on its own, with no wiring at all?)
On another note, offering a plugin that converts existing experiences (mine goes back to 2017!!) from the old Sound API to the new Audio API would be a godsend, although my previous concerns remain.
And I just realized how long my post actually became. Thanks for reading my TED talk on the new Audio API!
Hi, thanks for the input! Understood about where you feel the complexity in the system comes from, and your hope to see a legacy Sound API → Audio API conversion tool.
To answer your questions:
For sounds that are dynamically played (for example voices resulting from interactions), how would that work script-wise?
The way you mentioned (cloning an AudioPlayer and its AudioDeviceOutput) would work! You can also hold onto one global AudioDeviceOutput and just clone the AudioPlayer and its Wire. If you keep the Wire parented under the AudioPlayer, it could look like this:
```lua
local newPlayer = audioPlayerWithWireUnderIt:Clone()
newPlayer.Parent = desiredParent
newPlayer.Asset = desiredAssetId
newPlayer:Play()
```
Why does AudioDeviceOutput exist?
The system is designed not to make any assumptions about what audio should be routed where. In the simplest case, you may want all AudioPlayers to be sent to the speakers directly, but sometimes you may want audio to be emitted spatially from an AudioEmitter in 3D space, or analyzed by an AudioAnalyzer to produce a visualizer, for example. Basically the design chooses not to make “output this audio to the speakers” a special case among those possibilities, but that makes it a bit harder to set up the simplest scenario.
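For instance, rerouting the same player to an AudioEmitter instead of an AudioDeviceOutput makes the playback positional. A sketch (the part reference and the camera comment are assumptions on my part, not from the original reply):

```lua
-- Emit the player's audio from a part in 3D space instead of
-- sending it straight to the speakers
local emitter = Instance.new("AudioEmitter")
emitter.Parent = workspace.SomePart -- assumed part name

local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer -- an existing AudioPlayer
wire.TargetInstance = emitter
wire.Parent = audioPlayer

-- To actually hear it, an AudioListener still has to be wired to an
-- AudioDeviceOutput somewhere (e.g. under the camera)
```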
That answer doesn’t solve your problem, but I hope it helps with understanding!
Hello! Thanks again for the answer.
The output can’t be static (without requiring lots of changes), but I think the closest thing to the old Sound behaviour would be to have one as a child of each AudioPlayer, pre-wired up through the fader(s) like you mentioned in your other post.
I’ll play around with all that. Thanks for your help!