Is it possible to make audio emitters emit their sound with X delay time?
For example, let's say I use a single AudioPlayer:Play(), with the audio player connected to different emitters.
@WoloPoints You could wire the AudioPlayer to several AudioEchos, one before each AudioEmitter.
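A minimal sketch of that wiring, untested and with assumed placeholder names (the asset id and the two parts, PartA and PartB, are stand-ins). Each emitter gets its own AudioEcho; muting the echo's dry level (assuming -80 dB is effectively silent) leaves only the delayed copy:

```lua
-- One AudioPlayer feeding two emitters, each behind its own delay.
local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset
player.Parent = workspace

local function emitThrough(part, delaySeconds)
	-- AudioEcho used as a pure delay: mute the dry path, no feedback
	local echo = Instance.new("AudioEcho")
	echo.DelayTime = delaySeconds
	echo.DryLevel = -80 -- assumed low enough to silence the undelayed signal
	echo.Feedback = -80 -- single delayed copy, no repeats
	echo.Parent = part

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = part

	local wireIn = Instance.new("Wire")
	wireIn.SourceInstance = player
	wireIn.TargetInstance = echo
	wireIn.Parent = echo

	local wireOut = Instance.new("Wire")
	wireOut.SourceInstance = echo
	wireOut.TargetInstance = emitter
	wireOut.Parent = emitter
end

emitThrough(workspace.PartA, 0)    -- plays immediately
emitThrough(workspace.PartB, 0.25) -- plays 250 ms later

player:Play()
```

One Play() call then reaches every emitter, each offset by its echo's DelayTime.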
In the old system I just have to create a master sound group, then a child sound group for the gun sound effects. Now the gun SFX group can have its volume set individually while also adhering to the master volume because of how it multiplies all the volumes together.
What am I supposed to do with the new API? Create a listener entirely just for gun sound effects that then connects to a fader for the gun sound effect volume, then that fader to the master fader, which finally connects to the AudioDeviceOutput? That's so unnecessarily cumbersome.
@Fezezen yes, you'd create an AudioListener to hear the gunshots, then apply volume/effects adjustments after the listener. It's definitely more instances, but this is what Sound + SoundGroup were doing under the hood, hiding the details.
The big thing this buys you is the ability to have branching signal flow. In the Sound API, many sounds can route to one SoundGroup – but you couldn’t have one Sound go to several parallel SoundGroups.
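A sketch of the chain described above, untested; the fader volumes are illustrative, and parenting the listener to the camera is an assumption about where you want it to hear from. Because each AudioFader scales the signal passing through it, chaining a gun fader into a master fader reproduces the multiplied volumes of nested SoundGroups:

```lua
-- listener -> gun fader -> master fader -> device output
local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace

local masterFader = Instance.new("AudioFader")
masterFader.Volume = 0.8 -- master volume
masterFader.Parent = output

local gunFader = Instance.new("AudioFader")
gunFader.Volume = 0.5 -- effective gun volume = 0.5 * 0.8, like nested SoundGroups
gunFader.Parent = masterFader

local gunListener = Instance.new("AudioListener")
gunListener.Parent = workspace.CurrentCamera

local function wire(from, to)
	local w = Instance.new("Wire")
	w.SourceInstance = from
	w.TargetInstance = to
	w.Parent = to
end

wire(gunListener, gunFader)
wire(gunFader, masterFader)
wire(masterFader, output)
```

Branching falls out of the same primitive: wiring one source into two faders gives it two parallel mix paths, which the Sound/SoundGroup API could not express.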
On another unrelated note, we’re struggling with the setup here because of the way the emitter is the source of the sound at the end of the pipeline. With Fmod Studio, you can (at least conceptually) emit lots of sounds in different locations - such as with the scatterer instrument - with a single chain of effects/mixers. The spatialisation is then applied as the last step, after the effects have been applied (in principle).
With the way your system has been implemented, if we wanted to scatter sounds around at different positions, all sharing the same effect chain, we would have to duplicate that whole chain for each sound we play. This feels sub-optimal.
@Ed_Win5000 you could give the AudioEmitters a particular AudioInteractionGroup – this constrains which listeners are capable of hearing them. Then, if you make a single AudioListener with the same interaction group, and connect that to the reverb, you don't have to do the reverb pre-emission.
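A sketch of that setup, untested; the group name "Scatter" and the camera parent are assumptions. Each scattered emitter is spatialised at its own part, but a single interaction-group listener picks them all up, so the shared effect runs once, post-emission:

```lua
-- Many positioned emitters, one shared post-emission effect chain.
local function scatterEmitter(part)
	local emitter = Instance.new("AudioEmitter")
	emitter.AudioInteractionGroup = "Scatter" -- assumed group name
	emitter.Parent = part
	return emitter
end

-- A listener restricted to the same group hears only those emitters
local scatterListener = Instance.new("AudioListener")
scatterListener.AudioInteractionGroup = "Scatter"
scatterListener.Parent = workspace.CurrentCamera

local reverb = Instance.new("AudioReverb")
reverb.Parent = workspace

local wire = Instance.new("Wire")
wire.SourceInstance = scatterListener
wire.TargetInstance = reverb
wire.Parent = reverb
-- reverb can then be wired onward to an AudioDeviceOutput
```

The effect chain after the listener is built once, regardless of how many emitters are scattered around.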
I can accept the way it's currently structured, but some of my sounds had this crackle, which seems weird to me personally.
@pankii_kust we found a bug that causes freshly-created emitters/listeners to “click” when they are using a custom distance or angle attenuation curve – that should be patched soon