New Audio API [Beta]: Elevate Sound and Voice in Your Experiences

You have to change AudioListenerAvailable too; AudioListenerOwn is specifically there to pick up the player's own microphone.
It's split up like that so you can hear your own voice through the intercom without also hearing your own voice through your character.

I forget the specifics of how AudioListenerSend worked exactly, but I'm pretty sure that AudioListenerAvailable, AudioListenerOwn, and IntercomSoundEffects are all fed into that audio emitter, which is then fed into the send, which is then fed into GlobalIntercom.
Disabling the "MuteSelf" LocalScript should help in debugging, because disabling it makes your voice come through exactly as it does for everyone else.
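
For anyone trying to reproduce a setup like this, here's a minimal sketch of how such a graph could be wired with Wire instances. The instance names mirror the ones mentioned above, but the exact routing in the original intercom may differ, and `workspace.Intercom` is just a placeholder:

```lua
-- Rough sketch (not the original intercom's code): every Wire routes audio
-- from its SourceInstance to its TargetInstance.
local intercom = workspace.Intercom -- placeholder model holding the setup

local listenerAvailable = intercom.AudioListenerAvailable -- hears everyone else
local listenerOwn = intercom.AudioListenerOwn             -- picks up your own microphone
local intercomEffects = intercom.IntercomSoundEffects     -- e.g. an AudioEqualizer / AudioDistortion stage
local intercomEmitter = intercom.AudioEmitter             -- the intercom speaker
local globalIntercom = intercom.GlobalIntercom            -- the send/broadcast output described above

local function wire(source: Instance, target: Instance)
	local w = Instance.new("Wire")
	w.SourceInstance = source
	w.TargetInstance = target
	w.Parent = target
end

-- Both listeners feed the shared effect stage...
wire(listenerAvailable, intercomEffects)
wire(listenerOwn, intercomEffects)
-- ...which feeds the local emitter and the global send in parallel.
wire(intercomEffects, intercomEmitter)
wire(intercomEffects, globalIntercom)
```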

I've kind of forgotten a lot of the specifics of how it works because I made the intercom the night the API came out and haven't really touched it since, sorry.

If you've got any questions, feel free to ask me.
Hope this helps

@ReallyLongArms Hello! I'm working on a large update utilizing the Audio API; is there any word on when the AudioChannelSplitter will be released publicly? We're pushing the limits of audio quality, and it's an essential tool for creating our own "volumetric" audio, basically a 3D audio system, with the new API. The new API has so many awesome tools, so I'm excited to see further updates.
Cheers,

  • Panzerv1

Hey @panzerv1, AudioChannelSplitter and AudioChannelMixer should be live now – please let me know if you run into any issues using them!

Oh awesome! I’ll report back if I find anything wrong. Excited to see what I can do with them!

Is it possible to get the audio frequency out of the analyzer at some point?

When I try to use an AudioDeviceInput, set it to my player, and set it to Allow, why does IsReady become false? When it's set to Deny, it becomes true.

The Allow/Deny setting determines how the engine interprets :SetUserIdAccessList – which is empty by default

An empty allow-list doesn’t allow anybody to hear this AudioDeviceInput, whereas an empty deny-list allows everybody to hear this AudioDeviceInput

So if you set AccessType to Allow but don’t add anybody to the allow-list, it’s effectively muted
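
To make that concrete, here's a small server-side sketch of the behaviour described above (the Script placement and the choice of UserIds are assumptions):

```lua
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
	local micInput = Instance.new("AudioDeviceInput")
	micInput.Player = player
	micInput.Parent = player

	-- Deny with an empty deny-list: everybody can hear this input.
	micInput.AccessType = Enum.AccessModifierType.Deny
	micInput:SetUserIdAccessList({})

	-- Allow with an empty allow-list would be effectively muted.
	-- To use Allow, list the UserIds that should hear it explicitly, e.g.:
	-- micInput.AccessType = Enum.AccessModifierType.Allow
	-- micInput:SetUserIdAccessList({player.UserId})
end)
```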

So, if I add my ID to the access list, connect an AudioEmitter to the AudioDeviceInput, and set my player's access to Allow, will IsReady become true, and will the AudioEmitter play whatever my microphone/AudioDeviceInput is picking up?

Hi there, I'm trying to follow the exact steps shown here, but I can't connect a wire from a listener to a fader. Did something change regarding how wires work? Thanks!
[ GIF for context ]

Never mind, I just didn't realize the audio fader's wire was already connected to the audio listener.
Moral of the story: don't stay up late :pray:

Are there any plans to increase the 50-player server size limit any time soon? This is holding us back.

Any way to send Wire audio data to the experience’s SoundService? Would help with globally effective settings like ambient reverb.

Is there a plugin or code snippet available to convert existing Sounds/Effects/SoundGroups to this new structure?

This is a ton of boilerplate code to rewrite if I want to accomplish this, since my SoundGroups are created at runtime. One Sound instance and one SoundGroup now need several Wires and other instances to work. SoundGroups were nice for managing all of this.

It's also weird that the property names are slightly changed, so if I want to refactor my code I have to change a bunch of property names despite them being the same thing (Playing → IsPlaying, Looped → Looping, SoundId → AssetId)?
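
For comparison, here's a rough before/after of those renames (the asset id is a placeholder, and the new-API version shows only the simplest non-spatial route to the device):

```lua
-- Old API: one Sound instance does everything.
local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0" -- placeholder asset id
sound.Looped = true
sound.Parent = workspace
sound:Play()
print(sound.Playing)

-- New API: the same sound needs a player, an output, and a wire.
local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0" -- SoundId -> AssetId
audioPlayer.Looping = true             -- Looped  -> Looping
audioPlayer.Parent = workspace

local deviceOutput = Instance.new("AudioDeviceOutput")
deviceOutput.Parent = workspace

local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer
wire.TargetInstance = deviceOutput
wire.Parent = deviceOutput

audioPlayer:Play()
print(audioPlayer.IsPlaying)           -- Playing -> IsPlaying
```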

Is it possible to make audio emitters emit their sound with a given delay time?

For example, let's say I use a single AudioPlayer:Play() with the audio player connected to different emitters. Is there a way to make every emitter play the sound with some delay? I need it for a musket-firing thing.
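
For context, this is the kind of setup being described: one AudioPlayer wired to several emitters, which all play in sync because they share the same stream (the folder name and asset id below are placeholders):

```lua
local musketShot = Instance.new("AudioPlayer")
musketShot.AssetId = "rbxassetid://0" -- placeholder
musketShot.Parent = workspace

-- One emitter per musket, all fed by the same player.
for _, musket in workspace.Muskets:GetChildren() do
	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = musket

	local wire = Instance.new("Wire")
	wire.SourceInstance = musketShot
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

musketShot:Play() -- every wired emitter fires at exactly the same time
```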

The blooming shimmering OS!!! (windows 11)

Someone really ought to make a module/library that abstracts all of these connections and instances into something more intuitive. I have a lot of trouble trying to wrap my head around how this all works and keeping track of all the wires/connections needed to finally output a sound the way I intend. A plugin to visualize it might help, but even then it would get confusing with the number of connections needed to achieve what could be done so simply with Sound, SoundGroup, and SoundEffects.

Say I want individual volume sliders for various kinds of sounds in my game – like if I wanted gun shots to have their own volume slider, but there's also a master volume slider that controls the volume of all sounds.
In the old system I just have to create a master SoundGroup, then a child SoundGroup for the gun sound effects. The gun SFX group can have its volume set individually while still respecting the master volume, because all the volumes multiply together.
What am I supposed to do with the new API? Create a listener entirely just for gun sound effects, connect it to a fader for the gun sound effect volume, then connect that fader to the master fader, which finally connects to the AudioDeviceOutput? That is so unnecessarily cumbersome.
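
For reference, a sketch of the fader chain being described – gun sounds → gun fader → master fader → device output – assuming the faders live under SoundService and the sliders simply set each fader's Volume:

```lua
local SoundService = game:GetService("SoundService")

local masterFader = Instance.new("AudioFader")
masterFader.Parent = SoundService

local gunFader = Instance.new("AudioFader")
gunFader.Parent = masterFader

local deviceOutput = Instance.new("AudioDeviceOutput")
deviceOutput.Parent = SoundService

local function wire(source: Instance, target: Instance)
	local w = Instance.new("Wire")
	w.SourceInstance = source
	w.TargetInstance = target
	w.Parent = target
end

wire(gunFader, masterFader)     -- gun volume feeds into the master volume
wire(masterFader, deviceOutput) -- master volume feeds the device

-- Every gun sound source (an AudioPlayer, or a listener dedicated to gun
-- emitters as described above) then gets wired into gunFader:
-- wire(gunshotPlayer, gunFader)

-- The sliders drive the faders; the gains multiply through the chain.
gunFader.Volume = 0.5
masterFader.Volume = 0.8
```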

Hi @ReallyLongArms, I'm still not quite convinced about having a lot of emitters for a single large object, but we'll give that a go when we get back to it. Moving the emitter around within the bounds to follow the players feels more feasible.

On another, unrelated note, we're struggling with the setup here because of the way the emitter is the source of the sound at the end of the pipeline. With FMOD Studio, you can (at least conceptually) emit lots of sounds in different locations – such as with the scatterer instrument – with a single chain of effects/mixers. The spatialisation is then applied as the last step, after the effects have been applied (in principle).

With the way your system has been implemented, if we want to scatter sounds around at different positions, all sharing the same effect chain, we have to duplicate that whole chain for each sound we play. This feels sub-optimal.

I don't know how FMOD handles this under the hood, but I imagine some resources can be shared by not having many instances of each effect.
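
To illustrate the duplication being described (a sketch, not the poster's actual code): each scattered one-shot currently needs its own player → effect → emitter chain, because spatialisation only happens at the emitter.

```lua
local function playScatteredSound(assetId: string, position: Vector3)
	-- Invisible anchor part so the emitter has a position in the world.
	local anchor = Instance.new("Part")
	anchor.Anchored = true
	anchor.Transparency = 1
	anchor.CanCollide = false
	anchor.Position = position
	anchor.Parent = workspace

	local player = Instance.new("AudioPlayer")
	player.AssetId = assetId
	player.Parent = anchor

	-- The effect chain that has to be duplicated per sound.
	local reverb = Instance.new("AudioReverb")
	reverb.Parent = anchor

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = anchor

	local toReverb = Instance.new("Wire")
	toReverb.SourceInstance = player
	toReverb.TargetInstance = reverb
	toReverb.Parent = anchor

	local toEmitter = Instance.new("Wire")
	toEmitter.SourceInstance = reverb
	toEmitter.TargetInstance = emitter
	toEmitter.Parent = anchor

	player:Play()
	player.Ended:Once(function()
		anchor:Destroy()
	end)
end
```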

@ReallyLongArms any plans to allow spectrums from voice chat? Just having peak and RMS levels is very limiting, and if I'm not wrong, default voice chat with dynamic heads includes lip syncing, which requires spectrums. Was this restricted for privacy reasons?

SoundGroups need to get added back to the new API.

I can accept the way it's currently structured, but some of my sounds have this weird crackle. If only I knew where I could fix this – the plain Sound object already handles it, and I love that my sounds don't play awfully with the old one.

For those who planned to use this new Audio API, I've made a comparison between the two.

That’s what AudioFader is made for