A simple guide to the Audio API

The Audio API

A simple guide to it, since the official docs don’t explain this very well.

Basics

The new Audio API is built around Wires and a graph structure. It follows a basic wiring-graph setup: sources route through effects into outputs.

For audio to be heard by a client, it must route to an AudioDeviceOutput. An AudioDeviceOutput can be placed in the Workspace by the server and will be heard by all players, unless filtered with its Player property.
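Here’s a minimal sketch of that routing, an AudioPlayer wired straight into an AudioDeviceOutput that every player hears (the asset ID below is just a placeholder):

```lua
-- Server script: play an asset globally through an AudioDeviceOutput.
local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0000000" -- placeholder, use a real asset
audioPlayer.Parent = workspace

local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace

-- A Wire carries audio from its SourceInstance to its TargetInstance.
local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer
wire.TargetInstance = output
wire.Parent = output

audioPlayer:Play()
```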

Creating a 3D sound

To play a sound in the Workspace, use the AudioEmitter class. An emitter is an output source, so for it to be heard you also need an AudioListener, which can be placed under anything that has a 3D world CFrame (BaseParts, the Camera, and Attachments). Emitters have attenuation built in, which lets you define how the sound rolls off as a player gets farther away from it.

Audio can route directly to an AudioDeviceOutput and skip the Emitter/Listener step if you’d rather have a globally heard sound.

The Listener should then be routed to the AudioDeviceOutput to be heard by a player.
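The full 3D chain can be sketched like this — player → emitter on the server, listener → device output on the client (the part name and asset ID are placeholders I made up):

```lua
-- Server script: emit a sound from a part in the workspace.
local part = workspace.RadioPart -- hypothetical part

local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0000000" -- placeholder
audioPlayer.Parent = part

local emitter = Instance.new("AudioEmitter")
-- Full volume at 0 studs, fading to silence at 100 studs.
emitter:SetDistanceAttenuation({ [0] = 1, [100] = 0 })
emitter.Parent = part

local toEmitter = Instance.new("Wire")
toEmitter.SourceInstance = audioPlayer
toEmitter.TargetInstance = emitter
toEmitter.Parent = emitter

audioPlayer:Play()
```

```lua
-- LocalScript: listen from the camera and route to the device output.
local listener = Instance.new("AudioListener")
listener.Parent = workspace.CurrentCamera

local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = listener
toOutput.TargetInstance = output
toOutput.Parent = output
```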

AudioInteractionGroups

AudioInteractionGroup is a property on AudioEmitter and AudioListener that controls what hears what. It can be useful for filtering listeners down to a specific set of emitters if needed.
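For example, something like this (the part name is a placeholder) keeps a pair of instances talking only to each other’s group:

```lua
-- Emitters and listeners only interact within matching groups.
local emitter = Instance.new("AudioEmitter")
emitter.AudioInteractionGroup = "Radio"
emitter.Parent = workspace.SpeakerPart -- hypothetical part

local listener = Instance.new("AudioListener")
listener.AudioInteractionGroup = "Radio" -- hears only "Radio" emitters
listener.Parent = workspace.CurrentCamera
```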

Voice Chat

The new Audio API can modify Voice Chat data, to do this, use the AudioDeviceInput object. This is an input source that can be wired through effects and to an output as needed.

Here’s a wiring graph that applies equalization to incoming voice chat data and routes it to an emitter; it’s very simple!
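In code, that graph might look roughly like this (the player variable and part name are placeholders):

```lua
-- Server script: take a player's mic, EQ it, and emit it from a part.
local input = Instance.new("AudioDeviceInput")
input.Player = somePlayer -- the speaking player (assumed)
input.Parent = somePlayer

local eq = Instance.new("AudioEqualizer")
eq.HighGain = -12 -- cut the high band; gains are in decibels
eq.Parent = input

local emitter = Instance.new("AudioEmitter")
emitter.Parent = workspace.SpeakerPart -- hypothetical part

local toEq = Instance.new("Wire")
toEq.SourceInstance = input
toEq.TargetInstance = eq
toEq.Parent = eq

local toEmitter = Instance.new("Wire")
toEmitter.SourceInstance = eq
toEmitter.TargetInstance = emitter
toEmitter.Parent = emitter
```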

I’m unsure what happens when a client isn’t permitted to use voice chat: if I route an AudioDeviceInput to an AudioDeviceOutput on the server, directly or through effects, will clients who can’t hear voice chat data hear it or not? And what if an AudioPlayer or Listener adds extra data to the source here?

Effects

Effects are the main workhorse of the new Audio API; they modify data between the input and output sources — for example, applying extra reverb or changing the volume of a sound.
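As a sketch, an effect just slots into the wiring chain between a source and an output (placeholder asset ID again):

```lua
-- Server script: source -> reverb -> output.
local source = Instance.new("AudioPlayer")
source.AssetId = "rbxassetid://0000000" -- placeholder
source.Parent = workspace

local reverb = Instance.new("AudioReverb")
reverb.Parent = source

local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace

local toReverb = Instance.new("Wire")
toReverb.SourceInstance = source
toReverb.TargetInstance = reverb
toReverb.Parent = reverb

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = reverb
toOutput.TargetInstance = output
toOutput.Parent = output

source:Play()
```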


Great guide!

Something I’ve always wanted to know is whether AudioDeviceInput can detect if the player is speaking in-game… I see there’s an experience called “Voice Control” that manages to do it. I tried checking out Q&As from #help-and-feedback:scripting-support and it led to this API.
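One way this might be approximated (untested sketch; the input variable is assumed to be an AudioDeviceInput for the player): wire the input into an AudioAnalyzer and poll its loudness.

```lua
-- Rough speaking detection: watch the analyzer's RMS level.
local RunService = game:GetService("RunService")

local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = input -- an AudioDeviceInput for the player (assumed)

local wire = Instance.new("Wire")
wire.SourceInstance = input
wire.TargetInstance = analyzer
wire.Parent = analyzer

RunService.Heartbeat:Connect(function()
	if analyzer.RmsLevel > 0.01 then -- threshold chosen arbitrarily
		print("player seems to be speaking")
	end
end)
```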


If possible, can I create an audio system (with the Voice Chat API) like a radio, where only people on a specific team can hear it, no matter how far away they are? (Most likely I’ll make the audio cut out a little depending on signal strength.)

You’ll need to set VoiceChatService.UseAudioApi to Enabled, then create an AudioDeviceInput and assign its Player property to said player.

The VoiceChatService creates its own AudioDeviceInput in the player for built-in voice chat.
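A rough sketch of the team-radio idea, assuming UseAudioApi is already Enabled: route the speaker’s mic to per-player AudioDeviceOutputs filtered to teammates only (the function name is mine, not part of the API):

```lua
-- Server script: broadcast one player's mic to their teammates only.
local Players = game:GetService("Players")

local function broadcastToTeam(speaker)
	local input = Instance.new("AudioDeviceInput")
	input.Player = speaker
	input.Parent = speaker

	for _, teammate in Players:GetPlayers() do
		if teammate.Team == speaker.Team and teammate ~= speaker then
			local output = Instance.new("AudioDeviceOutput")
			output.Player = teammate -- only this player hears it
			output.Parent = teammate

			local wire = Instance.new("Wire")
			wire.SourceInstance = input
			wire.TargetInstance = output
			wire.Parent = output
		end
	end
end
```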
