[Update] February 28, 2024
Hello Creators!
We’re excited to unveil the beta release of our new Audio API, which includes highly anticipated controls over sound and voice that @Doctor_Sonar previewed at RDC!
Have you ever wanted to emit a single sound from various 3D locations simultaneously? What about altering players’ voices? How about implementing team chat or creating a functional walkie-talkie? Our new API finally empowers you with the control over sound and voice chat that you’ve been asking for — and much more!
To make this level of control possible, we’re introducing many new instances. But conceptually, these instances fall into three categories: some that produce audio streams, some that consume audio streams, and some — such as effects — that do both. Wires connect these up to form a processing graph.
The new API’s modular design marks a departure from the existing Sound, SoundGroup, and SoundEffect instances. While the older APIs will remain active, the new API enables a suite of features that were previously impossible, like treating voice the same as any other audio source. This beta introduces new ways of thinking about sound design in Roblox. Some features may be familiar, but others — like the spatial simulation of voice audio and how it’s wired — may require some experimentation and getting used to.
Please read this post thoroughly before making changes to your existing experiences. Your feedback during this beta phase is critical as we continue to improve the API and make sound design even more powerful for developers.
To get started, follow the instructions below to learn how this API works and what has changed, then download this Audio API tutorial placefile to experiment with how your ideas may sound in practice. We are eager to see what you’ll build and are certain your work will be much cooler than any of the examples we’ve put together!
How to Use the Audio API
Getting Started
Before we begin, you must enable the “New Audio API” Beta Feature in Studio; this makes the new instances browsable and insertable.
The existing Sound instance functions as a file player; however, if parented to a Part or Attachment, it will also behave as a 3D audio emitter that can be heard by your listener. Previously, to play the same sound from multiple 3D locations simultaneously, your only option was to duplicate the playback. Today, Wires make it possible to work with these three logically distinct components (file player, 3D emitter, and listener) separately.
Let’s walk through an example of setting up a Public Address system.
Step 1: Create Some Emitters
Create some Parts or Models to act as speakers. Under each one, create an AudioEmitter instance. For organization, we’ve opted to put all of these in a folder.

AudioEmitters are points in 3D space that broadcast sound into the world.
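If you prefer to build this step from a script, it might look like the sketch below. The "Speakers" folder and the idea of tagging speakers by parenting them there are our own assumptions for illustration, not part of the API:

```lua
-- Create an AudioEmitter under every speaker part.
-- Assumes a Folder named "Speakers" in Workspace containing the speaker parts.
local speakers = workspace:WaitForChild("Speakers")

for _, part in speakers:GetChildren() do
	if part:IsA("BasePart") then
		local emitter = Instance.new("AudioEmitter")
		emitter.Parent = part -- the emitter broadcasts from this part's position
	end
end
```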
Step 2: Create Some Wires
Create one Wire per speaker, and set their TargetInstance property to the corresponding AudioEmitter. Wires can be anywhere in the DataModel, but we’ll add them as children of the AudioEmitters for organization.
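Scripted, this step might look like the following sketch (again assuming the speakers live in a "Speakers" folder, which is our own naming choice):

```lua
-- Create one Wire per AudioEmitter, pointing at that emitter.
local speakers = workspace:WaitForChild("Speakers")

for _, emitter in speakers:GetDescendants() do
	if emitter:IsA("AudioEmitter") then
		local wire = Instance.new("Wire")
		wire.TargetInstance = emitter -- this Wire feeds audio into the emitter
		wire.Parent = emitter -- parented here purely for organization
	end
end
```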
Step 3: Create an AudioPlayer
Create an AudioPlayer instance, and set the SourceInstance property of each Wire to it.

An AudioPlayer can be anywhere in the DataModel, but for organization, we’ll add it to the same folder as our Speakers.
AudioPlayers can load and play audio assets. Assign the AssetId property of the AudioPlayer to a suitable audio asset, and notice that its TimeLength and IsReady properties get filled in.
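As a script, this step might look like the sketch below. The asset ID is a placeholder (substitute one you have access to), and the "Speakers" folder is our own naming assumption:

```lua
-- Create an AudioPlayer, load an asset, and feed it to every existing Wire.
local speakers = workspace:WaitForChild("Speakers")

local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://1234567890" -- placeholder asset id
audioPlayer.Parent = speakers

for _, wire in speakers:GetDescendants() do
	if wire:IsA("Wire") then
		wire.SourceInstance = audioPlayer -- every speaker plays this one stream
	end
end
```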
Step 4: Create a Listener
Create an AudioListener; we’ll parent it to another part. Then create an AudioDeviceOutput – this can live anywhere, but we’ll parent it to the AudioListener.

Finally, create a Wire that connects the AudioListener to the AudioDeviceOutput, by setting its SourceInstance property to the listener, and TargetInstance to the device output. AudioListeners pick up their surroundings and record a signal that can be wired to other nodes. To render what the listener heard, we wired it directly to an AudioDeviceOutput.
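Scripted, this step might look like the following sketch (the "ListenerPart" name is our own assumption):

```lua
-- Create a listener on a part, and wire it to a device output so we can hear it.
local listenerPart = workspace:WaitForChild("ListenerPart")

local listener = Instance.new("AudioListener")
listener.Parent = listenerPart -- the listener hears from this part's CFrame

local output = Instance.new("AudioDeviceOutput")
output.Parent = listener -- parented here purely for organization

local wire = Instance.new("Wire")
wire.SourceInstance = listener -- what the listener hears...
wire.TargetInstance = output -- ...is rendered to the device output
wire.Parent = output
```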
Step 5: Try it out!
Add a script along the lines of:
local audioPlayer = script.Parent
audioPlayer.Looping = true
audioPlayer:Play()
as a child under the AudioPlayer – then press Play.
You should hear the same asset played from each speaker in sync – if you move the part hosting the AudioListener, it will be spatialized differently according to the part’s CFrame.
New Instances
In the walkthrough above, we demonstrated using AudioPlayer, AudioEmitter, AudioListener, and AudioDeviceOutput. These are just four of the instances that can be connected with Wires. The full list includes 15 instances:
AudioAnalyzer
AudioChorus
AudioCompressor
AudioDeviceInput
AudioDeviceOutput
AudioDistortion
AudioEcho
AudioEmitter
AudioEqualizer
AudioFader
AudioFlanger
AudioListener
AudioPitchShifter
AudioPlayer
AudioReverb
Example Use Cases
Working with Voice Input
To control voice with the beta Audio API, you’ll need to specifically enable it by navigating to Model > Advanced > Service, and inserting the VoiceChatService.
Doing so, you’ll notice two properties under VoiceChatService: UseAudioApi and EnableDefaultVoice. UseAudioApi has three possible values: Automatic, Disabled, and Enabled. In this phase of the beta API, Automatic equals Disabled; we do this to prevent unintentional changes to your existing experiences.
Setting UseAudioApi to Enabled creates new opportunities for routing voices by allowing the use of AudioDeviceInputs. If EnableDefaultVoice is enabled, VoiceChatService creates an AudioDeviceInput parented to each player, an AudioEmitter parented to each character model, and wires them together. Additionally, an AudioListener is parented to Workspace.CurrentCamera.
AudioDeviceInputs have a Player property, and can be scripted to determine whether they are Muted.
See this example below:
Sample code for Muting a Player
local function setMuted(player: Player, shouldBeMuted: boolean)
    local input = player:FindFirstChild("AudioDeviceInput")
    if input and input:IsA("AudioDeviceInput") then
        input.Muted = shouldBeMuted
    end
end
You can rewire them, adjust their properties, add effects, and more! If you don’t want any default voice behavior to be provided for you, the VoiceChatService.EnableDefaultVoice property can be turned off. As long as UseAudioApi is still set to Enabled, the behavior is entirely up to you.
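As one illustration of rewiring, the sketch below routes a player’s microphone through an AudioPitchShifter before it reaches their character’s emitter. The instance names follow the defaults described above, but the wiring choices are our own; note that the default Wire from input to emitter would also need to be removed or retargeted so the unprocessed voice isn’t heard as well:

```lua
-- Alter a player's voice by splicing a pitch shifter into their voice path.
local function addVoiceEffect(player: Player)
	local input = player:FindFirstChild("AudioDeviceInput")
	local character = player.Character
	local emitter = character and character:FindFirstChild("AudioEmitter")
	if not (input and emitter) then
		return
	end

	local shifter = Instance.new("AudioPitchShifter")
	shifter.Parent = input

	-- Microphone -> pitch shifter
	local inWire = Instance.new("Wire")
	inWire.SourceInstance = input
	inWire.TargetInstance = shifter
	inWire.Parent = shifter

	-- Pitch shifter -> the character's emitter
	local outWire = Instance.new("Wire")
	outWire.SourceInstance = shifter
	outWire.TargetInstance = emitter
	outWire.Parent = shifter
end
```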
In the fourth example in our Audio API tutorial, we used AudioDeviceInputs to implement chat via handheld radios:
Please note: In the new Audio API, AudioEmitters mimic real-world distance attenuation, meaning the volume fades as an AudioListener moves away from each emitter but doesn’t completely decay to zero. Previously, player voices were completely inaudible from more than 80 studs away. This means that if you opt in to the new APIs and make no other changes, your experience will sound different: voices will be audible from further away. We strongly suggest experimenting and testing with multiple voice users at different distances to achieve your desired results. For greater control over audio rolloff, we recommend using AudioEqualizers; you can find sample code in our API docs to get you started.
If you are interested in recapturing exactly the same behavior as before, you can use the sample code below:
Sample code for recreating the older voice rolloff
local function wireUp(source: Instance, target: Instance): Wire
    local wire = Instance.new("Wire")
    wire.SourceInstance = source
    wire.TargetInstance = target
    wire.Parent = target
    return wire
end

-- Insert `effect` between an existing wire's source and target
local function split(wire: Wire, effect: Instance)
    local target = wire.TargetInstance
    wire.TargetInstance = effect
    wireUp(effect, target)
end

local function onWireConnected(wire: Wire)
    -- Splice an AudioFader into the voice path so we can rescale its volume
    local fader = Instance.new("AudioFader")
    fader.Parent = wire
    split(wire, fader)
end

local function onWireAdded(wire: Wire)
    if wire.Connected then
        onWireConnected(wire)
        return
    end
    local connection: RBXScriptConnection? = nil
    connection = wire:GetPropertyChangedSignal("Connected"):Connect(function()
        if wire.Connected then
            onWireConnected(wire)
            if connection then
                connection:Disconnect()
            end
        end
    end)
end

local function onEmitterAdded(emitter: AudioEmitter)
    local wire = emitter:FindFirstChild("Wire")
    if wire and wire:IsA("Wire") then
        onWireAdded(wire)
    end
    emitter.ChildAdded:Connect(function(child)
        if child:IsA("Wire") then
            onWireAdded(child)
        end
    end)
end

local function onCharacterAdded(character: Model)
    local emitter = character:FindFirstChild("AudioEmitter")
    if emitter and emitter:IsA("AudioEmitter") then
        onEmitterAdded(emitter)
    end
    character.ChildAdded:Connect(function(child: Instance)
        if child:IsA("AudioEmitter") then
            onEmitterAdded(child)
        end
    end)
end

local function onPlayerAdded(player: Player)
    local character = player.Character
    if character then
        onCharacterAdded(character)
    end
    player.CharacterAdded:Connect(onCharacterAdded)
end

local players = game:GetService("Players")
for _, player in players:GetPlayers() do
    onPlayerAdded(player)
end
players.PlayerAdded:Connect(onPlayerAdded)

-- The legacy curve: quadratic fade from full volume at minDistance to silence at maxDistance
local function oldRollOff(from: Vector3, to: Vector3): number
    local distance = (to - from).Magnitude
    local minDistance = script:GetAttribute("RollOffMinDistance") or 7
    local maxDistance = script:GetAttribute("RollOffMaxDistance") or 80
    if maxDistance <= minDistance or distance < minDistance then
        return 1
    elseif distance > maxDistance then
        return 0
    end
    local linearGain = 1 - (distance - minDistance) / (maxDistance - minDistance)
    return linearGain * linearGain
end

-- The new curve: inverse-distance attenuation that never fully decays to zero
local function newRollOff(from: Vector3, to: Vector3): number
    local distance = (to - from).Magnitude
    local gain = 4 / math.max(1, distance)
    return math.clamp(gain, 0, 1)
end

local function updateCharacter(character: Model)
    local primaryPart = character.PrimaryPart
    if not primaryPart then
        return
    end
    local emitter = character:FindFirstChild("AudioEmitter")
    if not emitter then
        return
    end
    local wire = emitter:FindFirstChild("Wire")
    if not wire then
        return
    end
    local fader = wire:FindFirstChild("AudioFader")
    if not fader then
        return
    end
    local emitterPosition = primaryPart.Position
    local listenerPosition = workspace.CurrentCamera.CFrame.Position
    local newAttenuation = newRollOff(emitterPosition, listenerPosition)
    local oldAttenuation = oldRollOff(emitterPosition, listenerPosition)
    -- Cancel out the new curve and reapply the old one
    fader.Volume = oldAttenuation / newAttenuation
end

local function updatePlayer(player: Player)
    local character = player.Character
    if character then
        updateCharacter(character)
    end
end

local function update()
    for _, player in players:GetPlayers() do
        updatePlayer(player)
    end
end

while true do
    update()
    task.wait()
end
UseAudioApi is currently opt-in, allowing time for you to try the changes out. We will strive to make it enabled by default in the near future, but your feedback and adoption are critical to making those changes. Today, if you do not enable VoiceChatService.UseAudioApi, Voice Chat will retain the older implementation, and any AudioDeviceInput created will not produce any sound.
Multiple Listeners
In the past, our system was designed with the assumption that SoundService:SetListener would be the only audio listener in the world. However, with the introduction of AudioListeners in the new API, you can now spawn multiple listeners. Each of these listeners perceives the world from a unique CFrame, unlocking features like split-screen, portals, and more!
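As one possible arrangement, the sketch below places a listener at each of two parts and wires both into a single output; we are assuming here that a device output mixes multiple incoming Wires, and the part names are our own:

```lua
-- Two listeners, each hearing from its own vantage point, mixed into one output.
local vantageA = workspace:WaitForChild("PortalA")
local vantageB = workspace:WaitForChild("PortalB")

local output = Instance.new("AudioDeviceOutput")
output.Parent = game:GetService("SoundService")

for _, part in {vantageA, vantageB} do
	local listener = Instance.new("AudioListener")
	listener.Parent = part -- hears the world from this part's CFrame

	local wire = Instance.new("Wire")
	wire.SourceInstance = listener
	wire.TargetInstance = output -- both listeners feed the same output
	wire.Parent = listener
end
```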
Post-Listener Effects
You can add effects after an AudioListener hears things! Wiring such as AudioListener > Effects > AudioDeviceOutput can be used to implement underwater filtering, or room reverb on all 3D audio at once. Previously, this was only possible by carefully categorizing Sounds into SoundGroups, and it was difficult to make a catch-all solution.
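The AudioListener > Effect > AudioDeviceOutput chain described above might be sketched like this (parenting choices are our own; only the wiring matters):

```lua
-- Everything the camera's listener hears passes through a reverb before output.
local listener = Instance.new("AudioListener")
listener.Parent = workspace.CurrentCamera

local reverb = Instance.new("AudioReverb")
reverb.Parent = listener

local output = Instance.new("AudioDeviceOutput")
output.Parent = listener

local toReverb = Instance.new("Wire")
toReverb.SourceInstance = listener -- listener feeds the effect...
toReverb.TargetInstance = reverb
toReverb.Parent = reverb

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = reverb -- ...and the effect feeds the output
toOutput.TargetInstance = output
toOutput.Parent = output
```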
In-World Microphones
In the walkthrough, we used a single AudioListener attached to a part – but you can create more AudioListeners attached to Parts, Models, Attachments, or Cameras. In fact, you can take the signal heard by one AudioListener and rewire it to an AudioEmitter to re-broadcast it, acting as a virtual microphone.

Beware, however, that feedback cycles can occur, and – just like in real life – it may not sound pleasant! To manage this, each AudioEmitter and AudioListener has an AudioInteractionGroup. AudioListeners will only hear AudioEmitters that are in the same interaction group.
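A virtual microphone along these lines might be sketched as follows; the part names and group names are our own, and the interaction groups are what keep the loudspeaker out of the microphone’s earshot so no feedback loop forms:

```lua
-- A "microphone" listener re-broadcast through a distant "loudspeaker" emitter.
local micPart = workspace:WaitForChild("MicrophonePart")
local speakerPart = workspace:WaitForChild("LoudspeakerPart")

local mic = Instance.new("AudioListener")
mic.AudioInteractionGroup = "Stage" -- hears only "Stage" emitters
mic.Parent = micPart

local loudspeaker = Instance.new("AudioEmitter")
loudspeaker.AudioInteractionGroup = "Audience" -- inaudible to the mic
loudspeaker.Parent = speakerPart

local wire = Instance.new("Wire")
wire.SourceInstance = mic -- whatever the mic hears...
wire.TargetInstance = loudspeaker -- ...is re-broadcast by the loudspeaker
wire.Parent = loudspeaker
```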
Effects and Analysis
This API includes Wirable versions of the existing SoundEffects:

- ReverbSoundEffect → AudioReverb
- CompressorSoundEffect → AudioCompressor
- EqualizerSoundEffect → AudioEqualizer
- ChorusSoundEffect → AudioChorus
- PitchShiftSoundEffect → AudioPitchShifter
The existing SoundEffects get parented to Sounds and SoundGroups, which only allows them to be applied in sequence. Wires offer more flexibility here, permitting effects to be chained sequentially, or applied in parallel.
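For instance, parallel wiring might be sketched as below: one source branches into two effects whose outputs merge at a fader. We are assuming that a node mixes multiple incoming Wires; the helper function is our own:

```lua
-- One source feeding two effects in parallel, merged into a single fader.
local player = Instance.new("AudioPlayer")
local echo = Instance.new("AudioEcho")
local chorus = Instance.new("AudioChorus")
local mix = Instance.new("AudioFader")

local function connect(source: Instance, target: Instance)
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = target
end

connect(player, echo)   -- branch 1
connect(player, chorus) -- branch 2
connect(echo, mix)      -- both branches...
connect(chorus, mix)    -- ...merge at the fader
```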
A few of these instances also have considerable upgrades: AudioEqualizer has movable crossover frequencies, and AudioReverb has significantly more properties to tune. And, we’ve added new ones:

- AudioFader: a simple volume fader
- AudioAnalyzer: can be used for analyzing volume and frequency contents
Known Issue
We are aware of an issue where AudioDeviceInput.AccessType does not replicate correctly. This will be fixed within the next couple of weeks, but for the time being we recommend implementing team-chat permissions by connecting and disconnecting Wires.
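One way to sketch that workaround: gate who hears a voice by toggling its Wire. Here we assume that clearing a Wire’s TargetInstance disconnects it and restoring the target reconnects it:

```lua
-- Toggle whether a voice wire delivers audio to its emitter.
local function setWireEnabled(wire: Wire, emitter: AudioEmitter, enabled: boolean)
	wire.TargetInstance = if enabled then emitter else nil
end
```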
Please let us know if you encounter any additional issues in the comments, so we can address them!
What’s Next
While the Audio API and voice support are fully live and work in published experiences, this API is still in beta. We plan to extend it, and your input is crucial. If there are functionalities you need, or other producer and consumer nodes you feel are missing — we’re all ears. We value your feedback as we progress out of beta.
I want to extend a big thanks to @Doctor_Sonar, @therealmotorbikematt, @GTypeStar, @wellreadundead, and countless others who helped design, drive, and implement this system. Our team is very excited to see this roadmap item start to come to life! Check out what else we have planned for the year on our Creator Roadmap.