New Audio API [Beta]: Elevate Sound and Voice in Your Experiences

[Update] February 28, 2024



Hello Creators!

We’re excited to unveil the beta release of our new Audio API, which includes highly anticipated controls over sound and voice that @Doctor_Sonar previewed at RDC!

Have you ever wanted to emit a single sound from various 3D locations simultaneously? What about altering players’ voices? How about implementing team chat or creating a functional walkie-talkie? Our new API finally empowers you with the control over sound and voice chat that you’ve been asking for — and much more!

To make this level of control possible, we’re introducing many new instances. But conceptually, these instances fall into three categories: some that produce audio streams, some that consume audio streams, and some — such as effects — that do both. Wires connect these up to form a processing graph.

The new API’s modular design marks a departure from the existing Sound, SoundGroup, and SoundEffect instances. While the older APIs will remain active, the new API enables a suite of features that were previously impossible, like treating voice the same as any other audio source. This beta introduces new ways of thinking about sound design in Roblox. Some features may be familiar, but others, like the spatial simulation of voice audio and how it’s wired, may require some experimentation and getting used to.

Please read this post thoroughly before making changes to your existing experiences. Your feedback during this beta phase is critical as we continue to improve the API and make sound design even more powerful for developers.

To get started, follow the instructions below to learn how this API works and what has changed, then download this Audio API tutorial placefile to experiment with how your ideas may sound in practice. We are eager to see what you’ll build and are certain your work will be much cooler than any of the examples we’ve put together!

How to Use the Audio API

Getting Started

Before we begin, you must enable the “New Audio API” Beta Feature in Studio; this makes the new instances browsable and insertable.

The existing Sound instance functions as a file-player; however, if parented to a Part or Attachment, it will also behave as a 3D audio emitter that can be heard by your listener. Previously, to play the same sound from multiple 3D locations simultaneously, your only option was to duplicate the playback. Today, Wires make it possible to work with these three logically distinct components separately.

Let’s walk through an example of setting up a Public Address system.

Step 1: Create Some Emitters

Create some Parts or Models to act as speakers. Under each one, create an AudioEmitter instance. For organization, we’ve opted to put all of these in a folder.

[Image: Explorer showing an AudioEmitter under each speaker Part, grouped in a folder]

AudioEmitters are points in 3D space that broadcast sound into the world.

Step 2: Create Some Wires

Create one Wire per speaker, and set their TargetInstance property to the corresponding AudioEmitter. Wires can be anywhere in the DataModel, but we’ll add them as children of the AudioEmitters for organization.

[Image: Explorer showing a Wire under each AudioEmitter, with its TargetInstance set to that emitter]

Step 3: Create an AudioPlayer

Create an AudioPlayer instance, and set the SourceInstance property of each Wire to it.

AudioPlayers can be anywhere in the DataModel, but for organization, we’ll add it to the same folder as our Speakers.

[Image: Explorer showing the AudioPlayer in the same folder as the speakers]

AudioPlayers can load and play audio assets. Assign the AssetId property of the AudioPlayer to a suitable audio asset, and notice that its TimeLength and IsReady properties get filled in.
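If you prefer to drive this from a script, a minimal sketch along these lines (the asset ID below is only a placeholder) waits for the asset to load and prints its length:

local audioPlayer = script.Parent -- assumes this Script is a child of the AudioPlayer

audioPlayer.AssetId = "rbxassetid://0000000000" -- placeholder; use a real audio asset ID

-- IsReady flips to true once the asset has loaded
if not audioPlayer.IsReady then
	audioPlayer:GetPropertyChangedSignal("IsReady"):Wait()
end
print("Loaded asset, length:", audioPlayer.TimeLength, "seconds")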

Step 4: Create a Listener

Create an AudioListener; we’ll parent it to another part. Then create an AudioDeviceOutput – this can live anywhere, but we’ll parent it to the AudioListener.

[Image: Explorer showing an AudioListener under a Part, with an AudioDeviceOutput parented to it]

Finally, create a Wire that connects the AudioListener to the AudioDeviceOutput, by setting its SourceInstance property to the listener, and TargetInstance to the device-output. AudioListeners pick up their surroundings and record a signal that can be wired to other nodes. To render what the listener heard, we wired it directly to an AudioDeviceOutput.

[Image: the Wire connecting the AudioListener to the AudioDeviceOutput]

Step 5: Try it out!

Add a script along the lines of:

local audioPlayer = script.Parent
audioPlayer.Looping = true
audioPlayer:Play()

as a child of the AudioPlayer, then press Play.

You should hear the same asset played from each speaker in sync – if you move the part hosting the AudioListener, it will be spatialized differently according to the part’s CFrame.
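If you’d rather build this same graph from a script instead of the Explorer, a rough sketch could look like the following; the Speakers folder, ListenerPart, and asset ID are placeholders you would swap for your own instances:

local SoundService = game:GetService("SoundService")

-- One shared AudioPlayer for the whole PA system
local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0000000000" -- placeholder asset ID
audioPlayer.Looping = true
audioPlayer.Parent = SoundService

-- One emitter per speaker Part, each wired to the shared player
for _, speaker in workspace.Speakers:GetChildren() do
	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = speaker

	local wire = Instance.new("Wire")
	wire.SourceInstance = audioPlayer
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

-- A listener on a part, wired straight to a device output so we can hear it
local listener = Instance.new("AudioListener")
listener.Parent = workspace.ListenerPart -- placeholder part

local output = Instance.new("AudioDeviceOutput")
output.Parent = listener

local outWire = Instance.new("Wire")
outWire.SourceInstance = listener
outWire.TargetInstance = output
outWire.Parent = output

audioPlayer:Play()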

New Instances

In the walkthrough above, we demonstrated using AudioPlayer, AudioEmitter, AudioListener, and AudioDeviceOutput. These are just four of the instances that can be connected with Wires. The full list includes 15 instances:

[Images: tables listing the 15 instances that can be connected with Wires]

Example Use Cases

Working with Voice Input

To control voice with the beta Audio API, you’ll need to specifically enable it by navigating to Model > Advanced > Service, and inserting the VoiceChatService.

After doing so, you’ll notice two properties on VoiceChatService: UseAudioApi and EnableDefaultVoice. UseAudioApi has three possible values: Automatic, Disabled, and Enabled. In this phase of the beta, Automatic behaves the same as Disabled; we do this to prevent unintentional changes to your existing experiences.

Setting UseAudioApi to Enabled creates new opportunities for routing voices by allowing the use of AudioDeviceInputs. If EnableDefaultVoice is enabled, VoiceChatService creates an AudioDeviceInput parented to each player, an AudioEmitter parented to each character model, and wires them together. Additionally, an AudioListener is parented to Workspace.CurrentCamera.

AudioDeviceInputs have a Player property, and their Muted property can be toggled from scripts. See the example below:

Sample code for Muting a Player
local function setMuted(player: Player, shouldBeMuted: boolean)
    -- The default AudioDeviceInput is parented to the Player
    local input: AudioDeviceInput? = player:FindFirstChild("AudioDeviceInput")
    if input then
        input.Muted = shouldBeMuted
    end
end

You can rewire them, adjust their properties, add effects, and more! If you don’t want any default voice behavior to be provided for you, the VoiceChatService.EnableDefaultVoice property can be turned off. As long as UseAudioApi is still set to Enabled, the behavior is entirely up to you.
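As a rough sketch of that kind of rewiring, the server Script below splices an AudioPitchShifter into each character’s default voice chain. It assumes the default wiring described above (an AudioEmitter on the character with a child Wire coming from the player’s AudioDeviceInput), and the Pitch value is just an illustrative guess:

local Players = game:GetService("Players")

local function addVoiceEffect(character: Model)
	local emitter = character:WaitForChild("AudioEmitter")
	local wire = emitter:WaitForChild("Wire")

	local shifter = Instance.new("AudioPitchShifter")
	shifter.Pitch = 1.5 -- example value; raises the voice
	shifter.Parent = emitter

	-- Re-route: AudioDeviceInput -> AudioPitchShifter -> AudioEmitter
	local voiceSource = wire.SourceInstance
	wire.SourceInstance = shifter

	local inWire = Instance.new("Wire")
	inWire.SourceInstance = voiceSource
	inWire.TargetInstance = shifter
	inWire.Parent = shifter
end

Players.PlayerAdded:Connect(function(player)
	player.CharacterAdded:Connect(addVoiceEffect)
end)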

In the fourth example in our Audio API tutorial, we used AudioDeviceInputs to implement chat via handheld radios:

Please note: In the new Audio API, AudioEmitters mimic real-world distance attenuation, meaning the volume fades as an AudioListener moves away from each emitter but never decays completely to zero. Previously, player voices were completely inaudible from more than 80 studs away. This means that if you opt in to the new APIs and make no other changes, your experience will sound different: voices will be audible from farther away. We strongly suggest experimenting and testing with multiple voice users at different distances to achieve your desired results. For greater control over audio rolloff, we recommend using AudioEqualizers; you can find sample code in our API docs to get you started.

If you are interested in recapturing exactly the same behavior as before, you can use the sample code below:

Sample code for recreating the older voice rolloff
local function wireUp(source: Instance, target: Instance) : Wire
	-- Create a Wire connecting source to target, parented under the target
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = target
	return wire
end

local function split(wire: Wire, effect: Instance)
	-- Splice an effect between an existing wire's source and its target
	local target = wire.TargetInstance
	wire.TargetInstance = effect
	wireUp(effect, target)
end

local function onWireConnected(wire: Wire)
	-- Once a voice wire becomes live, splice an AudioFader into it
	local fader = Instance.new("AudioFader")
	fader.Parent = wire
	split(wire, fader)
end

local function onWireAdded(wire: Wire)
	if wire.Connected then
		onWireConnected(wire)
	end
	local connection: RBXScriptConnection = nil
	connection = wire:GetPropertyChangedSignal("Connected"):Connect(function()
		onWireConnected(wire)
		connection:Disconnect()
	end)
end

local function onEmitterAdded(emitter: AudioEmitter)
	local wire = emitter:FindFirstChild("Wire")
	if wire then
		onWireAdded(wire)
	end
	emitter.ChildAdded:Connect(function(child)
		if child:IsA("Wire") then
			onWireAdded(child)
		end
	end)
end

local function onCharacterAdded(character: Model)
	local emitter = character:FindFirstChild("AudioEmitter")
	if emitter then
		onEmitterAdded(emitter)
	end
	character.ChildAdded:Connect(function(child: Instance)
		if child:IsA("AudioEmitter") then
			onEmitterAdded(child)
		end
	end)
end

local function onPlayerAdded(player: Player)
	local character = player.Character
	if character then
		onCharacterAdded(character)
	end
	player.CharacterAdded:Connect(onCharacterAdded)
end

local players = game:GetService("Players")
for _, player in players:GetPlayers() do
	onPlayerAdded(player)
end
players.PlayerAdded:Connect(onPlayerAdded)

local function oldRollOff(from: Vector3, to: Vector3) : number
	-- Older voice rolloff: full volume inside minDistance, a quadratic fade
	-- out to maxDistance (80 studs by default), and silence beyond that
	local distance = (to - from).Magnitude
	
	local minDistance = script:GetAttribute("RollOffMinDistance") or 7
	local maxDistance = script:GetAttribute("RollOffMaxDistance") or 80
	
	if maxDistance <= minDistance or distance < minDistance then
		return 1
	elseif distance > maxDistance then
		return 0
	end
	local linearGain = 1 - (distance - minDistance) / (maxDistance - minDistance)
	return linearGain * linearGain
end

local function newRollOff(from: Vector3, to: Vector3) : number
	-- Approximates the new emitter attenuation: full volume up close, then an
	-- inverse falloff with distance that never decays completely to zero
	local distance = (to - from).Magnitude
	local gain = 4 / math.max(1, distance) -- max() avoids division by zero up close
	return math.clamp(gain, 0, 1)
end

local function updateCharacter(character: Model)
	local primaryPart = character.PrimaryPart
	if not primaryPart then
		return
	end
	
	local emitter = character:FindFirstChild("AudioEmitter")
	if not emitter then
		return
	end
	
	local wire = emitter:FindFirstChild("Wire")
	if not wire then
		return
	end
	
	local fader = wire:FindFirstChild("AudioFader")
	if not fader then
		return
	end
	
	local emitterPosition = primaryPart.Position
	local listenerPosition = workspace.CurrentCamera.CFrame.Position
	
	-- The fader divides out the engine's new attenuation and applies the old
	-- curve instead, so the net rolloff matches the previous behavior
	local newAttenuation = newRollOff(emitterPosition, listenerPosition)
	local oldAttenuation = oldRollOff(emitterPosition, listenerPosition)
	fader.Volume = oldAttenuation / newAttenuation
end

local function updatePlayer(player: Player)
	local character = player.Character
	if character then
		updateCharacter(character)
	end
end

local function update()
	for _, player in players:GetPlayers() do
		updatePlayer(player)
	end
end

while true do
	update()
	task.wait()
end

UseAudioApi is currently opt-in, allowing time for you to try the changes out. We will strive to make it enabled by default in the near future, but your feedback and adoption are critical to making those changes. Today, if you do not enable VoiceChatService.UseAudioApi, Voice Chat will retain the older implementation, and any AudioDeviceInput created will not produce any sound.

Multiple Listeners

In the past, our system was designed with the assumption that SoundService:SetListener would be the only audio listener in the world. However, with the introduction of AudioListeners in the new API, you can now spawn multiple listeners. Each of these listeners perceives the world from a unique CFrame, unlocking features like split-screen, portals, and more!
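For instance, a sketch of two listeners feeding one output might look like this (ViewpointA and ViewpointB are placeholder parts, and we assume multiple Wires targeting the same output are mixed together):

local SoundService = game:GetService("SoundService")

local output = Instance.new("AudioDeviceOutput")
output.Parent = SoundService

local function addListener(host: Instance)
	local listener = Instance.new("AudioListener")
	listener.Parent = host

	local wire = Instance.new("Wire")
	wire.SourceInstance = listener
	wire.TargetInstance = output
	wire.Parent = listener
end

-- e.g. one listener per split-screen viewpoint
addListener(workspace.ViewpointA)
addListener(workspace.ViewpointB)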

Post-Listener Effects

You can add effects after an AudioListener hears things! Wiring such as AudioListener > Effects > AudioDeviceOutput can be used to implement underwater filtering, or room-reverb on all 3D audio at once. Previously, this was only possible by carefully categorizing Sounds into SoundGroups, and it was difficult to make a catch-all solution.
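As a sketch, a LocalScript could run everything the camera’s listener hears through an AudioEqualizer to muffle it underwater; this assumes the equalizer’s LowGain/MidGain/HighGain properties are gains in decibels:

local camera = workspace.CurrentCamera

local listener = Instance.new("AudioListener")
listener.Parent = camera

local eq = Instance.new("AudioEqualizer")
eq.MidGain = -12 -- pull the mids down
eq.HighGain = -40 -- heavily attenuate the highs for a muffled sound
eq.Parent = listener

local output = Instance.new("AudioDeviceOutput")
output.Parent = listener

-- listener -> equalizer -> device output
local toEq = Instance.new("Wire")
toEq.SourceInstance = listener
toEq.TargetInstance = eq
toEq.Parent = eq

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = eq
toOutput.TargetInstance = output
toOutput.Parent = output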

In-World Microphones

In the walkthrough, we used a single AudioListener attached to a part – but you can create more AudioListeners attached to Parts, Models, Attachments, or Cameras. In fact, you can take the signal heard by one AudioListener, and rewire it to an AudioEmitter to re-broadcast it, acting as a virtual microphone.

Beware, however, that feedback cycles can occur, and – just like in real life – it may not sound pleasant! To manage this, each AudioEmitter and AudioListener has an AudioInteractionGroup property. AudioListeners will only hear AudioEmitters that are in the same interaction group.
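Here’s a sketch of such a microphone: a listener in a “Stage” interaction group re-broadcast through an emitter in the default group, so the microphone can never pick up its own speaker. The part names and the “Stage” group are placeholders; any emitters you want the mic to hear (for example a performer’s voice emitter) would have their AudioInteractionGroup set to “Stage” as well.

local micPart = workspace.Microphone -- placeholder: where sound is picked up
local speakerPart = workspace.StageSpeaker -- placeholder: where it is re-broadcast

-- The microphone only hears emitters in the "Stage" group...
local micListener = Instance.new("AudioListener")
micListener.AudioInteractionGroup = "Stage"
micListener.Parent = micPart

-- ...and re-broadcasts into the default group, so it cannot hear itself
local speakerEmitter = Instance.new("AudioEmitter")
speakerEmitter.Parent = speakerPart

local wire = Instance.new("Wire")
wire.SourceInstance = micListener
wire.TargetInstance = speakerEmitter
wire.Parent = speakerEmitter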

Effects and Analysis

This API includes Wirable versions of the existing SoundEffects:

  • ReverbSoundEffect → AudioReverb

  • CompressorSoundEffect → AudioCompressor

  • EqualizerSoundEffect → AudioEqualizer

  • ChorusSoundEffect → AudioChorus

  • PitchShiftSoundEffect → AudioPitchShifter

The existing SoundEffects get parented to Sounds & SoundGroups, which only allows them to be applied in sequence. Wires offer more flexibility here, permitting effects to be chained sequentially, or applied in parallel.
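As a small sketch of parallel routing, the same AudioPlayer below feeds an emitter both directly (dry) and through an AudioReverb (wet), with the two paths mixing at the emitter; the asset ID and SpeakerPart are placeholders:

local SoundService = game:GetService("SoundService")

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0000000000" -- placeholder asset ID
player.Parent = SoundService

local emitter = Instance.new("AudioEmitter")
emitter.Parent = workspace.SpeakerPart -- placeholder part

local reverb = Instance.new("AudioReverb")
reverb.Parent = player

-- Dry path: player -> emitter
local dry = Instance.new("Wire")
dry.SourceInstance = player
dry.TargetInstance = emitter
dry.Parent = emitter

-- Wet path: player -> reverb -> emitter
local toReverb = Instance.new("Wire")
toReverb.SourceInstance = player
toReverb.TargetInstance = reverb
toReverb.Parent = reverb

local fromReverb = Instance.new("Wire")
fromReverb.SourceInstance = reverb
fromReverb.TargetInstance = emitter
fromReverb.Parent = emitter

player.Looping = true
player:Play()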

A few of these instances also have considerable upgrades: AudioEqualizer has movable crossover frequencies, and AudioReverb has significantly more properties to tune. And, we’ve added new ones:

  • AudioFader: a simple volume fader
  • AudioAnalyzer: can be used to analyze volume and frequency content (see the sketch below)
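For example, here is a quick sketch of metering a source with an AudioAnalyzer; it assumes the analyzer exposes PeakLevel and RmsLevel properties and that this Script sits under an AudioPlayer:

local audioPlayer = script.Parent

local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = audioPlayer

local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer
wire.TargetInstance = analyzer
wire.Parent = analyzer

audioPlayer:Play()

while true do
	print(("peak %.2f, rms %.2f"):format(analyzer.PeakLevel, analyzer.RmsLevel))
	task.wait(0.5)
end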

Known Issue

We are aware of an issue where AudioDeviceInput.AccessType does not replicate correctly. This will be fixed within the next couple of weeks, but in the meantime we recommend implementing team-chat permissions by connecting and disconnecting Wires, as in the sketch below.
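One possible sketch of that workaround is a LocalScript that locally disconnects the default voice Wire of anyone not on the local player’s team; local changes only affect what this client hears. It assumes EnableDefaultVoice is on, so each character has an AudioEmitter with a child Wire from that player’s AudioDeviceInput:

local Players = game:GetService("Players")
local localPlayer = Players.LocalPlayer

local function setVoiceWired(character: Model, wired: boolean)
	local emitter = character:FindFirstChild("AudioEmitter")
	local wire = emitter and emitter:FindFirstChild("Wire")
	if not wire then
		return
	end
	-- Clearing TargetInstance disconnects the wire; restoring it reconnects
	wire.TargetInstance = wired and emitter or nil
end

local function refresh()
	for _, player in Players:GetPlayers() do
		if player ~= localPlayer and player.Character then
			setVoiceWired(player.Character, player.Team == localPlayer.Team)
		end
	end
end

refresh()
Players.PlayerAdded:Connect(refresh)
-- In a real experience you would also re-run refresh() on team and character changes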

Please let us know if you encounter any additional issues in the comments, so we can address them!

What’s Next

While the Audio API and voice support are fully live and work in published experiences, this API is still in beta. We plan to extend it, and your input is crucial. If there are necessary functionalities, or other producer and consumer nodes you feel are missing — we’re all ears. We value your feedback as we progress out of beta.

I want to extend a big thanks to @Doctor_Sonar, @therealmotorbikematt, @GTypeStar, @wellreadundead, and countless others who helped design, drive, and implement this system. Our team is very excited to see this roadmap item start to come to life! Check out what else we have planned for the year on our Creator Roadmap.

607 Likes


Awesome update, as usual!

Does this come with any changes to the 50 maximum players for Spatial Voice?

78 Likes

Very exciting! I can’t wait for experiences to use it!

44 Likes

This is great! Sooner or later we can have fully working phones, walkie-talkies, or even voice meters within Roblox. Can’t wait for its official release!

35 Likes

Seems very fun to mess around with, can’t wait to give it a try!

31 Likes

Been trying out the new audio API for the past few months and it’s really awesome! I’ve messed around with it and made a social experience that allows people to use voice chat filters:

I’ve also used it to make a plugin to help analyze audio streams:

96 Likes

allow us to create sine waves at any frequency we want with a function pweasseeee :pleading_face::pleading_face::pleading_face::pleading_face::pleading_face:

62 Likes

LET’S GO!! That’s what we’ve been waiting for!

31 Likes

Not sure if it was an intended change as it was not announced, but I’ve seen 700 player servers with voice.

30 Likes

What a welcome change. Does this mean non-humanoid characters set via player.Character finally support Spatial Voice and TextChatService?

30 Likes

The rake can finally make voice chat work with their radio; can’t wait to see that happen eventually (rvvz doesn’t care about the rake)

26 Likes

This new API is great! Glad to see improvements to audio after many years.

One thing I’d love to see is the ability to disable sound auto loading. Currently I store sounds as Configuration instances with attributes that match the Sound instance, then during runtime I use these configurations to create and cache sound instances. Then once I know that these sounds are no longer needed I remove them from my cache and let the engine unload them for me.

Sounds are a huge memory hog in games that want to have a diverse range of audio and having official support for this would be greatly appreciated.

26 Likes

Hey Tom_atoes; the new AudioPlayer instance has an AutoLoad property that you can disable to accomplish this!

29 Likes

This is beyond revolutionary! These additions open so many new routes for creative opportunity with sound in our games as well as pave the way towards far greater levels of realism!

19 Likes

They can actually do that currently using SoundService, but you can’t listen in on multiple people at once! This should solve that, though!

20 Likes

That’s great to hear. I’m assuming AudioPlayers with AutoLoad set to false will automatically unload sounds after they haven’t been played for a while? I don’t see a method that allows manual unloading (even if it is automatic, a method would be greatly appreciated).

Also a method for loading the sound without having to call :Play() would be very nice. My current usage for this is for maps in my game that have environmental sounds (which works well with sounds loading on :Play()), but my second usage is loading weapon sounds when the player picks up or is given a weapon. Having a gunshot sound only load when the player fires their first bullet will feel very unnatural.

26 Likes

Since this API is live, what does this mean exactly? Is it in beta in that it’s incomplete and will only be extended, or should we anticipate breaking changes based on feedback going forward?

22 Likes

Does AudioAnalyzer read voice chat, or just audio in general? It would be really cool if you could capture a voice and reuse it, as a way to pull off a so-called mimic.

  • Do the new audio instances run heavier, and is there a limit to how many audios can be played?
  • Do the new audio instances suffer less from stuttering when there’s a lot of audio processing going on (e.g. 30 sound groups with 3 sound effects all playing)? In my game, each audio individually has reverb, echo, and an equalizer applied so it sounds immersive based on where it comes from, but once more than 30 audios are playing it starts to audibly stutter; I’ve had to disable sound effects when they aren’t needed as an optimization.
18 Likes

Actually, in the first video they show the two of them talking; then they both pick up walkie-talkies, one gets far away while the other moves around at a distance and talks through the walkie-talkie, and you can hear them through it.

15 Likes