Audio Graphing: Managing the new audio API with a graph

Introduction

IMPORTANT NOTICE
The audio API is fully out! Since the API has received updates, if you were using AudioGraph before this release, please see the new version and update it ASAP!

Hello Hello! Roblox has recently released their new and shiny Audio API. This API allows you to do truly incredible things and gives you more control than anything we’ve ever had before! Unfortunately, with that comes complexity, and a concept that may not be all that familiar to Roblox developers. This post aims to demonstrate how you can create high-level audio graphs with ease.
Here’s an example of something we will create today:

This resource is for scripters of every level, new and experienced. By the end of this post you’ll be able to use my module to build and understand complex, self-managing systems like this in seconds.

Understanding graphs

A graph is defined by Wikipedia as:

an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics.

:blush: now you know! good luck making your own!

Kidding, but what are these things anyway?
Well, a graph is abstract, meaning it exists as an idea rather than as a concrete object. It’s basically a representation in our heads of a given system, and in this case we’re using it to represent the audio API.

Graphs are made up of two main things: nodes and edges. A node is connected to other nodes by edges. This creates a web of different nodes and edges that are interconnected.

Here’s a real world example to think of:
Let’s say you have a Roblox account. That could be represented as a User node. When you go into a game, you might meet a friend. When you friend this user, you are creating a Friendship, which we could consider to be an edge. You and the friend are now connected. We can also have Group nodes; these would be our Roblox groups. When you join a group, you are connecting yourself to that group with an edge.
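If it helps to see that idea in code, here’s a throwaway sketch of the analogy (the table shapes and names are made up purely for illustration; they aren’t real Roblox APIs):

-- Nodes are just values; an edge simply pairs two nodes together
local me     = { type = "User",  name = "Player1" }
local friend = { type = "User",  name = "Player2" }
local group  = { type = "Group", name = "Cool Group" }

local edges = {
	{ me, friend }, -- the Friendship edge
	{ me, group },  -- the group-membership edge
}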

Applying graphs to Roblox’s audio api

Let’s define a few things before we continue:

  • Audio Instance: Any Roblox instance that handles an audio stream, excluding wires.
  • Producer: An audio instance that outputs, or produces, a stream.
  • Modifier: An audio instance that takes a given input stream and outputs a modified version of that stream.
  • Consumer: An audio instance that only takes a given input stream.

Specific instances:

Producers: AudioPlayer, AudioDeviceInput, AudioListener

Modifiers: AudioCompressor, AudioFader, AudioDistortion, AudioEcho, AudioEqualizer, AudioFlanger, AudioPitchShifter, AudioReverb, AudioChorus, AudioFilter

Consumers: AudioEmitter, AudioAnalyzer, AudioDeviceOutput

With this new API, we are actually creating and managing audio streams ourselves with audio instances and wires. The nodes in this case are our audio instances, and the edges are wires. Some audio instances have inputs, some have outputs, and some have both (as well as a possible sidechain that we’ll take a look at in a second).

In this case, our node types are Producer, Modifier, and Consumer. Since consumers don’t output anything, they will always be at the end of a chain. The same logic applies to producers: they only output a stream, so they will only ever be at the start of a chain. Modifiers need both an input and an output, so they can only ever sit in the middle of a chain. Wires make up our edges.
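To make this concrete, here’s a rough sketch of building one of these chains by hand with raw instances and no module involved; the specific instances and parents are arbitrary choices for illustration:

-- AudioPlayer (producer) -> AudioFader (modifier) -> AudioDeviceOutput (consumer)
local producer = Instance.new("AudioPlayer")
producer.Parent = workspace

local modifier = Instance.new("AudioFader")
modifier.Parent = workspace

local consumer = Instance.new("AudioDeviceOutput")
consumer.Parent = workspace

-- Each Wire is an edge: it carries the stream from SourceInstance into TargetInstance
local wireA = Instance.new("Wire")
wireA.SourceInstance = producer
wireA.TargetInstance = modifier
wireA.Parent = modifier

local wireB = Instance.new("Wire")
wireB.SourceInstance = modifier
wireB.TargetInstance = consumer
wireB.Parent = consumer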

Here’s an example, now that we understand graphs.

The hard part

Since these instances act like a chain, they have to be connected at all times. If we deleted the middle modifier we would have to rewire the producer to the consumer. If we want to add a new modifier we have to unwire the first modifier from the consumer, wire it to our new modifier, and then wire that modifier back to the consumer.
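Continuing the raw-wire sketch from earlier, inserting just one new modifier by hand already means touching the existing wiring (the AudioEcho is an arbitrary example):

-- Insert an AudioEcho between the fader and the output
local echo = Instance.new("AudioEcho")
echo.Parent = workspace

-- Repoint the old fader -> output wire at the new modifier
wireB.TargetInstance = echo

-- Then add a fresh wire from the echo back to the output
local wireC = Instance.new("Wire")
wireC.SourceInstance = echo
wireC.TargetInstance = consumer
wireC.Parent = consumer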

These systems can get messy very fast. This is where my module comes into play. I was recently tasked with making an audio system for an upcoming update on a game I work on. To simplify the process of actually managing these graphs, I created a module that does pretty much everything for you.

Understanding the module

Graph flow

AudioGraphs treat the producer as the start of the graph: every graph is created from a single producer instance, and modifiers and consumers are attached downstream from it.
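In practice that means every graph is built from a producer instance, as in this minimal sketch (it mirrors the examples later in this post):

local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Audio = require(ReplicatedStorage:WaitForChild("Audio"))

-- The producer passed to NewGraph becomes the root of the graph
local MusicGraph = Audio.NewGraph(Instance.new("AudioPlayer"))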

Wire management

This module handles all wires for you; you don’t need to worry about actually managing the graph itself.

Volume

Every graph has a built-in :SetVolume() function that lets you set the master volume of the entire graph. The modifier that powers this cannot be removed.
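For example, assuming a graph like the MusicGraph sketched above (the 0–1 range used here is just an illustration; check the API reference for the exact scale):

-- Set the master volume of the entire graph to half
MusicGraph:SetVolume(0.5)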

Branching

Sometimes, we will want to “branch off” of a modifier, and create a new chain on the graph that we can apply different effects to.

A branch treats its initial modifier as its own producer. Something important to remember is that chains can collapse: if you remove that initial modifier from the graph, the entire chain collapses, and you can decide whether it destroys the audio instances or just removes the wires.

Here’s an example:

--localscript, starterplayerscripts
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Audio = require(ReplicatedStorage:WaitForChild("Audio"))

local MicInput = Audio.NewGraph(Instance.new("AudioDeviceInput"))
MicInput.ProducerInstance.Player = game.Players.LocalPlayer
	
local Emitter = Instance.new("AudioEmitter", game.Workspace:WaitForChild("EmitterPart"))
MicInput:ConnectConsumer(Emitter)
local MicReverb = MicInput:CreateModifier("AudioReverb")

local ChorusEmitter = Instance.new("AudioEmitter", game.Workspace:WaitForChild("VocalsEmitter"))
local ChorusBranch = MicInput:Branch(MicReverb)

ChorusBranch:CreateModifier("AudioChorus")
ChorusBranch:ConnectConsumer(ChorusEmitter)

Ducking

Ducking is a form of compression where one audio “ducks” around another. You’re basically giving one audio priority over another, so that it can be heard clearly over the other. Ducking is very common in voiceovers with background music, allowing for clarity in the voice without changing the volume of either sound. This is done with an AudioCompressor. To actually connect one sound to the other, the AudioCompressor accepts a third pin: the sidechain. A sidechain is an input the compressor listens to in order to decide when to duck the main audio.
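For reference, here’s roughly what that looks like with raw instances rather than the module; I’m assuming the sidechain pin is targeted by setting the Wire’s TargetName, so treat this as a sketch:

-- Duck background music underneath a voice source using an AudioCompressor
local backgroundMusic = Instance.new("AudioPlayer")
backgroundMusic.Parent = workspace

local voice = Instance.new("AudioDeviceInput")
voice.Parent = workspace

local compressor = Instance.new("AudioCompressor")
compressor.Parent = workspace

-- Main input: the music that gets turned down
local musicWire = Instance.new("Wire")
musicWire.SourceInstance = backgroundMusic
musicWire.TargetInstance = compressor
musicWire.Parent = compressor

-- Sidechain input: the voice the compressor listens to when deciding to duck
local sidechainWire = Instance.new("Wire")
sidechainWire.SourceInstance = voice
sidechainWire.TargetInstance = compressor
sidechainWire.TargetName = "Sidechain" -- assumed name of the compressor's sidechain pin
sidechainWire.Parent = compressor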

You can duck audios from any audio instance that outputs a stream.
Example:

--localscript, starterplayerscripts
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Audio = require(ReplicatedStorage:WaitForChild("Audio"))

local MicInput = Audio.NewGraph(Instance.new("AudioDeviceInput"))
MicInput.ProducerInstance.Player = game.Players.LocalPlayer
	
local Emitter = Instance.new("AudioEmitter", game.Workspace:WaitForChild("EmitterPart"))
MicInput:ConnectConsumer(Emitter)

local BackgroundPlayer = Instance.new("AudioPlayer")
BackgroundPlayer.AssetId = "rbxassetid://9046863253"
BackgroundPlayer.Looping = true
BackgroundPlayer:Play()

local BackgroundGraph = Audio.NewGraph(BackgroundPlayer)
BackgroundGraph:ConnectConsumer(Emitter)
BackgroundGraph:Duck(MicInput.ProducerInstance)


And here’s what the graph we created looks like:

Conclusion

I really hope this helped you understand and conceptualize how to manage and use these new audio instances Roblox has created. If you run into any issues with the module, please do let me know! See this post for a full API reference and a download of my module, and read it before using the module so that you fully understand how it works.


How would you go about handling this for multiple audio emitters?

Very awesome work! Thanks a ton for this.


The module automatically manages and supports multiple consumers. You just call :AddConsumer() on a graph and it’ll add that consumer to the end of the graph. You can see an example of this at the beginning of the tutorial.



A new version of this module has just come out; please see:

It addresses pain points and takes full advantage of the newest version of the audio API.
