New Audio API Features: Directional Audio, AudioLimiter and More

Thanks for the explanation! I’ve found AudioAnalyzer works just fine with user-uploaded assets, so I’ll stick with that. Thanks again for explaining!

The new Audio API instances are amazing, and I’ve already used AudioPlayer/AudioDeviceOutput/AudioFader to clean up my older audio setup! I feel I have greater control over the sound in my experience now.

When I get the chance, I want to look at these newer features as well – but I think it would be a lot easier if there were a written guide with examples/templates of which wiring setups to use for different use cases. The API docs are currently solid but a bit barebones, and I had to do a lot of digging on the DevForum to figure out which setup to use.

Any word on volumetric audio for AudioEmitters? It’s really disappointing that it’s not already part of the new audio system.



So, with this at 180 degrees, does it only sound louder if you are behind the AudioEmitter? P.S. the polar curve editor reminds me of a compass.


You can already do this; you just need an HRTF database to look up the attenuated values for frequency x at angle y.
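For illustration, a toy Luau sketch of that lookup idea – the table below is made up for the example, not taken from a real HRTF dataset:

-- Hypothetical per-angle gain factors for three frequency bands
local hrtfTable = {
	[0] = { low = 1.0, mid = 1.0, high = 1.0 }, -- facing the source
	[90] = { low = 0.9, mid = 0.7, high = 0.5 },
	[180] = { low = 0.8, mid = 0.4, high = 0.2 }, -- source directly behind
	[270] = { low = 0.9, mid = 0.7, high = 0.5 },
}

-- Snap to the nearest tabulated angle; a real database would interpolate
local function lookupGains(angleDegrees: number)
	local snapped = (math.round(angleDegrees / 90) * 90) % 360
	return hrtfTable[snapped]
end

print(lookupGains(175).high) --> 0.2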


Please add the ability to pan audio or separate the audio listener into two channels so I can apply effects for the left and right channels respectively.

I mentioned this as a small part of another post but will say it here for visibility:

Are there any plans to add in-world (3D) visualization for audio distance and angle attenuation? This would also be especially useful with the upcoming acoustic simulation stuff.


Music that’s licensed on the catalog isn’t re-encoded to Vorbis – take a look at a file I pulled from the Roblox cache and see how it still contains an FL Studio encoder metadata tag.

I assume Roblox gets their songs straight from the music distributor (DistroKid, Monstercat, etc.) instead of re-encoding and compressing them like they do with user-uploaded audio.

If you want to test with high-quality audio, you can use rbxasset:// inside Studio, with the desired file dropped into Studio’s content folder, like so:
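For example, a minimal sketch assuming a file named mysong.ogg has been dropped into the content folder:

-- Points an AudioPlayer at a local file in Studio's content folder
local player = Instance.new("AudioPlayer")
player.AssetId = "rbxasset://mysong.ogg" -- hypothetical filename
player.Parent = workspace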


Is 8192 samples really the maximum that can be set here? I tried 16000 and more, but the returned table never gives more than 8192 values.

@Dev_OnCake yeah, we put a cap of 8192 values on the method – but we’re open to raising that. The main thing we wanted to prevent was decompressing the entire audio file into RAM at its full resolution; 8192 is an arbitrary “big” capacity :sweat_smile:

I love the Audio API. It opens up the possibility of more immersive experiences and can set a new standard for audio design and the extent to which it can be used.

However, my main issue right now is the ~300 AudioEmitter cap. I have ~400 AudioEmitters in one of my games (since I’m mostly transferring over to the Audio API from traditional Sounds), all of which use different AudioPlayers, which I like to call “channels” in layman’s terms (some channels looping, some ambient noise, etc.), to further immerse the player in the experience.

While I don’t believe there is any point in the gameplay where all 400 AudioEmitters are playing or within reach of one another, users have already expressed concern that they cannot hear audio at all, which in turn can create a very problematic environment for a game that is heavily focused on sound.

I haven’t gotten the chance to properly use the new Audio API much over the last year since its beta was introduced, but I finally decided to fully move over to it in my current project, and I’ve got some feedback I’d like to give!

My biggest enjoyment working with it so far has been the new audio effects, as well as the ability to wire them in any order. It was a little awkward to get used to, but once I understood the idea it proved quite powerful and can give some great-sounding results, particularly for environmental effects such as reverb and echo.

The modularity that wiring things to each other provides is really nice. It’s not often that we get this kind of specific control in Roblox and for the most part it works great, though it isn’t perfect.

AudioFader stuff.

The problem I have is that if I want to have AudioPlayers connected to AudioEmitters that need to use an AudioFader (used as a replacement for SoundGroups), I have to:

1: Create a new AudioListener that only listens on a specific AudioInteractionGroup.
2: Make sure that the AudioEmitter has the same AudioInteractionGroup.
3: Wire the AudioListener to the AudioFader.
4: Wire the AudioFader to an AudioDeviceOutput.
5: Parent the AudioListener to the player’s Camera.
[Image: final wiring for a single AudioFader – a script equivalent is sketched below]
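For reference, a minimal Luau sketch of those five steps – the group name and parenting choices are illustrative:

-- Steps 1 & 2: a listener restricted to one interaction group;
-- any AudioEmitter that should be heard must use the same group
local listener = Instance.new("AudioListener")
listener.AudioInteractionGroup = "Music" -- hypothetical group name

local fader = Instance.new("AudioFader")
local output = Instance.new("AudioDeviceOutput")

-- Step 3: wire the listener into the fader
local listenerToFader = Instance.new("Wire")
listenerToFader.SourceInstance = listener
listenerToFader.TargetInstance = fader
listenerToFader.Parent = fader

-- Step 4: wire the fader into the device output
local faderToOutput = Instance.new("Wire")
faderToOutput.SourceInstance = fader
faderToOutput.TargetInstance = output
faderToOutput.Parent = output

-- Step 5: parent everything under the local player's camera
fader.Parent = listener
output.Parent = listener
listener.Parent = workspace.CurrentCamera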

I need to repeat this process for every single SoundGroup I had. Not only is this tedious, I also need to do it in a script, since as far as I know there’s no way to have a StarterPlayer equivalent for the Camera.

I think it would be best if we could have a relationship closer to how Sounds can just reference a SoundGroup in the old system (or maybe a way to have a “round trip” wire that sends something and then gets the expected signal back before sending it forward?). I may be doing this wrong though, so if anyone knows a better way, let me know! Other than that, it’s been a pretty good switch so far! :+1:

Hey @YuNoGuy123; the workflow you described using AudioInteractionGroups with AudioEmitters & AudioListeners sounds right – that’s what 3d Sounds going into SoundGroups were doing under the hood :+1:

I need to repeat this process for every single SoundGroup I had. Not only is this tedious, I also need to do it in a script, since as far as I know there’s no way to have a StarterPlayer equivalent for the Camera.

Yeah this is unfortunate. Camera does not replicate, so each client creates their own, and any children that the camera had at edit-time get lost :frowning: – we are not super happy about this, but it’s difficult to change; a lot of developers depend on the Camera not replicating in order to implement per-player state.

One trick I’ve used to make this less tedious is to set up all the listeners & faders in advance, at edit-time, and park them somewhere non-3D, e.g. ReplicatedStorage, so that they don’t actually hear the world yet. Then, the only scripting needed would be re-parenting them to the camera at runtime – e.g.

local listener = script.Parent -- assumes the script is parented to the listener
listener.Parent = workspace.CurrentCamera

Once you set one of these up, you might even be able to treat it as a template and just copy + paste it several times to speed up your workflow.


Thanks for the reply! I decided on more or less the same idea as yours, making a template of all the instances I needed for the Camera and then parenting that to the Camera at runtime; honestly it isn’t too big of a deal, just a slight annoyance I had for a little bit.

Just some stuff about Cameras.

Yeah, I assumed that it wouldn’t replicate precisely because changing the way it currently works could cause problems for other developers. Though, I wonder if a new instance could be added under StarterPlayer, called something like StarterCamera, similar to how we can place a StarterCharacter model under StarterPlayer.

I think this would be fine for backwards compatibility: if the StarterCamera is the same as the default Camera we know now, then it should be functionally identical when left unchanged by the developer. Though I’m sure there are probably some implementation details here that I’m overlooking.

If you’ve got any more time to spare: I do wonder if there is a memory implication of this workflow compared to the previous Sounds and SoundGroups, since there are more instances? It’s said that they’re more or less similar under the hood, so I imagine there probably isn’t much difference, if any. I haven’t been able to see any difference so far, but if there is one it could matter somewhat on lower-end devices, which is why I’m curious. :slightly_smiling_face:

Why do AudioPlayers not have a preview in properties like Sounds do? It makes things more tedious, having to start the game every time.

It also seems like pasting an asset id into AssetId does not automatically change it into the rbxassetid://id format.

I wonder if a new instance could be added under StarterPlayer, called something like StarterCamera, similar to how we can place a StarterCharacter model under StarterPlayer.

That’s a neat idea; we’ll discuss

I do wonder if there is a memory implication of this workflow compared to the previous Sounds and SoundGroups, since there are more instances?

Each instance does come with a little bit of memory overhead, but it’s on the order of a hundred bytes; it can be higher if the instance has a lot of properties, but compared to the memory used by the audio engine internals, this is not really significant.

That’s a benefit of splitting the audio engine into many small pieces – since the old Sound instance did playback/emission/listening/output all at once, you were paying for all of those pieces every time you cloned one.

Why do AudioPlayers not have a preview in properties like Sounds do? It makes things more tedious, having to start the game every time.

@BackspaceRGB instead of making a preview widget for AudioPlayer, we made AudioPlayer fully playable in edit-mode – you can tick IsPlaying to true, and hear the AudioPlayer in the editor, exactly as it would be heard in-game.

The Sound preview widget is pretty dated code; it heavily assumes that it’s working with a Sound (we tried it with AudioPlayer and it crashed), and it completely re-implements portions of the audio engine to work around the fact that Sounds don’t play in edit-mode :shock: – so we wanted to reuse as much foundation as possible, without furthering that tech debt.

That said, we do want to spruce up the properties widget a bit here, so that it’s easier to use than editing numbers/bools.

It also seems like pasting an asset id into AssetId does not automatically change it into the rbxassetid://id format.

This is my bad; we accidentally made AssetId a string instead of a Content. There’s a change currently in review to deprecate the AssetId : string property in favor of Asset : Content – that fixes the autocomplete shenanigans.
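In the meantime, a hedged workaround sketch: assign the full URI form yourself (the id below is a placeholder):

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://123456789" -- placeholder id; bare numbers aren't auto-converted yet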


Any chance of the Preview sound option being implemented for AudioPlayer instances? Not being able to preview sounds in Studio is a significant downgrade from the old sound system imo. Never mind, I missed this in your reply.

Also, any update on volumetric sound support for AudioEmitters? I know this has been addressed in this thread with the solution of placing multiple emitters within attachments in a part, but I find that extremely tedious (and also restricting due to the 300 AudioEmitter limit; for reference, my use case is volumetric rain sounds, and there can easily be 30+ rain parts active in workspace at a time). I’d much prefer just having AudioEmitter.IsVolumetric to perfectly recreate the volumetric audio effect available with Sound instances.

Another feature missing from this new audio system is support for SoundService.AmbientReverb; will this ever be supported?


Also, any update on volumetric sound support for AudioEmitters?

We don’t have firm plans at this time; some aspects of volumetric emission are made tricky by the presence of multiple listeners & angle-attenuation.

I know this has been addressed in this thread with the solution of placing multiple emitters within attachments in a part, but I find that extremely tedious (and also restricting due to the 300 AudioEmitter limit)

There is no limit on the number of AudioEmitters that can be spatializing audio at once, aside from maybe memory usage or CPU time if you get really wild :sweat_smile:

There’s a limit of 400 concurrent audio files playing at once, mainly to safeguard I/O-constrained devices – but, e.g., playing one AudioPlayer that’s wired to many, many AudioEmitters should be fine; the AudioPlayer is part of that budget, not the emitters.

We mentioned the multi-emitter approach to volumetric emission since it’s portable to all part shapes, and it will continue to be supported for the foreseeable future – but I hear you that it’s pretty tedious.
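For anyone using that workaround in the meantime, a hedged sketch: one AudioPlayer wired to several AudioEmitters spread across a part via attachments (the part name, offsets, and asset id are illustrative):

local part = workspace.RainPart -- hypothetical part

-- One AudioPlayer counts against the 400-file budget; the emitters do not
local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://123456789" -- placeholder rain sound
player.Looping = true
player.Parent = part

for _, offset in { Vector3.new(-2, 0, 0), Vector3.zero, Vector3.new(2, 0, 0) } do
	local attachment = Instance.new("Attachment")
	attachment.Position = offset
	attachment.Parent = part

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = attachment

	local wire = Instance.new("Wire")
	wire.SourceInstance = player
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

player:Play()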

Another feature missing from this new audio system is support for SoundService.AmbientReverb; will this ever be supported?

SoundService.AmbientReverb has been pretty buggy even with Sounds, so we are not supporting it in the new API. You can achieve the same results with greater customization by putting an AudioReverb before the AudioDeviceOutput, and we are looking at options to make editing an AudioReverb way easier.
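A minimal sketch of that replacement wiring, assuming an AudioListener has already been set up under the camera:

local SoundService = game:GetService("SoundService")
local listener = workspace.CurrentCamera:FindFirstChildOfClass("AudioListener") -- assumed to exist

local reverb = Instance.new("AudioReverb")
local output = Instance.new("AudioDeviceOutput")

-- listener -> reverb -> device output
local toReverb = Instance.new("Wire")
toReverb.SourceInstance = listener
toReverb.TargetInstance = reverb
toReverb.Parent = reverb

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = reverb
toOutput.TargetInstance = output
toOutput.Parent = output

reverb.Parent = SoundService
output.Parent = SoundService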


My old sound effect system works like this:

  • Clone a sound from a SoundGroup in SoundService
  • Parent it to a part in workspace
  • Set the SoundGroup property on the sound to the SoundGroup that it was cloned from
  • Play sound

Is there a way to mimic this with the new audio instances that wouldn’t also require cloning an AudioFader for every AudioPlayer?

This is no problem for 2d sounds, but for sounds played in 3d space, it seems like you have to clone all the effects too.

It should be possible to create a very similar workflow – when you :Clone an instance, all of its children/descendants are also copied. So, if you set up a couple of listeners that have unique AudioInteractionGroups:
[Screenshot: two AudioListeners, each with a unique AudioInteractionGroup]
these would be the equivalent of SoundGroups for 3D audio in the new API – these listeners can wire into further effects after they’ve heard stuff from the world.

You can have an AudioPlayer + AudioEmitter combo like so:
[Screenshot: an AudioEmitter named “Heard by A” with a child AudioPlayer and Wire]
that only gets heard by, e.g., Listener A (by setting its interaction group to match).

Whenever you clone Heard by A, it would also clone the AudioPlayer & Wire, since those are descendants – so, as sketched below, you would just need to:
• parent the newly-cloned emitter to a part
• play the child AudioPlayer
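A hedged sketch of those two steps, assuming the Heard by A template (with its child AudioPlayer & Wire) is kept in ReplicatedStorage:

local ReplicatedStorage = game:GetService("ReplicatedStorage")
local template = ReplicatedStorage:WaitForChild("Heard by A") -- hypothetical template emitter

local function playSoundAt(part: BasePart)
	-- :Clone() copies the child AudioPlayer & Wire along with the emitter
	local emitter = template:Clone()
	emitter.Parent = part
	emitter.AudioPlayer:Play() -- assumes the child AudioPlayer keeps its default name
end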
