Adding onto @DaDude_89’s suggestion, I do think some kind of splitter for the left/right stereo channels would be nice, so you don’t have to juggle a bunch of AudioEmitters/Listeners to get a sound that only plays in the left channel.
This seems amazing, but admittedly I’m out of the loop. Should I replace all my Sound instances with AudioPlayers? I’m a little confused about how to work with it, because of Wires. Could somebody give me a basic idea of where I should use this, and where I should use Sounds? Thanks!
The audio API shines when you need to build complex setups where you route audio through various stages. Some examples I can think of:
- A security camera with a working microphone that can hear everything in a room
- A walkie talkie that works with voice chat
Sounds, however, are much simpler to use, and for most use cases they’ll be all you need – sound effects on tools, for example, or just background music.
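For the walkie-talkie case, a minimal sketch of the idea could look like this – attachMicToPart is a name I made up, and the wiring assumes voice chat is enabled:

```lua
-- Minimal sketch: route a player's microphone (AudioDeviceInput) to an
-- AudioEmitter on a handheld part, so their voice plays from the prop.
local function attachMicToPart(player: Player, part: BasePart)
	-- Captures this player's voice-chat input
	local micInput = Instance.new("AudioDeviceInput")
	micInput.Player = player
	micInput.Parent = part

	-- Emits whatever is wired into it from the part's position
	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = part

	-- Wires carry the stream from the microphone to the emitter
	local wire = Instance.new("Wire")
	wire.SourceInstance = micInput
	wire.TargetInstance = emitter
	wire.Parent = emitter
end
```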
This is amazing for custom zone audio, like changing the tone of the music underwater or in a jungle.
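A rough sketch of what that underwater muffling could look like with the new instances (the asset id is a placeholder, and setUnderwater is a hypothetical hook for your own zone detection):

```lua
-- Background music routed through an AudioFilter to the device output;
-- closing the low-pass filter muffles the music while underwater.
local SoundService = game:GetService("SoundService")

local music = Instance.new("AudioPlayer")
music.AssetId = "rbxassetid://0" -- placeholder asset id
music.Looping = true
music.Parent = SoundService

local filter = Instance.new("AudioFilter")
filter.FilterType = Enum.AudioFilterType.Lowpass12dB
filter.Frequency = 20000 -- fully open, above the audible range
filter.Parent = SoundService

local output = Instance.new("AudioDeviceOutput")
output.Parent = SoundService

local toFilter = Instance.new("Wire")
toFilter.SourceInstance = music
toFilter.TargetInstance = filter
toFilter.Parent = filter

local toOutput = Instance.new("Wire")
toOutput.SourceInstance = filter
toOutput.TargetInstance = output
toOutput.Parent = output

music:Play()

-- Hypothetical zone callback: muffle when underwater, restore on exit
local function setUnderwater(isUnderwater: boolean)
	filter.Frequency = if isUnderwater then 800 else 20000
end
```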
The new audio API has been an insane boon for sound-driven experiences. I’m a huge fan of decoupling all of these components from that one single Sound instance, such as being able to play one sound from multiple emitters instead of constantly duplicating the sound.
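A minimal sketch of that one-source, many-emitters pattern (speakerParts is a hypothetical list of Parts defined elsewhere):

```lua
-- One shared AudioPlayer wired to an emitter on each speaker part,
-- instead of cloning a Sound per part.
local SoundService = game:GetService("SoundService")

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id
player.Parent = SoundService

for _, part in speakerParts do
	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = part

	local wire = Instance.new("Wire")
	wire.SourceInstance = player -- the same source feeds every emitter
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

player:Play()
```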
It’s really nice to see that the new Audio API is finally out of beta! I’ll definitely learn how it works and might use it for some of my games. However, there’s an issue I’m facing that I’m unsure anyone else is seeing, or whether it’s even a bug.
None of the objects from the Audio API are showing up in the object list for some reason; they only seem to appear when the Audio API beta toggle is enabled. Will this be fixed once the Audio API checkbox is removed from the Studio beta list?
Hey @int3rv4l
Indeed, you’ve caught us! What you’re experiencing is why we “shoehorned” in the comment about removing the checkbox option from the Beta Features. Trust us, the issue is only cosmetic, and you should see it change very soon as the Studio updates roll through!
I know this isn’t too relevant, but if anyone happens to know the soundtrack in the demo videos please let me know.
@IdontPlayz343 in addition to what @DaDude_89 mentioned, I’d say the main scenarios where the new API is required involve branching signal flow.
In the Sound/SoundEffect/SoundGroup API:
- you can assign Sound.SoundGroup to one SoundGroup – picking a different one re-routes the sound altogether
- you can parent a SoundGroup to one other SoundGroup – picking a different one re-routes the group
- SoundEffects can be parented to Sounds or SoundGroups; if there are multiple SoundEffect children, their SoundEffect.Priority property determines the order they are applied in sequence (one after another)
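In code, that legacy routing looks roughly like this (asset id is a placeholder):

```lua
-- Legacy routing: one SoundGroup per Sound, effects as children ordered
-- by SoundEffect.Priority.
local SoundService = game:GetService("SoundService")

local musicGroup = Instance.new("SoundGroup")
musicGroup.Parent = SoundService

local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0" -- placeholder asset id
sound.SoundGroup = musicGroup -- reassigning re-routes the sound
sound.Parent = workspace

local reverb = Instance.new("ReverbSoundEffect")
reverb.Priority = 0 -- Priority determines where in the chain this sits
reverb.Parent = sound

local distortion = Instance.new("DistortionSoundEffect")
distortion.Priority = 1 -- applied at a different point in the chain
distortion.Parent = sound
```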
In the new API, all connections use Wires instead of parent/child relationships or reference properties – this supports both many-to-one (like SoundGroups) and one-to-many connections – so it also means that effects don’t necessarily need to be applied in sequence; you could set up something like
             - AudioReverb -
            /               \
AudioPlayer -                - AudioFader
            \               /
             - AudioFilter -
(sorry about the ascii art) to apply effects in parallel to one another.
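A rough sketch of that graph in code (the connect helper is my own naming):

```lua
-- Build the parallel graph above: AudioPlayer feeds AudioReverb and
-- AudioFilter side by side; both branches merge into one AudioFader.
local SoundService = game:GetService("SoundService")

local function connect(source: Instance, target: Instance)
	local wire = Instance.new("Wire")
	wire.SourceInstance = source
	wire.TargetInstance = target
	wire.Parent = target
end

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id
player.Parent = SoundService

local reverb = Instance.new("AudioReverb")
reverb.Parent = SoundService
local filter = Instance.new("AudioFilter")
filter.Parent = SoundService
local fader = Instance.new("AudioFader")
fader.Parent = SoundService

connect(player, reverb) -- one-to-many: two branches from one source
connect(player, filter)
connect(reverb, fader) -- many-to-one: both branches into the fader
connect(filter, fader)
```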
Additionally, the new API supports microphone input via AudioDeviceInput, so you can use it to control voice chat in all the same ways as audio files!
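For example, a hedged sketch of effecting voice the same way as files – this assumes the default setup where each player’s AudioDeviceInput is parented to their Player object:

```lua
-- Insert an AudioPitchShifter after a player's microphone; whatever the
-- mic was feeding should instead be wired from the shifter's output.
local function pitchShiftVoice(player: Player, pitch: number)
	local micInput = player:FindFirstChildOfClass("AudioDeviceInput")
	if not micInput then
		return -- voice chat not active for this player
	end

	local shifter = Instance.new("AudioPitchShifter")
	shifter.Pitch = pitch -- e.g. 0.5 is roughly an octave down
	shifter.Parent = micInput

	local wire = Instance.new("Wire")
	wire.SourceInstance = micInput
	wire.TargetInstance = shifter
	wire.Parent = shifter
end
```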
@iihelloboy this was made by @YPT300
:0
Will occlusion and whatnot be added to the Sound/SoundGroup instances, or will they be exclusive to this new API?
Whoops! Will get that fixed. Thanks for notifying
Sound design was always something ROBLOX was really lacking in… this will surely help with that problem.
I’m glad that this is finally out of beta; I enjoyed seeing the new features as they were being added. I hope we eventually get more of these extremely customizable, in-depth APIs.
On a side note, is anyone else unable to see the last 2 demo videos?
Can someone explain the difference between “AudioFilter” and the normal EQ?
Can sound/voicechat memory get added to Stats | Documentation - Roblox Creator Hub, so that it can be queried through GetMemoryUsageMbForTag via a new Enum.DeveloperMemoryTag entry?
I know you can get Sounds as a whole, but to my knowledge that would include other sound types, like sound effects, as well.
For my game, for example, I’d like to know how much voice chat lag is present, since sound/voicechat memory can have a huge impact, and I’d like to be able to share that “VC Lag” on screen with players so they know why they might be lagging.
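For reference, the aggregate tag is all that can be queried today, which is exactly the limitation described above:

```lua
-- Reads the combined sound memory; there is currently no separate
-- sound/voicechat tag, which is what this post is asking for.
local Stats = game:GetService("Stats")

local soundsMb = Stats:GetMemoryUsageMbForTag(Enum.DeveloperMemoryTag.Sounds)
print(("Sound memory (voice chat included): %.1f MB"):format(soundsMb))
```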
AudioFilter is just one band and has different options. Take a peek at the videos, or better yet, play with it in Studio.
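To illustrate the contrast (values are arbitrary): EqualizerSoundEffect is a fixed three-band EQ, while AudioFilter is a single band whose shape and frequency you pick:

```lua
-- Legacy EQ: three fixed bands, gain only
local eq = Instance.new("EqualizerSoundEffect")
eq.LowGain = -10 -- dB
eq.MidGain = 0
eq.HighGain = 5

-- New AudioFilter: one band, but its type, frequency, and Q are tunable
local filter = Instance.new("AudioFilter")
filter.FilterType = Enum.AudioFilterType.HighShelf
filter.Frequency = 2000 -- Hz
filter.Gain = 5 -- dB (applies to shelf/peak types)
```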
my producer brain is going CRAZY
Anything that makes sound design more dynamic is a win for me, especially when current methods don’t have to be replaced with new ones (I always dislike when that happens).
I might have missed something though: are directional sounds now possible without a ton of obnoxious local scripting? Like a sound emitter/instance that plays sound aimed in a specific direction, the way a speaker does (thinking of having one audio for the front of a jet engine and one for the rear, or a trumpet/horn blasting a certain way). I know it has been suggested before, but I have honestly lost track of most of this update and am unsure if it got implemented.
Not sure if this is an Audio API bug or something else, but sometimes on initial join everyone is muted (for you) by default, even when they’re talking, and the only way to fix it is to press Esc, then manually Mute and Unmute everyone.
The Green and Red bars indicate them talking, and the video showcases how I need to Mute and Unmute to hear voices.