New Audio API [Beta]: Elevate Sound and Voice in Your Experiences

Hey @JavascriptLibrary, would you mind sending me a rbxl of your setup? I can check it out to see if I spot anything – that diagram looks like it ought to work

2 Likes

Can do, you can find the message here.

Also bumping this; volumetric audio support would be extremely useful – it's a bit of a letdown that it isn't implemented already.

2 Likes

Looking over some of the return types, I'm curious why GetConnectedWires isn't guaranteed to return Wires rather than Instances. Is there a specific reason for this?
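In the meantime, a thin wrapper can narrow the type at the call site. This is just a sketch – `getWires` is a hypothetical helper name, and I'm assuming the method currently returns `{Instance}` (using AudioFader as an example node class):

```lua
-- Hypothetical helper: narrows GetConnectedWires' {Instance} return to {Wire}.
-- Assumes every connected instance on an audio pin really is a Wire.
local function getWires(node: AudioFader, pin: string): {Wire}
	return node:GetConnectedWires(pin) :: {Wire}
end
```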

1 Like

This was probably an oversight; we can refine the type annotation

2 Likes

I’ve been using the new API for a fairly complex project and it’s working great.

@ReallyLongArms Just a small request: is there any way we could get an API that lets us keep multiple sounds in sync more precisely? I'd like to layer multiple music stems over each other, dynamically controlled based on gameplay, and the current system makes it impossible to get this quite right (there are other threads complaining about the same issue with the old system going back some years).

My current approach is to keep track of my own time and check whether the stems have deviated from where I expect them to be by more than some threshold, re-syncing them if so. This prevents significant drift, but it doesn't really solve the problem. I believe a fundamental issue is that TimePosition is only updated once per frame, and that setting it is likewise only applied on the next frame. This means my attempts to keep sounds in sync will always fail, since I can't know how long the current frame is going to be.
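For reference, the drift-correction workaround described above looks roughly like this (a sketch, not my exact code; `stems` is assumed to be an array of AudioPlayer instances that were started together):

```lua
-- Sketch of the threshold-based re-sync workaround.
-- Assumes `stems` is an array of AudioPlayer instances started at the same time.
local RunService = game:GetService("RunService")

local THRESHOLD = 0.03 -- seconds of drift tolerated before re-syncing
local clock = 0 -- our own authoritative timeline

RunService.Heartbeat:Connect(function(dt)
	clock += dt
	for _, stem in stems do
		if stem.IsPlaying and math.abs(stem.TimePosition - clock) > THRESHOLD then
			-- This assignment only takes effect next frame, which is why
			-- the stems can never be kept perfectly sample-aligned.
			stem.TimePosition = clock
		end
	end
end)
```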

A solution could be either get/set functions for the current time that take effect immediately, or perhaps some way to schedule a sound to start at a precise moment. Unity, for example, has AudioSource.PlayScheduled.

3 Likes

Hey @Ed_Win5000, you’re totally right – synchronization & sequencing are really challenging on the platform today, because most of our engine systems can only make property changes or call methods during certain time windows of each frame.

We’ve thought about several approaches, and part of what makes this so tricky is that a satisfying solution needs to cover not just Play/Stop, but also any property whose effects can be heard/observed more often than the framerate (e.g. Volume, PlaybackSpeed, Pitch, DelayTime, WetLevel, TimePosition).

I can’t promise anything soon, but we’re very aware of the pain points here

4 Likes

Any reason why AudioAnalyzers are limited to a 512-point FFT? It's a bit tricky to make an accurate audio visualiser, since the low end is much more prominent when each point spans ~46 Hz. Is it a performance limitation? With optimised enough code you can easily hook an AudioAnalyzer up to a visualiser, throw some extra mumbo jumbo on top of that, and still render in <1.5 ms.
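One common way to work around the coarse low end is to rebin the spectrum into logarithmic bands. A hedged sketch, assuming `GetSpectrum()` returns an array of magnitudes and each bin spans roughly 46 Hz as described above (the exact bin width depends on the sample rate, which I'm assuming is 48 kHz):

```lua
-- Sketch: collapse AudioAnalyzer:GetSpectrum() output into log-spaced bands
-- so the low end doesn't dominate a visualiser.
local BIN_WIDTH = 46.875 -- Hz; assumption: 48 kHz sample rate over 1024 points

local function logBands(spectrum: {number}, bandCount: number): {number}
	local bands = table.create(bandCount, 0)
	local maxFreq = #spectrum * BIN_WIDTH
	for i, magnitude in spectrum do
		local freq = (i - 0.5) * BIN_WIDTH -- centre frequency of this bin
		-- map the frequency to a logarithmic band index
		local t = math.log(freq / BIN_WIDTH) / math.log(maxFreq / BIN_WIDTH)
		local band = math.clamp(math.floor(t * bandCount) + 1, 1, bandCount)
		bands[band] = math.max(bands[band], magnitude) -- peak per band
	end
	return bands
end
```

A finer FFT (or the WindowSize property mentioned below in the thread) would still be needed for real resolution below ~100 Hz; rebinning only changes how the existing bins are displayed.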

2 Likes

I get an error when connecting the Microphone to the Listener.
I’m unsure if this is a bug or if I’m doing something wrong.
Relevant post: (solved, but Analyzer:GetSpectrum() is always empty)

(screenshots attached)

Hey @deseriaIiser, we’ve discussed adding an AudioAnalyzer.WindowSize property before and might still do that; it’s not a performance issue.