New Audio API [Beta]: Elevate Sound and Voice in Your Experiences

I've actually used this, and it's apparently not what I had in mind.
I was thinking it could be used to play the same sound from every car of a multi-car train whose cars are connected together. (Copying the sound to each car would make it hard to change later.)
However, I realized that what I wanted to do was practically impossible, because an AudioEmitter can only send one sound source.
Personally, I would have liked a mechanism where, when a part containing the original sound source is connected to other parts via a Wire, the sound plays back exactly the same way on those parts (so that even if the pitch is changed from a script, the change is always duplicated).

2 Likes

@ReallyLongArms Hello!
I’ve been running into an interesting dilemma recently with PlaybackRegions, and I was wondering if I could get some help on it.
We’re running into an issue where two files played one after another using PlaybackRegions will sometimes not load properly if there is a decent gap between them.

Are there any specific values or timeframes between when a playback-region sound is requested and when it’s culled? The window seems really short, and it can cause sounds to choke up during gameplay.
Is there a culling difference between PlaybackRegions and single playable files?
I remember that shorter files are loaded directly from memory.
When using playback regions on a single file with multiple variations in it, does caching behave any differently?

2 Likes

If this is culling of playback-region audio, it seems overly aggressive at the moment.

2 Likes

Hey @panzerv1 – do you have a video or screen recording of the phenomenon you’re observing? We can take a look

@NTL331 can you clarify what you mean by this?

However, I realized that what I wanted to do was practically impossible, because an AudioEmitter can only send one sound source.

It should be possible to wire multiple AudioPlayers to one AudioEmitter to emit all of them, or to wire one AudioPlayer to multiple AudioEmitters to emit it from multiple locations. For example, if you had 3 audio files A, B, and C, and wanted to emit them from 3 different 3d locations, you could do something like this:
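
A minimal sketch of that routing (the part references and asset IDs below are placeholders): each AudioPlayer is wired into an AudioEmitter parented to a different part, so each file plays from its own 3d position.

```lua
local assetIds = { "rbxassetid://1", "rbxassetid://2", "rbxassetid://3" } -- placeholder ids for A, B, C
local parts = { workspace.PartA, workspace.PartB, workspace.PartC } -- placeholder parts

for i, assetId in ipairs(assetIds) do
	-- one AudioPlayer per file; it handles decoding/playback
	local player = Instance.new("AudioPlayer")
	player.AssetId = assetId
	player.Parent = workspace

	-- an AudioEmitter parented to a Part emits from that part's position
	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = parts[i]

	-- a Wire routes the player's output into the emitter's input
	local wire = Instance.new("Wire")
	wire.SourceInstance = player
	wire.TargetInstance = emitter
	wire.Parent = emitter

	player:Play()
end
```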

3 Likes

I sent a private message regarding the question.

3 Likes

What does GetConnectedWires do? And what is the pin argument in GetConnectedWires?

There’s no documentation for this; can someone help?

1 Like

So what happened to Sounds?

Hey @VeryLiquidCold, adding a description to this now – sorry for the inconvenience

:GetConnectedWires returns an array of Wires that are connected to a specific pin – most of the audio APIs have one "Input" and/or one "Output" pin; AudioCompressor has an additional "Sidechain" pin.

In the future we expect to add things that have way more pins, so we made this a string for forward compatibility.
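
To illustrate, a minimal sketch of inspecting a pin (the workspace.Compressor reference is a placeholder for an existing AudioCompressor):

```lua
local compressor = workspace.Compressor

-- list the Wires feeding the compressor's "Sidechain" pin
for _, wire in ipairs(compressor:GetConnectedWires("Sidechain")) do
	print("Sidechain fed by:", wire.SourceInstance)
end
```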

So what happened to Sounds?

@Micamaster100 they still exist, and if they satisfy your use cases you can feel free to keep using them – but Sound is really doing multiple jobs under the hood (AudioPlayer + AudioDeviceOutput – and if it’s parented to a Part or Attachment, it also behaves as if there were an AudioEmitter and AudioListener).
The new API aims to let you mix & match for greater control.
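
As an illustration of that split, here is a minimal sketch of the non-spatial half of what a Sound does (the asset ID is a placeholder): an AudioPlayer wired directly into an AudioDeviceOutput.

```lua
-- AudioPlayer decodes/plays the asset; AudioDeviceOutput sends it to the local device
local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder id
player.Parent = workspace

local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace

-- a Wire routes the player's output into the device output
local wire = Instance.new("Wire")
wire.SourceInstance = player
wire.TargetInstance = output
wire.Parent = output

player:Play()
```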

Hiya @ReallyLongArms,
Assuming you are still dealing with issues regarding the audio system, I have a slight issue.

So I’m using the audio system to extend the usage of voice chat, but the dilemma I’m running into is that not all players can hear each other, even though the wires are there, everything is unmuted, and the volume is audible. The UserId access list is used to control who can hear whom, rather than destroying and recreating wires.

Here is a diagram of how it all functions:
[diagram omitted; the components are listed in the key below]
Key
Input - AudioDeviceInput
Analyser - AudioAnalyzer
Output - AudioDeviceOutput
Fader - AudioFader (for volume control)

These are all created on the server and stored in ReplicatedStorage; they were previously stored (though not created) under the Player, but the issue still persisted.
Input and Output have their .Player values set to their corresponding player, and neither is muted.
Input access lists include the UserIds of all the users surrounding them (see the sketch after this list).
Fader is set to a volume of 3.
Analyser is only used for a cosmetic part of a UI, to show when a user is speaking (since the overhead icon can be too far away to see).
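
For reference, a minimal sketch of the access-list approach described above; `input` (a player's AudioDeviceInput) and `nearbyUserIds` are placeholders for values computed elsewhere:

```lua
-- allow-mode: only the listed users' clients receive this input
input.AccessType = Enum.AccessModifierType.Allow
input:SetUserIdAccessList(nearbyUserIds)
```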

We observed some issues between players whose experience-control UIs were different, but that was fixed by disabling and re-enabling the microphone; this is no longer relevant now that everyone is on the new UI.

I have logs of all visible properties of all the audio-related elements, and I have observed no differences between them.

An example of this would be:
(All players are in each other’s access lists and have valid connections)
Player1 can hear Player2 and Player3
Player2 can only hear Player1 - they should also be able to hear Player3, but cannot
Player3 can hear Player1 and Player2

I can provide a place file in a private message on request, as well as any more explanation.

1 Like

Hey @JavascriptLibrary, would you mind sending me an rbxl of your setup? I can check it out to see if I spot anything – that diagram looks like it ought to work

2 Likes

Can do, you can find the message here.

Also bumping this: volumetric audio support would be extremely useful, and it’s a bit of a letdown that it isn’t implemented already.

2 Likes

Looking over some of the return types, I’m curious why GetConnectedWires isn’t guaranteed to return Wires rather than Instances. Is there a specific reason for this?

1 Like

This was probably an oversight; we can refine the type annotation

2 Likes

I’ve been using the new API for a fairly complex project and it’s working great.

@ReallyLongArms Just a small request: is there any way we could have an API that lets us keep multiple sounds in sync more precisely? I’d like to layer multiple music stems over each other that can be dynamically controlled based on gameplay, and the current system makes it impossible to get that quite right (there are other threads complaining about this same issue with the old system going back some years).

My current approach is to keep track of my own time, check whether the stems have deviated from where I expect them to be by more than some threshold, and then re-sync them (sketched below). Although this stops significant drift, it doesn’t really solve the problem. I believe a fundamental issue is that TimePosition is only updated once per frame, and that setting it is probably also only applied on the next frame. This means my attempts to keep sounds in sync will always fail, because I can’t know how long the current frame is going to be.
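
A minimal sketch of that workaround, assuming `stems` is an array of AudioPlayers that were started together and a 50 ms threshold (both placeholders):

```lua
local RunService = game:GetService("RunService")
local THRESHOLD = 0.05 -- seconds of allowed drift

local startClock = os.clock()
RunService.Heartbeat:Connect(function()
	local expected = os.clock() - startClock
	for _, stem in ipairs(stems) do
		-- TimePosition only refreshes once per frame, so this check is coarse
		if math.abs(stem.TimePosition - expected) > THRESHOLD then
			stem.TimePosition = expected -- re-sync; takes effect next frame at best
		end
	end
end)
```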

Solutions could be either functions to get and set the current time that are applied immediately, or perhaps some way to schedule a sound to start at a precise moment; Unity, for example, has AudioSource.PlayScheduled.

3 Likes

Hey @Ed_Win5000, you’re totally right – synchronization & sequencing are really challenging on the platform today, because most of our engine systems can only make property changes or call methods during certain time-windows of each frame.

We’ve thought about several approaches, and part of what makes this so tricky is that a satisfying solution needs to cover not just Play/Stop, but also any property whose effects can be heard/observed more often than the framerate (e.g. Volume, PlaybackSpeed, Pitch, DelayTime, WetLevel, TimePosition).

I can’t promise anything soon, but we’re very aware of the pain points here

4 Likes

Any reason why AudioAnalyzers are limited to a 512-point FFT? It’s a bit tricky to make an accurate audio visualiser, since the low end becomes a lot more prominent when each point spans 46 Hz. Is it a performance limitation? With optimised enough code you can easily hook an AudioAnalyzer up to a visualiser, throw some extra mumbo jumbo on top of that, and still render in <1.5 ms.
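
For context, a minimal sketch of the visualiser loop in question, assuming `analyzer` is an AudioAnalyzer already wired to an audio source:

```lua
local RunService = game:GetService("RunService")

RunService.RenderStepped:Connect(function()
	local spectrum = analyzer:GetSpectrum() -- up to 512 points, ~46 Hz apart
	local peak = 0
	for _, magnitude in ipairs(spectrum) do
		peak = math.max(peak, magnitude)
	end
	-- drive visualiser bars from `spectrum`, or a level meter from `peak`;
	-- at this resolution the entire bass range lands in the first few bins
end)
```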

2 Likes

I get an error when connecting the Microphone to the Listener.
I’m unsure if this is a bug or if I’m doing something wrong.
Relevant post: (solved, but Analyzer:GetSpectrum() is always empty)


Hey @deseriaIiser, we’ve discussed adding an AudioAnalyzer.WindowSize property before and might still do that; it’s not a performance issue

1 Like

Awesome, hoping the plan goes through.