Thanks for all the technical info, that solves a long-standing issue I’ve had with floors leaking sound into each other, since the sound objects radiate in a sphere.
that would lack some accuracy and be more complex for meshparts… not even considering the performance implications if you’re using this for a lot of objects
would love to see a built-in method to do this!
that would lack some accuracy and be more complex for meshparts
There is a BasePart:GetClosestPointOnSurface method, which can be used to implement the getRandomPointInPart function pretty elegantly; and that works for all part shapes!
To date, volumetric sound has only been implemented for simple shapes like Blocks, Balls, and Cylinders – if we extended it to meshes, we’d probably end up doing something similar under the hood
This does encounter sampling error, but AudioEmitters themselves are pretty cheap, so you can use a lot
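For reference, here’s a minimal sketch of what that could look like – this is just an illustration rather than an official implementation, and it assumes the part is roughly convex so that blending back toward the center stays inside the volume:

local rng = Random.new()

local function getRandomPointInPart(part: BasePart): Vector3
	-- pick a uniformly random point inside the part's bounding box, in local space
	local halfSize = part.Size / 2
	local localSample = Vector3.new(
		rng:NextNumber(-halfSize.X, halfSize.X),
		rng:NextNumber(-halfSize.Y, halfSize.Y),
		rng:NextNumber(-halfSize.Z, halfSize.Z)
	)
	local worldSample = part.CFrame:PointToWorldSpace(localSample)

	-- snap the sample onto the part's surface; this works for any part shape
	local surfacePoint = part:GetClosestPointOnSurface(worldSample)

	-- blend between the part's center and the surface point so the result lands inside
	-- (exact for convex shapes; concave meshes may need extra checks)
	return part.Position:Lerp(surfacePoint, rng:NextNumber())
end

The distribution isn’t perfectly uniform, but for scattering a handful of AudioEmitters around a part that’s usually good enough.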
Cool update Roblox, thanks for releasing this!
So yeah it is.
I think it would be nice if we could define behavior through surfaces in a more physical manner – there are many potential use-cases for this. I recall seeing a hack-week project about this a while back, and I wonder whether that was true acoustic attenuation/spatial audio or something more similar to the AngleAttenuation we have now.
Additionally, acoustic attenuation would be nice because different materials handle sound differently in reality, and giving developers the ability to define how different materials/parts attenuate sound would make exactly what you’re describing easily possible – for example, by defining a sound-blocker part that lets absolutely no sound through. (You could build a system that gets close to this with some code in the current system, but I have my doubts about the performance of a script implementation versus having it natively in the engine.)
Hey @YuNoGuy123, that hackweek project was implementing occlusion & diffraction, where sounds that are behind walls get muffled or bend around corners. We’re actively working on bringing that to production (announced here)
Hopefully that will make some of these scenarios much more automatic – directional attenuation can still be used on top of that to constrain where things will/won’t be audible
Oh wow! I can’t believe I hadn’t seen that, that’s really cool!
Thanks for bringing this to my attention, I’ll be eagerly awaiting! :3c
Hello! I was attempting to use GetWaveformAsync and play the audio right after it finishes. For some reason it stops the audio, and I’m not sure why. It does not happen 100% of the time, but it only happens when I use GetWaveformAsync.
Here is the code
local WaveForm = soundInstance:GetWaveformAsync(NumberRange.new(0, soundInstance.TimeLength), 100);
....
soundInstance:Play()
Thank you so much – I think I need to look further into the audio API. I appreciate the help, and thank you for these amazing updates!
Hey @The_Pr0fessor, I tried something similar on my end and didn’t encounter any stoppage – are you able to share a small rbxl that reproduces the problem?
I’m still confused about the new audio instances. Should we always use them instead of Sounds? Or only when it’s needed/more convenient?
This is all amazing stuff. But I wonder if it would be possible to port this feature (or some of the other effects currently only available through the new API) into the standard “Sound” instance, since it is kind of a pain to have to create numerous instances inside a part just to play a single sound, whereas the original “Sound” instance only needs itself, with any effects simply parented to it.
So as not to break existing implementations, the new attenuation properties could be added so that, by default, the current simple min/max roll-off distance and roll-off mode values are used. But if you then enabled a checkbox called something like “AdvancedSoundRollOff”, the original properties would be disabled and replaced with the new directional and distance curve editors.
It would be nice if we had dynamic reverberation built into the engine, similar to how Half-Life 2 computes the volume of a space and assigns a DSP preset to the audio.
I’ve noticed that waveforms work better on licensed audio from the catalog, unlike user-uploaded audio, which just stutters.
Audio 1: https://create.roblox.com/store/asset/15689439895
Audio 2: https://create.roblox.com/store/asset/15346731385
Both audio assets are the same; however, Audio 1 is a licensed version of the song from the catalog.
If needed, I’ll attach the asset here:
player.rbxm (11.6 KB)
Hey, why is DATA (line 3) an empty table? I have tried various numbers for samples – initially 20x the song length, then I tried reducing the number-range maximum to just lower than the song length. I don’t think the Wire and AudioDeviceOutput have anything to do with it either. Every adjustment still returns an empty table. If there’s nothing wrong, then what should I change to make it work?
Here’s the model that includes everything related to the issue:
model.rbxm (9.7 KB)
Hey @S4ndieYT, this line is sampling the waveform at 3 places between 0 & 153.599
script.AudioPlayer:GetWaveformAsync(NumberRange.new(0, 153.599), 3)
so it would return a table with a maximum size of 3 – just a hunch, but this might be so spaced out that the audio engine encounters rounding errors – you could try upping the sample-count to something like 100 and see if the situation improves:
script.AudioPlayer:GetWaveformAsync(NumberRange.new(0, 153.599), 100)
If that still doesn’t work, then this line of code might be running before the audio asset is finished loading; so you could add a few extra lines beforehand to wait
local audioPlayer = script.AudioPlayer
-- IsReady is a read-only property, so read it rather than calling it
while not audioPlayer.IsReady do
	task.wait()
end
local DATA = audioPlayer:GetWaveformAsync(NumberRange.new(0, 153.599), 100)
If neither of those reworks fixes the problem, we can look into it.
I’m still confused about the new audio instances. Should we always use them instead of Sounds? Or only when it’s needed/more convenient?
I would personally recommend using the new API if you have the option! It’s more flexible – and in many cases more performant – than Sound/SoundGroup/SoundEffect. Though I totally get that existing projects might have a ton of code built on top of Sound that’s hard to change.
Why?
The Sound, SoundGroup, and SoundEffect APIs made several assumptions early in their implementation that ended up boxing the design into a corner.
Assumptions
- Sound can both play a file and spatialize its audio stream
- All 3d Sounds are assumed to be relative to one 3d Listener position
- A Sound can only belong to one SoundGroup
- SoundGroups can each only belong to one other SoundGroup
- SoundEffects can only be applied sequentially
Results
- emitting the same audio stream from multiple 3d locations required duplicating the file playback as well
- multi-perspective listening, such as split-screen or in-world microphones, was impossible
- smoothly transitioning sounds between different groups of post-processing effects was difficult
- the overall audio mix could only be arranged hierarchically
- effects couldn’t happen at the same time – for example it was possible to apply pitch-shift and then reverb, but it was not possible to layer a reverberated version of a sound with a pitch-shifted version of the same sound
The new API removes these limitations
How
- Uses smaller, more-modular building blocks
  – AudioPlayer & AudioEmitter instead of Sound
- Routes audio with Wires
  – instead of parent/child relationships, like SoundEffect
  – instead of Instance-references, like Sound.SoundGroup
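As a rough sketch of how that routing looks in practice (workspace.Speaker and the asset id below are placeholders for illustration):

local part = workspace.Speaker -- placeholder Part to emit from

-- an AudioPlayer only plays the file; it knows nothing about 3d space
local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id
player.Parent = part

-- an AudioEmitter broadcasts whatever is wired into it from its parent's position
local emitter = Instance.new("AudioEmitter")
emitter.Parent = part

-- a Wire carries the player's stream into the emitter;
-- the same player could be wired to many emitters at once
local wire = Instance.new("Wire")
wire.SourceInstance = player
wire.TargetInstance = emitter
wire.Parent = part

player:Play()

-- to actually hear it, an AudioListener (e.g. under the Camera) also needs to be
-- wired to an AudioDeviceOutput somewhere in the place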
I see. So the general rule now is new instances for 3d sounds, and old Sound instances for “2d” sounds? (like GUI sounds, background music, etc.)
The new API can be used for 2d sounds as well – you just wouldn’t need to use any AudioEmitters or AudioListeners. You can wire an AudioPlayer directly to an AudioDeviceOutput to send it straight to your speakers.
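Something along these lines (the asset id is a placeholder):

local SoundService = game:GetService("SoundService")

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id
player.Parent = SoundService

-- no emitters or listeners: wire the player straight to the device output
local output = Instance.new("AudioDeviceOutput")
output.Parent = SoundService

local wire = Instance.new("Wire")
wire.SourceInstance = player
wire.TargetInstance = output
wire.Parent = output

player:Play()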
Hey @blanka, audio assets are compressed, so under the hood, GetWaveformAsync has to
1. seek to the beginning of the NumberRange
2. decode a chunk of audio from the compressed representation
3. write results into the table that it gives back to you
4. repeat 2 & 3 until reaching the end of the NumberRange
All that is to say – the amount of time it takes to do 1 & 2 can vary depending on how the file was compressed, whether the file is short or long, and a bunch of other pretty random factors – this is the main reason it’s Async rather than instant.
The catalog assets might have got lucky in the compression lotto
Regardless, I think there are a couple tweaks that could make the stuttering better:
In your script, I see you’re analyzing the waveform once per frame in RenderStepped, and requesting a 10-sample waveform for 100ms of audio at a time:
return audioPlayer:GetWaveformAsync(NumberRange.new(timePosition, timePosition + 0.1), 10)
it might take multiple frames for GetWaveformAsync to finish, which could be causing the stutters you’re seeing – to remedy this, you could take more samples and re-use those samples across frames; something like
if previousWindow and timePosition < previousWindow.EndTime then
	-- the last batch still covers this time position, so reuse it
	return previousWindow.Samples
end
local startTime = timePosition
local endTime = timePosition + 10 -- grab a 10-second window in one request
local window = {
	StartTime = startTime,
	EndTime = endTime,
	Samples = audioPlayer:GetWaveformAsync(NumberRange.new(startTime, endTime), 1000),
}
previousWindow = window
return window.Samples
since each GetWaveformAsync might take a while, this tries to make big batch requests instead of many tiny requests
Another thought: you could probably use AudioAnalyzer.RmsLevel instead of GetWaveformAsync here – AudioAnalyzer is not asynchronous, so it’s better suited for realtime analysis while an audio asset is actively playing – GetWaveformAsync is intended more for offline analysis while an asset is not playing.
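A rough sketch of that approach (the script.AudioPlayer path is a placeholder – point it at your existing, already-wired AudioPlayer):

local RunService = game:GetService("RunService")

local audioPlayer = script.AudioPlayer -- placeholder: your existing AudioPlayer

-- wire the playing AudioPlayer into an AudioAnalyzer
local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = audioPlayer

local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer
wire.TargetInstance = analyzer
wire.Parent = analyzer

RunService.RenderStepped:Connect(function()
	-- RmsLevel updates in realtime while the stream is playing; 0 means silence
	local loudness = analyzer.RmsLevel
	-- drive the visualization with `loudness` here
end)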