One potential game that could be made is something where players sing and the game matches their voice against a target. I was also working on a DJ booth where players could hook a controller up to their output and dynamically control all the lights and effects based on the audio spectrum.
Ah I see… could there be a way to make :GetSpectrum return frequencies spaced evenly on a logarithmic scale, rather than linearly? Especially at lower frequencies, a lot of precision is lost.
The way I imagine Logic’s distortion working is that it “ignores” frequencies outside the given range. Now that I think of it, though, I can imagine a setup that mirrors this!
I see what you mean here. I was just picturing a way to make sounds “float” in a player’s head without scripting 3D-space movement, though maybe leaving that panning math to the engine is ideal! Might just be me being used to seeing the pan dial on every track in Logic.
could there be a way to make :GetSpectrum return frequencies spaced evenly on a logarithmic scale, rather than linearly? Especially at lower frequencies, a lot of precision is lost.
Yeah I feel you – will check what’s possible
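In the meantime, one workaround is to resample the linear bins yourself. Below is a minimal Luau sketch, assuming :GetSpectrum returns magnitudes spaced evenly from 0 Hz up to the Nyquist frequency (verify the actual bin layout against the documentation before relying on this):

```lua
-- Sketch: resample a linearly spaced spectrum onto log-spaced bins.
-- Assumes `spectrum` covers 0..nyquistHz evenly; this doesn't recover the
-- low-frequency precision that was lost, it only re-spaces the existing bins.
local function toLogSpectrum(spectrum, nyquistHz, numBins, minHz)
	local result = table.create(numBins)
	local ratio = nyquistHz / minHz
	for i = 1, numBins do
		-- Log-spaced center frequency for this output bin
		local freq = minHz * ratio ^ ((i - 1) / (numBins - 1))
		-- Nearest linear input bin (1-based); low frequencies will alias
		-- onto the same few input bins, which is exactly the complaint above.
		local index = math.clamp(math.floor(freq / nyquistHz * #spectrum) + 1, 1, #spectrum)
		result[i] = spectrum[index]
	end
	return result
end

-- Usage with an AudioAnalyzer (hypothetical wiring):
-- local logBins = toLogSpectrum(analyzer:GetSpectrum(), 24000, 64, 20)
```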
I was just picturing a way to make sounds “float” in a player’s head without scripting 3D-space movement
I think if you use an Attachment with a fixed offset from the listener, that can be done!
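One way to sketch that: parent an Attachment with a fixed offset to the part the listener follows and emit from it. The instance names below come from the new audio API, but treat the wiring as a hedged sketch under the assumption that the listener tracks the character’s head, not a verified snippet:

```lua
-- Sketch: make a sound "float" two studs to the listener's right,
-- leaving all panning math to the engine.
local Players = game:GetService("Players")
local head = Players.LocalPlayer.Character:WaitForChild("Head")

local attachment = Instance.new("Attachment")
attachment.Position = Vector3.new(2, 0, 0) -- fixed offset from the head
attachment.Parent = head

local player = Instance.new("AudioPlayer")
player.AssetId = "rbxassetid://0" -- placeholder asset id
player.Parent = attachment

local emitter = Instance.new("AudioEmitter")
emitter.Parent = attachment -- emits from the attachment's world position

-- Wire the player's output into the emitter
local wire = Instance.new("Wire")
wire.SourceInstance = player
wire.TargetInstance = emitter
wire.Parent = attachment

player:Play()
```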
This is actually a really great update and change. Love the new UIs!
Awesome update! Roblox should add more useful updates like this!
Great job – the curve editor, with all its fine-tuning, is an especially welcome addition.
OMG this is one of the greatest updates to studio this year W roblox
So with this, do we ditch the native sound objects now? Are they an outdated way of producing sounds?
We’re not going to remove `Sound`s, so it’s up to you – `Sound`s, `SoundEffect`s, and `SoundGroup`s make heavy use of hierarchical, parent/child relationships to infer how they’re supposed to behave, and this prevents them from accomplishing some use-cases altogether – but if those use-cases don’t come up in your regular development, the existing APIs aren’t going anywhere!
This looks awesome! Can’t wait to see how it’ll be used in games.
Congrats to the team on realizing this massive effort.
Love that your approach is becoming more and more visual for us as creators!
Let’s go!! I’m guessing these aren’t available for publishing yet though?
Still, I’m super excited for TTS and STT whenever you get around to that!
These? Indeed, all of the documented API instances are ready to go and have been used by some already!
I would like some way to better synchronize separate AudioPlayers (using different AssetIds) beyond calling :Play() on each of them at the same time and hoping they start together. This was an issue with the old system too, where I had to set the TimePosition of multiple Sounds to try to keep them synchronized.
You should only need one AudioPlayer; you can then use Wires to connect it to multiple speakers. If they have different AssetIds, though, wouldn’t they be different sounds? I don’t see how you would sync audio that isn’t the same sound.
Hey @Rodj1, we are aware that sequencing, synchronization, and arrangement are really difficult with our current frame-locked APIs – we don’t have any firm plans at the moment, but this problem is not unique or exclusive to audio either; we know that this needs to be solved.
Proposed Solution: AudioSynchronizer Instance

I’m considering a potential solution: introducing a new audio instance called AudioSynchronizer. This instance wouldn’t accept any input but could connect output wires to multiple AudioPlayer instances.

Properties

- IsReady: Indicates whether one or more connected AudioPlayers are ready.
- IsPlaying: Indicates whether one or more connected AudioPlayers are playing.
- LoopRegion: (whatever implementation works best for Roblox)
- Looping: (whatever implementation works best for Roblox)
- PlaybackRegion: (whatever implementation works best for Roblox)
- PlaybackSpeed: (whatever implementation works best for Roblox)
- TimeLength: Possibly two values, representing the shortest and longest connected lengths.
- TimePosition: Attempts to set every connected AudioPlayer to this position; if the position exceeds a player’s length, that player is set to its end.
- Volume: (whatever implementation works best for Roblox)

Methods

- GetConnectedAudioPlayers: Returns a list of connected AudioPlayer instances.
- Play: Starts playback on all connected AudioPlayer instances.
- Stop: Stops playback on all connected AudioPlayer instances.

Events

- Ended (audioPlayer: AudioPlayer): Fired when playback ends on a linked AudioPlayer.
- Looped (audioPlayer: AudioPlayer): Fired when playback loops on a linked AudioPlayer.
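To illustrate the proposal, here is a hypothetical usage sketch. AudioSynchronizer does not exist in the engine – it is the instance proposed above – and the wiring simply mirrors how existing audio instances connect:

```lua
-- Hypothetical: AudioSynchronizer is the proposed instance, not a real API.
local sync = Instance.new("AudioSynchronizer")
sync.Parent = workspace

for _, assetId in {"rbxassetid://0", "rbxassetid://0"} do -- placeholder ids
	local player = Instance.new("AudioPlayer")
	player.AssetId = assetId
	player.Parent = sync

	-- Output wire from the synchronizer into each player
	local wire = Instance.new("Wire")
	wire.SourceInstance = sync
	wire.TargetInstance = player
	wire.Parent = sync
end

sync.Ended:Connect(function(audioPlayer)
	print(audioPlayer.AssetId, "finished")
end)

sync:Play() -- all connected players would start on the same audio tick
```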
TimePosition should be a method, but apart from that, solid idea! Is there a benefit to this over just looping over each AudioPlayer?
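For comparison, here is a sketch of the loop-over-players baseline that question refers to. All the calls land in the same frame, but nothing guarantees the streams start on the same audio tick – that gap is what the proposal targets. The drift threshold is an arbitrary choice:

```lua
-- Baseline: manually fan out calls to each AudioPlayer.
local function playAll(players)
	for _, p in players do
		p.TimePosition = 0
	end
	for _, p in players do
		p:Play() -- same frame, but not necessarily the same audio tick
	end
end

-- Periodically nudge followers back toward a leader if they drift apart.
local DRIFT_TOLERANCE = 0.05 -- seconds; arbitrary threshold

local function resync(leader, followers)
	for _, p in followers do
		if math.abs(p.TimePosition - leader.TimePosition) > DRIFT_TOLERANCE then
			p.TimePosition = leader.TimePosition
		end
	end
end
```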
Thank you for the suggestion @SillyMeTimbers – I can think of use cases for synchronizing updates to `AudioPitchShifter.Pitch` or `AudioEcho.DelayTime` at particular points in time; really any property change or function call that can be observed faster than the framerate might want to be synchronized, which makes this a pretty hairy problem.
We want to be a little careful that the solution doesn’t end up being too specific to play/stop – e.g. I mentioned properties of audio effects, but physics and video are some other engine subsystems that happen faster than the framerate and might benefit from tighter timing guarantees.
My use case is with attraction ride scenes. There would be speakers inside the car and inside the scene. The car audio would primarily be the score accompanying the scene, while the scene audio would primarily be the SFX.