@IdontPlayz343 in addition to what @DaDude_89 mentioned, I’d say the main scenarios where the new API is required involve branching signal flow
In the `Sound`/`SoundEffect`/`SoundGroup` API,

- you can assign `Sound.SoundGroup` to one `SoundGroup` – picking a different one re-routes the sound altogether
- you can parent a `SoundGroup` to one other `SoundGroup` – picking a different one re-routes the group
- `SoundEffect`s can be parented to `Sound`s or `SoundGroup`s; if there are multiple `SoundEffect` children, their `SoundEffect.Priority` property determines the order they are applied in sequence (one after another)
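To make the old model concrete, here's a minimal Luau sketch of that routing (the asset id is a placeholder; the effect classes are just two examples from the `SoundEffect` family):

```lua
local SoundService = game:GetService("SoundService")

-- One SoundGroup acts as a mixing bus; a Sound can route to only one group
local musicGroup = Instance.new("SoundGroup")
musicGroup.Name = "Music"
musicGroup.Parent = SoundService

local sound = Instance.new("Sound")
sound.SoundId = "rbxassetid://0" -- placeholder asset id
sound.SoundGroup = musicGroup -- re-assigning this property re-routes the sound
sound.Parent = workspace

-- Effects parented under the group apply strictly in Priority order
local reverb = Instance.new("ReverbSoundEffect")
reverb.Priority = 1
reverb.Parent = musicGroup

local eq = Instance.new("EqualizerSoundEffect")
eq.Priority = 2 -- applied after the reverb, never in parallel with it
eq.Parent = musicGroup
```

Note the limitation: because ordering comes from a single `Priority` number, the effect chain is always a straight line.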
In the new API, all connections use `Wire`s instead of parent/child relationships or reference properties – this supports both many-to-one (like `SoundGroup`s) and one-to-many connections – so it also means that effects don't necessarily need to be applied in sequence; you could set up something like
```
         - AudioReverb -
        /               \
AudioPlayer             AudioFader
        \               /
         - AudioFilter -
```

(sorry about the ASCII art!) to apply effects in parallel to one another
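The diamond above could be wired up like this – a sketch assuming the `Wire.SourceInstance`/`Wire.TargetInstance` properties and an `AudioDeviceOutput` as the final sink; the asset id is a placeholder:

```lua
-- Small helper: connect one audio node's output to another's input
local function connect(source, target)
	local w = Instance.new("Wire")
	w.SourceInstance = source
	w.TargetInstance = target
	w.Parent = target
	return w
end

local player = Instance.new("AudioPlayer")
player.Asset = "rbxassetid://0" -- placeholder asset id
player.Parent = workspace

local reverb = Instance.new("AudioReverb")
local filter = Instance.new("AudioFilter")
local fader = Instance.new("AudioFader")
reverb.Parent = workspace
filter.Parent = workspace
fader.Parent = workspace

-- One-to-many: the player feeds both effects in parallel...
connect(player, reverb)
connect(player, filter)
-- ...and many-to-one: both effect outputs mix back into the fader
connect(reverb, fader)
connect(filter, fader)

-- Finally, send the mixed result to the listener's device
local output = Instance.new("AudioDeviceOutput")
output.Parent = workspace
connect(fader, output)
```

Neither branch here has a priority number – the graph topology itself defines the signal flow.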
Additionally, the new API supports microphone input via `AudioDeviceInput`, so you can use it to control voice chat in all the same ways as audio files!
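For example, a voice stream can be routed through an effect exactly like a file-backed source – a sketch assuming `AudioDeviceInput.Player` and an `AudioEcho` effect (any other audio effect node would slot in the same way):

```lua
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
	-- Capture this player's microphone as an audio source
	local input = Instance.new("AudioDeviceInput")
	input.Player = player
	input.Parent = player

	-- Treat the voice stream like any other source: wire it into an effect
	local echo = Instance.new("AudioEcho")
	echo.Parent = player

	local wire = Instance.new("Wire")
	wire.SourceInstance = input
	wire.TargetInstance = echo
	wire.Parent = echo
	-- From here, further Wires could route the echoed voice to an
	-- AudioDeviceOutput or mix it with other sources
end)
```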
I know this isn’t too relevant, but if anyone happens to know the soundtrack in the demo videos please let me know.
@iihelloboy this was made by @YPT300