The low-range is implicitly defined to go from 0 to MidRange.Min, and the high-range is implicitly defined to go from MidRange.Max to 24000 Hz. Since these ranges don’t overlap, only the middle band needs to be “movable” in order to have full flexibility.
You can also wire a couple of AudioEqualizers together with different ranges & gains to create more complex filters!
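For example, here’s a minimal sketch of chaining two AudioEqualizers with Wires (the connect/buildEqChain helpers and the source/output parameters are hypothetical placeholders, e.g. an AudioDeviceInput and an AudioFader):

local function connect(from: Instance, to: Instance)
    local wire = Instance.new("Wire")
    wire.SourceInstance = from
    wire.TargetInstance = to
    wire.Parent = to
end

local function buildEqChain(source: Instance, output: Instance)
    local eq1 = Instance.new("AudioEqualizer")
    eq1.MidRange = NumberRange.new(200, 2000) -- middle band covers 200-2000 Hz
    eq1.MidGain = -12 -- cut those mids by 12 dB
    eq1.Parent = source

    local eq2 = Instance.new("AudioEqualizer")
    eq2.MidRange = NumberRange.new(4000, 8000)
    eq2.MidGain = 6 -- boost 4-8 kHz by 6 dB
    eq2.Parent = source

    connect(source, eq1)
    connect(eq1, eq2)
    connect(eq2, output)
end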
For this setup, it’s easier if VoiceChatService.EnableDefaultVoice is set to false and you create your own audio instances with a server script. In my place, there’s a server script that does this, and it also manages swapping between filters. The filters are just ModuleScripts that return a constructor function, which in turn returns a cleanup function. For example, the “Default” filter is this:
return function(from: Instance, to: Instance): () -> ()
    -- Connect the two instances with a Wire
    local wire = Instance.new("Wire")
    wire.SourceInstance = from
    wire.TargetInstance = to
    wire.Parent = from

    -- Return a cleanup function that disconnects them
    return function()
        wire:Destroy()
    end
end
For that function, from could be the player’s AudioDeviceInput and to could be an AudioFader that feeds the player’s character’s AudioEmitter. You wouldn’t want to pass the character’s AudioEmitter itself as to, because the player can reset, which would destroy the emitter and break the connection.
When a player first joins, the server calls the constructor function for their current filter and keeps track of the cleanup function it returns. When the filter is changed, the current cleanup function is called, the new filter’s constructor function is called, and its cleanup function is kept. Rinse and repeat; that’s all there is to it.
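A minimal sketch of what that server script might look like (the Filters folder, the setFilter name, and where from/to come from are assumptions for illustration, not the exact implementation):

local Players = game:GetService("Players")
local Filters = script.Filters -- hypothetical folder of filter ModuleScripts

-- The current cleanup function for each player's active filter
local cleanups: {[Player]: () -> ()} = {}

local function setFilter(player: Player, filterName: string, from: Instance, to: Instance)
    -- Tear down the previous filter, if any
    local cleanup = cleanups[player]
    if cleanup then
        cleanup()
    end
    -- Construct the new filter and remember its cleanup function
    local construct = require(Filters[filterName])
    cleanups[player] = construct(from, to)
end

Players.PlayerRemoving:Connect(function(player)
    if cleanups[player] then
        cleanups[player]()
        cleanups[player] = nil
    end
end)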
This is awesome! But will we ever get events for the AudioPlayer, especially one that lets us know when the AudioPlayer has finished playing a sound?
Not sure precisely how @YasuYoshida implemented it, but you could have a LocalScript delete or re-add the Wire coming from your own AudioDeviceInput, so that you don’t always hear yourself. We do something like this out of the box when VoiceChatService.EnableDefaultVoice is true.
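Roughly, something like this sketch in a LocalScript (the SelfMonitorWire name and the deviceInput/output parameters are hypothetical placeholders for however your place sets these up):

-- Toggle whether you hear your own AudioDeviceInput by adding/removing a Wire
local function setSelfMonitoring(deviceInput: AudioDeviceInput, output: Instance, enabled: boolean)
    local existing = deviceInput:FindFirstChild("SelfMonitorWire")
    if enabled and not existing then
        local wire = Instance.new("Wire")
        wire.Name = "SelfMonitorWire"
        wire.SourceInstance = deviceInput
        wire.TargetInstance = output
        wire.Parent = deviceInput
    elseif not enabled and existing then
        existing:Destroy()
    end
end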
Thank you, I’ll look into it. Another question: how do I enable the Active property of the AudioDeviceInput? When VoiceChatService.EnableDefaultVoice is false and I construct the AudioDeviceInput myself, I have to manually enable the Active property via the Properties panel in Studio, since scripts don’t have access to it.
Hello, I was trying out voice chat in my experience and I’ve been getting game crashes. I don’t know why, because the game simply closes and nothing is registered in the server console. Is there any error reported about that?
Great update! (All of the audio is coming from the in-game mic.)
This can honestly make those club/party games so much better, assuming all the players inside have VC, since now we can have custom music interact with stuff.
The example place was pretty hard to understand scripting-wise, so I had to use trial and error until I figured it out.
I’ve encountered the same issue with the walkie talkies / hand-held radios that other users have described:
Here’s a video I recorded using 2 separate accounts on different devices (a computer and a phone) showcasing this unintended behavior:
Note: During these tests, one account always had its microphone muted in-game (to ensure I didn’t talk into both microphones simultaneously, since both devices were in close proximity to me), but that shouldn’t prevent the account from hearing audio transmitted through the walkie talkies by the other account. To verify this, I later went into an empty game, turned on the EnableDefaultVoice property of VoiceChatService, and had one account mute its microphone in-game; it was still able to hear audio transmitted from the other account (which was being routed through the AudioEmitter in the other account’s Character model).
In case the unintended behavior demonstrated in the video happens to apply to other new Instances that have been introduced with this revamped Audio API, I have a related question about the AudioDeviceOutput.
Use Case: I have been trying to create a calling feature where Voice Chat audio is routed directly from an AudioDeviceInput to an AudioDeviceOutput without needing to use proximity-based instances such as AudioListeners and AudioEmitters.
Currently, the AudioDeviceOutput can be used to hear your own voice with the following setup:
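The setup itself isn’t reproduced here, but presumably it looks something like this sketch: your own AudioDeviceInput wired directly into an AudioDeviceOutput whose Player property is set to yourself (the selfMonitor name is hypothetical):

local function selfMonitor(player: Player)
    local input = Instance.new("AudioDeviceInput")
    input.Player = player -- capture this player's microphone
    input.Parent = player

    local output = Instance.new("AudioDeviceOutput")
    output.Player = player -- only this player hears the output
    output.Parent = player

    local wire = Instance.new("Wire")
    wire.SourceInstance = input
    wire.TargetInstance = output
    wire.Parent = output
end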
However, I then tried a modified version of this setup with 2 separate accounts in-game (not in Roblox Studio), where the Player property of the AudioDeviceOutput was instead set to the other player so that the two accounts should be able to hear one another regardless of the distance between their Character models. Neither account was able to hear the other. If this is intended behavior, clarification would be appreciated.
I brought this example up in case it’s a similar issue to what was demonstrated in the video with the handheld radios / walkie talkies.
I love this new API; however, I’m experiencing issues with the AudioAnalyzer. When I call GetSpectrum(), it returns an empty array for every audio stream I give it. It also doesn’t work in the tutorial place file:
The peak and RMS levels seem to work completely fine, though.
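For reference, a minimal sketch of the kind of setup being described, wiring an AudioPlayer into an AudioAnalyzer and polling both the spectrum and the level properties (the asset id is a placeholder):

local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.Asset = "rbxassetid://0" -- placeholder asset id
audioPlayer.Parent = workspace

local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = workspace

local wire = Instance.new("Wire")
wire.SourceInstance = audioPlayer
wire.TargetInstance = analyzer
wire.Parent = analyzer

audioPlayer:Play()
while true do
    -- GetSpectrum() should return frequency bins; compare with the level properties
    local spectrum = analyzer:GetSpectrum()
    print(#spectrum, analyzer.PeakLevel, analyzer.RmsLevel)
    task.wait(0.5)
end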
Does the AudioAnalyzer work on the server? I’m trying to make an NPC that can hear players using it, but the PeakLevel property doesn’t seem to change.
Is there a way to easily play the same sound multiple times so the playbacks overlap? For example, when firing a weapon you’d want to play the same sound over and over again with each shot overlapping the last. This is currently very tedious to do with the new API.
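One workaround, as a sketch: clone a template AudioPlayer per shot and give each clone its own Wire to a shared output (the playOneShot helper is hypothetical, and the TimeLength-based cleanup assumes the asset has already loaded):

local function playOneShot(template: AudioPlayer, output: Instance)
    local shot = template:Clone()
    shot.Parent = output

    -- Each clone needs its own Wire to the shared output (e.g. an AudioFader)
    local wire = Instance.new("Wire")
    wire.SourceInstance = shot
    wire.TargetInstance = output
    wire.Parent = shot

    shot:Play()

    -- Clean up once playback should be finished; TimeLength may read 0
    -- until the asset has loaded, so a real version should check IsReady first
    task.delay(shot.TimeLength + 0.1, function()
        shot:Destroy()
    end)
end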
Oops; I think I see the issue – in the tutorial place, there’s a RadioChat script under Players that spawns an AudioDeviceInput for each player, but it has Client RunContext, so it only allows you to hear the one spawned for yourself.
I believe just switching that to Server RunContext might work – though I would need to check if there are any client/local dependencies in the script.