Awesome! Glad that resolved it – sorry for the confusion – I’ll update the placefile to make more of these scripts run server-side.
If AudioDeviceInput.Player is assigned to someone else, it needs to be replicated – otherwise there’s no way for the other client to know about the voice connection
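For example, a minimal server-side sketch (the parenting choice here is just mine):

-- Server Script: assigning Player on the server so the voice connection replicates
local Players = game:GetService("Players")

Players.PlayerAdded:Connect(function(player)
    local deviceInput = Instance.new("AudioDeviceInput")
    deviceInput.Player = player -- assigned server-side so every other client learns about it
    deviceInput.Parent = player
end)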
No problem! I appreciate that you’re helping clear things up to make it easier for everyone to test out the new Audio API
On that note, when you have the time and if you are willing to answer more questions, would you be able to clarify how AudioDeviceInput.AccessType works? There currently isn’t any documentation available and I’m a bit puzzled by it at the moment after making the following observations and conducting several tests:
Edit (March 4th, 2024):
Hooray! There’s now documentation for the properties and methods of the AudioDeviceInput, which clears up some of the questions I had originally outlined below.
However, it doesn’t answer all of my questions, because the most confusing issue I had observed still persists and is not clarified by the provided documentation:
Issue: If a client locally updates the AccessType from the default value of “Deny” to “Allow”, then swaps it back to “Deny”, the client will no longer be able to hear audio being transmitted from the AudioDeviceInput. This behavior occurs without the client or server ever specifying an AccessList using :SetUserIdAccessList() and without making any other changes.
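For clarity, the toggle in question is nothing more than this (a minimal repro sketch of what I’m doing locally):

-- LocalScript: round-tripping AccessType with no access list ever set
local Players = game:GetService("Players")
local player = Players.LocalPlayer
local deviceInput = player:WaitForChild("AudioDeviceInput")

deviceInput.AccessType = Enum.AccessModifierType.Allow -- locally swap away from the default
task.wait(1)
deviceInput.AccessType = Enum.AccessModifierType.Deny -- swap back to the default value
-- After this round trip, this client can no longer hear audio from the AudioDeviceInput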
It is mentioned in the documentation for AccessType that: “this property should only be assigned from the server in order to replicate properly.” However, even if the server forcefully sets the AccessType of the AudioDeviceInput to “Deny”, the client still isn’t able to hear audio being transmitted from the specific AudioDeviceInput that they had previously locally swapped back and forth between “Deny” and “Allow”.
For more detail regarding this issue, please refer to “Scenario #2” of the “Locally updating AccessType” section, as well as the entirety of the “Server-side updates to AccessType” section of this post. Although I wrote both of those before there was documentation for the properties / methods of the AudioDeviceInput, the described issues are still relevant, because based on what is currently documented about the AccessType property, AccessType should only have an effect when a list of User IDs is specified.
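Based on the current documentation, my understanding of the intended usage is something like the following (a sketch; whether the list governs hearing or speaking is my own reading):

-- Server Script: AccessType paired with an access list, per the documentation
-- ("player1" and "player2" are assumed to already exist in this sketch)
local deviceInput = player2:WaitForChild("AudioDeviceInput")
deviceInput:SetUserIdAccessList({ player1.UserId })
deviceInput.AccessType = Enum.AccessModifierType.Allow -- treat the list as an allow-list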
General Observations
Below are my original “General Observations” from March 2nd (prior to the Roblox Creator Hub documentation being updated).
When the AccessType property is set to “Allow” (0), no sound is registered from the input device (an AudioAnalyzer shows a consistent RMSLevel of 0).
When the AccessType property is set to “Deny” (1), sound is able to be transmitted.
This was quite confusing to me at first because I had expected the “Deny” enum to deny / prevent audio from coming through and for the “Allow” enum to allow audio to be transmitted (but it’s the opposite).
Locally updating AccessType
Scenario #1: Imagine there is 1 player in the server (Player1). This player has an AudioDeviceInput connected to an AudioDeviceOutput via a Wire. The Player property of the AudioDeviceOutput is set to themselves (Player1) so they are able to hear themselves in-game. However, let’s say the server updates Player1’s AccessType to “Allow”, which prevents Player1 from transmitting audio. After that, Player1 presses a TextButton that locally updates the AccessType from “Allow” to “Deny”, allowing them to transmit audio and hear their own voice in-game again.
I’m wondering if that is intentional, as this behavior differs from the Muted property, which the client doesn’t appear to be able to override when the server sets Muted to true.
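In other words, a sketch of the difference as I observed it:

-- LocalScript: comparing local overrides of server-set values
local player = game:GetService("Players").LocalPlayer
local deviceInput = player:WaitForChild("AudioDeviceInput")

deviceInput.AccessType = Enum.AccessModifierType.Deny -- overrides a server-set "Allow"; audio transmits again
deviceInput.Muted = false -- by contrast, this does NOT appear to override a server-set Muted = true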
Scenario #2: Imagine there are 2 players in the server (Player1 and Player2). Player1 is able to hear when Player2 talks. Next, Player2 presses a TextButton that locally updates their own AudioDeviceInput.AccessType from the default value of “Deny” to “Allow”. This makes it so that Player1 cannot hear Player2. However, if Player2 presses the TextButton again to update their own AudioDeviceInput.AccessType back to “Deny”, they still cannot be heard by Player1.
I tested Scenario #2 with the following setups and observed the same behavior:
With VoiceChatService.EnableDefaultVoice enabled. None of the AudioEmitters/Listeners were modified during this test; only the AccessType property of the AudioDeviceInput was changed.
With VoiceChatService.EnableDefaultVoice disabled. For each player, an AudioDeviceInput, a Wire, and an AudioDeviceOutput were all created on the server within the player object and hooked up to one another (Source is the DeviceInput and Target is the DeviceOutput). The Player property of the AudioDeviceOutput was set to the other player in the game. This means that the AudioDeviceOutput within the Player2 object had the Player property set to Player1, and vice versa (a sketch of this setup follows the list).
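Here’s roughly what that second setup looked like (a sketch, not my exact code):

-- Server Script: cross-wired voice setup with EnableDefaultVoice disabled
local Players = game:GetService("Players")

local function setUpVoice(speaker, hearer)
    local deviceInput = Instance.new("AudioDeviceInput")
    deviceInput.Player = speaker

    local deviceOutput = Instance.new("AudioDeviceOutput")
    deviceOutput.Player = hearer -- the *other* player hears this device

    local wire = Instance.new("Wire")
    wire.SourceInstance = deviceInput
    wire.TargetInstance = deviceOutput

    deviceInput.Parent = speaker
    deviceOutput.Parent = speaker
    wire.Parent = speaker
end

-- Once both players are present (assumed for this sketch):
local p = Players:GetPlayers()
setUpVoice(p[1], p[2])
setUpVoice(p[2], p[1])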
Server-side updates to AccessType
Assuming an AudioDeviceInput is created on the server side, the server can update its AccessType property, which is replicated to every player in the server.
Using Scenario #2 from the previous section of this post, here’s another example. Player1 and Player2 have just joined the game and are able to hear one another. The server updates Player2’s AudioDeviceInput.AccessType to Allow. Player1 is still able to hear Player2.
A few seconds later, the server sets the AccessType back to “Deny”. Player1 is still able to hear Player2. After that, Player1 presses a TextButton that locally updates the AccessType of Player2’s AudioDeviceInput to “Allow”. Now, Player1 cannot hear Player2. When Player1 presses the TextButton once more to locally update the AccessType of Player2’s AudioDeviceInput back to “Deny”, Player1 is still unable to hear Player2.
Even if the server forcefully sets Player2’s AccessType to “Deny”, nothing changes, meaning that Player1 cannot hear Player2 until Player2 rejoins the session or until the existing AudioDeviceInput of Player2 is replaced with a new one.
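For completeness, the replacement workaround I mentioned looks roughly like this (server-side sketch):

-- Server Script: replacing a "stuck" AudioDeviceInput with a fresh one
local function replaceDeviceInput(player)
    local old = player:FindFirstChild("AudioDeviceInput")
    if old then
        old:Destroy()
    end
    local fresh = Instance.new("AudioDeviceInput")
    fresh.Player = player
    fresh.Parent = player
end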
This heavily confused me for a variety of reasons and I still can’t wrap my head around what’s causing this behavior.
Please let me know if there’s anything I can do or provide to make it easier to understand the information and answer the questions I’ve outlined in this post. Thanks!
It seems like voice chat is functional within Studio, but when testing with multiple players, the audio cannot be heard by others, which makes it challenging to determine whether everything is working correctly. To test thoroughly, one would need two actual accounts with verified IDs to test outside of Studio, which is quite difficult to arrange. Will there be a way to hear other players within Roblox Studio for testing purposes?
If you’re running a “Local Server” playtest with 2 or more clients, the clients won’t be able to hear one another, unfortunately. It doesn’t matter whether you enable the “Active” property of each client’s AudioDeviceInput locally or from the server, whether the IsReady property replicates as “true” between the clients, or even whether each client is able to transmit audio simultaneously; none of it is picked up by the other clients in the local playtest.
While it’s still possible to use Voice Chat in 1-player solo playtests, it’s very unfortunate that it doesn’t work properly in multi-client local playtests, since that adds extra friction for solo developers who want to test Voice Chat features that would otherwise require separate Voice Chat-enabled accounts.
Thus far, for all of the multiplayer Voice Chat testing I’ve done with the new Audio API, I’ve had to publish the game, leave Roblox Studio, then join a live server of that game from my computer on one account and from my phone on another account.
I had wondered if it would be possible to seamlessly host local playtests with multiple clients for testing out “multiplayer Voice Chat situations” ever since the “Chat with Voice Developer Beta” was announced in 2021:
Although my question back then focused on whether developers would be able to locally test Voice Chat functionality regardless of ID verification, it’s really unfortunate that it’s currently impossible to simulate multiple clients communicating with one another over Voice Chat in Roblox Studio, even when the account used for the multi-client local playtest is ID verified and has Voice Chat enabled.
Right, so after playing around with the new audio APIs, I’ve come to the conclusion that the new systems are currently very hard to work with overall. It feels as if the new audio API was designed around the idea that you’ll set up an audio source once and never touch it again until deleting it when it’s no longer needed. Swapping audio effects in and out is unreasonably inefficient, as we’re pretty much forced to constantly cycle through every wire and whatever other instances are needed just to make the entire process work. One idea I have to potentially fix this would be some sort of “Wire Hub” instance: an instance that lets you hook up multiple input wires and a single output wire, so you don’t need to constantly rechain many other Wire instances.
Overall I think the new system is great; it just feels as if it’s half complete (or at least half usable).
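In the meantime, something like an AudioFader used as a passthrough might approximate a hub, assuming multiple wires feeding the same instance get mixed together (untested on my part):

-- A possible "Wire Hub" stand-in: an AudioFader as a mix/passthrough point
local hub = Instance.new("AudioFader")
hub.Parent = workspace

-- Many inputs -> one hub ("sourceA" / "sourceB" are placeholders for your own audio instances)
for _, source in { sourceA, sourceB } do
    local wire = Instance.new("Wire")
    wire.SourceInstance = source
    wire.TargetInstance = hub
    wire.Parent = hub
end

-- One wire out of the hub ("emitter" is a placeholder for an AudioEmitter or AudioDeviceOutput)
local out = Instance.new("Wire")
out.SourceInstance = hub
out.TargetInstance = emitter
out.Parent = hub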
I’ve come across a bug where, over time, players’ voices will be heard globally.
It’s a bit of a silly bug and I’m not complaining about it (though I know it can become annoying). I’m not sure if this has been reported yet, and the voices can still be heard when the Roblox volume slider is at 0 (as in the video).
When we enabled this in our game, we had users report they couldn’t access it because it kept crashing. They said there was no error; it just closed the player window. I’m not sure if it’s like this for all the users who were crashing, but some reported that it was not crashing in the Windows Store app but was in the normal PC app.
Has anyone got AudioAnalyzer to work when connected to an AudioListener?
It always seems to return 0s and never pick up the audio, even though everything is properly wired up and audio is being picked up by the listener. I’ve also tried hooking the listener to a fader and the analyzer to that fader, but still no results.
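For reference, here’s a sketch of the wiring I’m describing (instance locations are just how I happened to set it up):

-- LocalScript: AudioListener wired into an AudioAnalyzer
local RunService = game:GetService("RunService")

local listener = Instance.new("AudioListener")
listener.Parent = workspace.CurrentCamera -- assumption: listening from the camera

local analyzer = Instance.new("AudioAnalyzer")
analyzer.Parent = listener

local wire = Instance.new("Wire")
wire.SourceInstance = listener
wire.TargetInstance = analyzer
wire.Parent = analyzer

RunService.Heartbeat:Connect(function()
    print(analyzer.RMSLevel) -- always prints 0 for me, even while the listener is receiving audio
end)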
Turning off the EnableDefaultVoice property of VoiceChatService is the way to disable proximity-based Voice Chat by default, since that property is what automatically creates the AudioEmitter and AudioListener within each player’s Character model (two of the primary instances that make it possible for players to emit and hear proximity-based Voice Chat audio when using the new Audio API).
I had initially thought there was going to be a built-in feature for it too, but considering that several other features mentioned alongside it (e.g. voice modification and walkie-talkies) require scripts that interact directly with the new Audio API, it seems like push-to-talk might be a feature we have to code ourselves.
Fortunately, it appears that push-to-talk is fairly easy to implement. Here’s a pretty barebones version I created that enforces push-to-talk by locally enabling / disabling the Muted property of the player’s AudioDeviceInput, depending on whether the player is holding down the specified keybind.
(Note that more would need to be added to make this mobile compatible, along with other quality-of-life features such as letting players specify their own keybind; a possible starting point for mobile support follows the example below.)
Push-to-talk Example
-- LocalScript in StarterPlayerScripts
local UserInputService = game:GetService("UserInputService")
local Players = game:GetService("Players")

local player = Players.LocalPlayer
local audioDeviceInput = player:WaitForChild("AudioDeviceInput") -- If you manually created an AudioDeviceInput with a different name, make sure to update this to the new name
audioDeviceInput.Muted = true -- Mute the AudioDeviceInput immediately, since it starts out unmuted by default

local pushToTalkKeybind = Enum.KeyCode.C -- The key that players have to hold down while talking in order to be heard by other players

UserInputService.InputBegan:Connect(function(inputObject, gameProcessedEvent) -- Fires when the player interacts with the mouse, keyboard, etc.
    if inputObject.UserInputType == Enum.UserInputType.Keyboard and gameProcessedEvent == false then -- Checks that the input came from a keyboard and that the player wasn't interacting with UI at the time (such as typing in a TextBox)
        local keycode = inputObject.KeyCode -- The KeyCode of the InputObject, used to check which key on the keyboard the player pressed
        if keycode == pushToTalkKeybind then -- If the pressed key matches the "pushToTalkKeybind" defined at the top of the LocalScript...
            audioDeviceInput.Muted = false -- ...unmute the AudioDeviceInput while the player holds down the push-to-talk key
        end
    end
end)

UserInputService.InputEnded:Connect(function(inputObject, gameProcessedEvent) -- Fires when the player stops interacting with the mouse, keyboard, etc.
    if inputObject.UserInputType == Enum.UserInputType.Keyboard and gameProcessedEvent == false then
        local keycode = inputObject.KeyCode
        if keycode == pushToTalkKeybind then -- If the player let go of the push-to-talk key...
            audioDeviceInput.Muted = true -- ...mute the AudioDeviceInput again, since the player is no longer holding it down
        end
    end
end)
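As a starting point for the mobile compatibility mentioned in the note above, ContextActionService could likely replace the two UserInputService connections, since BindAction can create an on-screen touch button. A hedged sketch, reusing audioDeviceInput and pushToTalkKeybind from the example:

-- LocalScript: an alternative push-to-talk binding that also creates a touch button on mobile
local ContextActionService = game:GetService("ContextActionService")

local function handlePushToTalk(actionName, inputState, inputObject)
    -- Unmute only while the key / touch button is held down
    audioDeviceInput.Muted = (inputState ~= Enum.UserInputState.Begin)
    return Enum.ContextActionResult.Pass
end

-- The third argument ("true") requests an on-screen touch button on mobile devices
ContextActionService:BindAction("PushToTalk", handlePushToTalk, true, pushToTalkKeybind)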
Hey purpledanx; we made the GetConnectedWires method accessible to plugins for starters – did you have a use-case in mind for traversing the audio graph at runtime?
Will there ever be something like Nodes, where you can connect wires to a Node and then connect the Node to all the speakers or effects? I find it very awkward to connect wires to an Emitter, use a Listener, and then wire every speaker around the map to that listener.
For example, let’s say you’re making an intercom system and you want to play a cool sound before relaying the audio. You could connect an AudioPlayer to the Node, play the sound, and then connect the player’s microphone to the Node.
I have a dynamic reverb system that gets every audio emitter and traces back to the audio modifiers to change the values of the fader, reverb, gain, etc. So the only way I could trace back would be to find the connected wires.
Thanks for updating it and letting me know! The documentation provided there mostly clears up the questions I had.
I had actually been updating my original post for the past half hour when I noticed the documentation was updated, but realized it doesn’t appear to answer all the questions I had.
I’ll quote the sections of my original post that describe behavior with the AccessType property that appear unintentional, given what is documented about it at the moment:
Ahh I see – for the time being you might be able to use CollectionService/tags to speed up lookup, but that’s not as general; I can see how GetConnectedWires makes this nicer.
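Something like this, for instance (a sketch; the tag name is made up):

-- Tag each relevant wire as it's created:
local CollectionService = game:GetService("CollectionService")
CollectionService:AddTag(wire, "ReverbWire") -- "wire" being a Wire in your modifier chain

-- Then, instead of traversing the whole graph at runtime:
for _, taggedWire in CollectionService:GetTagged("ReverbWire") do
    local modifier = taggedWire.SourceInstance -- e.g. the AudioReverb / AudioFader feeding this wire
    -- ...adjust the modifier's properties here
end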
In terms of moderation, how will users go about reporting people who, say, use a broadcasting system where their username isn’t shown but they are saying inappropriate things?