As a Roblox developer, it is currently impossible to detect or exercise any control over users' voice chat (VC) usage. Providing developers with this ability would open up many opportunities and use-cases that are currently out of reach.
Detection
What would be most beneficial are events to detect when VC input has begun/ended, a method to determine microphone state (receiving input, no input, muted), and a method to determine the volume of a user’s input.
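Purely as an illustration, here is a hypothetical sketch of what that detection surface could look like. None of these members exist today; PlayerVoiceStarted, PlayerVoiceEnded, GetMicrophoneState, and GetInputVolume are invented names.

```lua
-- Hypothetical API sketch: none of these members currently exist.
local Players = game:GetService("Players")
local VoiceChatService = game:GetService("VoiceChatService")

-- Invented events that would fire when a player starts/stops producing VC input.
VoiceChatService.PlayerVoiceStarted:Connect(function(player: Player)
	print(player.Name .. " started speaking")
end)

VoiceChatService.PlayerVoiceEnded:Connect(function(player: Player)
	print(player.Name .. " stopped speaking")
end)

-- Invented methods to poll microphone state and input volume.
for _, player in Players:GetPlayers() do
	local state = VoiceChatService:GetMicrophoneState(player) -- e.g. "Active", "Silent", "Muted"
	local volume = VoiceChatService:GetInputVolume(player) -- e.g. 0 to 1
	print(player.Name, state, volume)
end
```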
Tooling
Additionally, it would be beneficial to give developers some degree of control over VC usage, such as volume limits/adjustments, deafening, permission to speak, and muting/unmuting. Tangentially related, it would be beneficial to include the overhead VC buttons in Enum.CoreGuiType, which would allow developers to toggle that UI via StarterGui:SetCoreGuiEnabled().
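If the overhead VC buttons were added to Enum.CoreGuiType, toggling them could follow the existing CoreGui pattern. Enum.CoreGuiType.VoiceChat below is an invented member used only for illustration:

```lua
-- Hypothetical: assumes a new Enum.CoreGuiType member for the overhead VC buttons.
-- Would run in a LocalScript, following the existing SetCoreGuiEnabled pattern.
local StarterGui = game:GetService("StarterGui")

StarterGui:SetCoreGuiEnabled(Enum.CoreGuiType.VoiceChat, false) -- "VoiceChat" is invented for this example
```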
Important Notes
Muting/unmuting does not control a user's microphone! It only allows users to choose whether or not they want to hear someone's input.
If the overhead UI is entirely disabled via StarterGui:SetCoreGuiEnabled(), users would still be able to mute/unmute other users via the People tab in the esc menu.
Use-Cases
VC Input Event - UI to display active speakers
Microphone State Method - Check if a user is speaking, silent, or muted
Volume Method - Volume-based monster detection (sketched after this list)
Volume Limitations - Automatically reduce volume of loud users
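To make the volume use-case concrete, here is a rough sketch of volume-based monster detection built on the hypothetical GetInputVolume method from the Detection sketch above; the threshold and reaction logic are placeholders.

```lua
-- Hypothetical use-case sketch: a "monster" that reacts to loud VC input.
-- GetInputVolume is the invented method from the Detection sketch; it does not exist.
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")
local VoiceChatService = game:GetService("VoiceChatService")

local LOUDNESS_THRESHOLD = 0.6 -- placeholder value

RunService.Heartbeat:Connect(function()
	for _, player in Players:GetPlayers() do
		local volume = VoiceChatService:GetInputVolume(player) -- hypothetical, 0 to 1
		if volume and volume > LOUDNESS_THRESHOLD then
			-- Game-specific logic: alert the monster to this player's position.
			print(player.Name .. " is being loud; the monster has noticed")
		end
	end
end)
```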
I agree with these! These would be important and useful tools for developers to have.
I’m not sure if I like this. Yes, it’s useful and allows for many features, but having developers control the volume of a user seems weird. Same with force muting and unmuting, as people could be in the middle of a conversation. I’m not necessarily disagreeing, but I do think it would be weird if it ever happens.
And lastly, I agree with this one, as it would allow custom GUI setups for voice chat.
Technically, I do believe a lot of this has been planned for a while; see the announcement about it from 2 years ago. It seems this might actually release at last, as the APIs were just added last week, and they would seem to support almost everything you would want regarding analyzing and adjusting users’ input and output. However, I don’t see any ability to customize the UI, which is disappointing.
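For anyone curious, here is a rough sketch of how those new Audio API instances might be used to read a speaker’s level and mute their input. Since docs are scarce, the exact setup assumed here (an AudioDeviceInput per speaking player parented under the Player, AudioAnalyzer.RmsLevel, Wire.SourceInstance/TargetInstance, AudioDeviceInput.Muted) is based on early documentation and may change.

```lua
-- Rough sketch against the new (largely undocumented) Audio API; instance
-- locations and property names are based on early docs and may change.
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local function watchPlayerVoice(player: Player)
	-- With the default voice wiring, each speaking player is expected to get an
	-- AudioDeviceInput for their microphone (assumed to live under the Player;
	-- in practice you may need to wait for it to be created).
	local deviceInput = player:FindFirstChildOfClass("AudioDeviceInput")
	if not deviceInput then
		return
	end

	-- Route the microphone stream into an AudioAnalyzer to read its level.
	local analyzer = Instance.new("AudioAnalyzer")
	analyzer.Parent = deviceInput

	local wire = Instance.new("Wire")
	wire.SourceInstance = deviceInput
	wire.TargetInstance = analyzer
	wire.Parent = analyzer

	RunService.Heartbeat:Connect(function()
		-- RmsLevel/PeakLevel report the current loudness of the wired stream.
		if analyzer.RmsLevel > 0.5 then
			print(player.Name .. " is speaking loudly")
		end
	end)

	-- Muting the input itself (distinct from a listener choosing not to hear someone):
	-- deviceInput.Muted = true
end

Players.PlayerAdded:Connect(watchPlayerVoice)
```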
@replicatesignal @LonnaHawk This is coming in the form of Audio APIs. Docs are very scarce atm due to it not being released yet, but it’s on the roadmap under “Create rich and lifelike worlds.”