LOL, what was that photo supposed to mean
HUGE update. super hyped for this
That’s too crazy booooooy
I think it’s because it’s in beta, which means you can’t use it on your phone yet?
Finally a good update! Is it also available for mobile users?
There shouldn’t be any issues with mobile users. If you encounter anything, please let us know!
DM me, would love to hear more!
Greetings Creators! There are a few updates since we posted last week:
- `AudioDeviceInput.AccessType` now replicates correctly.
- We are aware of a crash that may occur when voice-enabled experiences approach the max server capacity limit of 50 players. Please ensure that you test your experiences’ servers with many players before going live.
- We will be temporarily disabling `AudioAnalyzer:GetSpectrum` while we investigate an issue. We’ll re-enable this after reviewing and optimizing its capabilities & performance.
We will update this post when we resolve these issues. Please keep your comments coming — this is a beta, and your feedback helps us address what you find faster!
Experiences above 50 players stopped enabling the microphone for me anyway. It didn’t happen right away, though; I was able to set the capacity to 51 and keep the microphone for a while until rejoining, idk.
But but but
why does the `AudioEqualizer` only have a `MidRange`, but no `HighRange` and `LowRange`?
The low-range is implicitly defined to go from 0 to `MidRange.Min`, and the high-range is implicitly defined to go from `MidRange.Max` to 24000 Hz. Since these ranges don’t overlap, only the middle band needs to be “movable” in order to have full flexibility.
You can also wire a couple of `AudioEqualizer`s together with different ranges & gains to create more complex filters!
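Here is a minimal sketch of that chaining idea, assuming the standard `AudioEqualizer`, `Wire`, and `NumberRange` APIs; the specific bands and gains are just illustrative:

```lua
-- Two equalizers in series: the first cuts the lows, the second notches the mids.
local eq1 = Instance.new("AudioEqualizer")
eq1.MidRange = NumberRange.new(200, 2000) -- middle band spans 200 Hz to 2 kHz
eq1.LowGain = -40 -- heavily attenuate everything below 200 Hz
eq1.Parent = workspace

local eq2 = Instance.new("AudioEqualizer")
eq2.MidRange = NumberRange.new(3000, 5000)
eq2.MidGain = -20 -- notch out 3-5 kHz
eq2.Parent = workspace

-- Chain eq1 into eq2; eq2's output can then be wired onward
-- (e.g. to an AudioEmitter or AudioDeviceOutput).
local wire = Instance.new("Wire")
wire.SourceInstance = eq1
wire.TargetInstance = eq2
wire.Parent = eq2
```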
Not working with PreloadAsync was not intended – we’ll get that fixed.
We can also add `.Ended` and `.Looped` events to `AudioPlayer`.
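If those events ship as described, usage could look something like this sketch (the asset id is a placeholder):

```lua
local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0" -- placeholder; use a real audio asset id
audioPlayer.Parent = workspace

-- Hypothetical until the event ships: fires when playback completes.
audioPlayer.Ended:Connect(function()
	print("Playback finished")
end)
audioPlayer:Play()
```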
Hey! Hope you don’t mind me asking, but how were the voice filters applied to the player’s microphone?
For this setup, it’s easier to do if `EnableDefaultVoice` is set to `false` and you create your own audio instances with a server script. In my place, there’s a server script that does this, and it also manages swapping between filters. The filters are just ModuleScripts that return a constructor function that returns a cleanup function. For example, the “Default” filter is this:
```lua
return function(from: Instance, to: Instance): () -> ()
	local wire = Instance.new("Wire")
	wire.SourceInstance = from
	wire.TargetInstance = to
	wire.Parent = from
	return function()
		wire:Destroy()
	end
end
```
For that function, `from` could be the player’s `AudioDeviceInput` and `to` could be an `AudioFader` that is used for the player’s character’s `AudioEmitter`. You wouldn’t want `to` to be the character’s `AudioEmitter` itself, because the player can reset, which would break the connection.
All the server does is call the constructor function for the player’s current filter when they first join and keeps track of the current cleanup function returned from that function. When the filter is changed, the cleanup function is called and the new filter’s constructor function is called and its cleanup function is kept. Rinse and repeat. That’s all there is to it.
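As a rough sketch of that pattern (the `Filters` folder, `cleanups` table, and `setFilter` name are my own illustration, not from the post):

```lua
local Filters = script.Filters -- hypothetical folder of filter ModuleScripts

-- The latest cleanup function returned by each player's current filter.
local cleanups: {[Player]: () -> ()} = {}

local function setFilter(player: Player, filterName: string, from: Instance, to: Instance)
	local cleanup = cleanups[player]
	if cleanup then
		cleanup() -- tear down the previous filter's wiring
	end
	-- Each filter module returns constructor(from, to) -> cleanup
	cleanups[player] = require(Filters[filterName])(from, to)
end
```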
Writing custom distance attenuation for AudioEmitters has proven to be very difficult. Could you guys possibly hurry this along?
One of the things that bugs me to no end is that if you enable the Audio API, all voice conversations can be heard from across the map.
This is awesome! But will we ever get events for the AudioPlayer, especially one that can let us know when the AudioPlayer has completed playing a sound?
With this setup, how did you manage the “hear myself” behavior, so that hearing yourself only happens locally if the microphone is set up on the server?
Not sure precisely how @YasuYoshida implemented it, but you could have a local script delete or re-add the `Wire` coming from your own `AudioDeviceInput`, so that you don’t always hear yourself. We do something like this out of the box when `VoiceChatService.EnableDefaultVoice` is `true`.
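A minimal local-script sketch of that idea, assuming the server parents your `AudioDeviceInput` (and its outgoing `Wire`) under your `Player` instance; adjust the paths to match your setup:

```lua
local Players = game:GetService("Players")
local localPlayer = Players.LocalPlayer

-- Locally remove the wires coming out of our own device input so we
-- stop hearing ourselves; other clients still receive our voice.
local input = localPlayer:WaitForChild("AudioDeviceInput")
for _, child in input:GetChildren() do
	if child:IsA("Wire") then
		child:Destroy()
	end
end
```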
Thank you, I’ll look into it. Another question: how do I enable the `Active` property of the `AudioDeviceInput`? When `VoiceChatService.EnableDefaultVoice` is `false` and I construct the `AudioDeviceInput` myself, I have to manually enable the `Active` property via the Properties panel in Studio, as scripts do not have access to it.
Hello, I was trying out voice chat in my experience and I’ve had game crashes. I don’t know why, because the game simply closes and nothing is registered in the server console. Is there any error reported about that?
great update. (all of the audio is coming from the in-game mic)
this can honestly make those club party games so much better, assuming all the players inside would have VC, since now we can have custom music interact with stuff
the example place made it pretty hard to understand how to script this, so I had to do trial and error until I understood it.
I’ve encountered the same issue with the walkie talkies / hand-held radios that other users have described:
Here’s a video I recorded using 2 separate accounts on different devices (a computer and a phone) showcasing this unintended behavior:
Note: During these tests, 1 account always had the microphone muted in-game (to ensure I didn’t talk into both microphones simultaneously, since both devices were in close proximity to me), but that shouldn’t prevent the account from hearing audio being transmitted through the walkie talkies by the other account. To verify this, I later went into an empty game, turned on the `EnableDefaultVoice` property of `VoiceChatService`, had 1 account mute the microphone in-game, and it was able to hear audio being transmitted from the other account (which was being routed through the `AudioEmitter` on the other account’s Character model).
In case the unintended behavior demonstrated in the video happens to apply to other new Instances that have been introduced with this revamped Audio API, I have a related question about the `AudioDeviceOutput`.
Use Case: I have been trying to create a calling feature where Voice Chat audio is routed directly from an `AudioDeviceInput` to an `AudioDeviceOutput` without needing to use proximity-based instances such as `AudioListener`s and `AudioEmitter`s.
Currently, the `AudioDeviceOutput` can be used to hear your own voice with the following setup (sketched in code after this list):

- `AudioDeviceInput` (Player1)
- `AudioDeviceOutput` (Player1)
  - `Player` property set to yourself (Player1)
- `Wire`
  - `SourceInstance` set to the `AudioDeviceInput`
  - `TargetInstance` set to the `AudioDeviceOutput`
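For reference, here is a server-side sketch of that list (the function name and parenting choices are my assumptions):

```lua
local function buildSelfMonitor(player: Player)
	local input = Instance.new("AudioDeviceInput")
	input.Player = player -- capture this player's microphone
	input.Parent = player

	local output = Instance.new("AudioDeviceOutput")
	output.Player = player -- only this player hears the result
	output.Parent = player

	local wire = Instance.new("Wire")
	wire.SourceInstance = input
	wire.TargetInstance = output
	wire.Parent = input
end
```

The modified version described next would just set `output.Player` to the other player instead.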
However, I then tried a modified version of this setup with 2 separate accounts in-game (not in Roblox Studio), where the `Player` property of the `AudioDeviceOutput` was instead set to the other player in the game, so that they should be able to hear one another regardless of the distance between Character models. Neither account was able to hear the other. If this is intended behavior, clarification would be appreciated.
I brought this example up in case it’s a similar issue to what was demonstrated in the video with the handheld radios / walkie talkies.