Hi, so I’m using the new Audio API, and I’m trying to make a visual voice chat indicator in the corner of the screen (see video).
My question is: how can I detect whether the local player can hear a specific sound, and how loud it is, so I can change the bars’ sizes?
Script, if needed:
-- Standard linear interpolation (the Lerp used below)
local function Lerp(a: number, b: number, t: number): number
    return a + (b - a) * t
end

-- Resamples the analyzer's spectrum into BinCount logarithmically spaced bins
local function GetMappedBins(BinCount: number): {number}
    local Bins: {number} = AudioAnalyzer:GetSpectrum()
    if not Bins or #Bins == 0 then
        local Empty: {number} = {}
        for Index = 1, BinCount do
            table.insert(Empty, 0)
        end
        return Empty
    end

    local Result: {number} = {}
    for Index: number = 1, BinCount do
        -- Logarithmic mapping gives the lower frequencies more resolution
        local BinPosition: number = math.pow(#Bins, Index / BinCount)
        local Lower: number = math.max(1, math.floor(BinPosition))
        local Upper: number = math.min(#Bins, math.ceil(BinPosition))
        local Fraction: number = BinPosition - math.floor(BinPosition)

        Result[Index] = Lerp(Bins[Lower], Bins[Upper], Fraction)
        Result[Index] = math.sqrt(Result[Index]) * 2 -- boost quiet bins
        Result[Index] = math.clamp(Result[Index], 0, 10)
    end
    return Result
end
local VisualiserSequence = Setup()
AudioPlayer:Play()

while task.wait() do
    local Bins = GetMappedBins(#VisualiserSequence + 8)
    for Index: number, Bar: Frame in pairs(VisualiserSequence) do
        local Bin = Bins[#Bins - Index + 1]
        Bar.Size = UDim2.new(0, 3, Bin * 5, 0)
    end
end
Instead of using AudioAnalyzer on the stream before it goes into an AudioEmitter, you could try wiring each AudioListener to an AudioAnalyzer, measuring the volume after the listener has heard any emitted streams
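As a rough sketch of that suggestion (the instance names here are illustrative, and Wire instances are used directly rather than a helper):

-- Minimal sketch: listen from the local camera, then meter whatever the listener picks up
local CurrentCamera = workspace.CurrentCamera

local Listener = Instance.new("AudioListener", CurrentCamera)
local Analyzer = Instance.new("AudioAnalyzer", Listener)
local DeviceOutput = Instance.new("AudioDeviceOutput", Listener)

-- Route the listener's output into the analyzer (for metering) and the device output (so it stays audible)
local ToAnalyzer = Instance.new("Wire", Analyzer)
ToAnalyzer.SourceInstance = Listener
ToAnalyzer.TargetInstance = Analyzer

local ToOutput = Instance.new("Wire", DeviceOutput)
ToOutput.SourceInstance = Listener
ToOutput.TargetInstance = DeviceOutput

-- Analyzer.RmsLevel and Analyzer.PeakLevel now reflect how loud the emitted streams are at the camera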
Hi, I don’t know if you’ll answer this, but I’ve been trying to do this since then;
I get an error when connecting the Microphone to the Listener.
local function PlayerAdded(Player)
    local Microphone: AudioDeviceInput? = Player:WaitForChild('AudioDeviceInput', 5)
    if not Microphone then
        warn("No microphone detected for", Player.Name)
        return
    end

    local AnalyzerInfo = {
        Name = Player.Name,
        DisplayName = Player.DisplayName,
        Image = Players:GetUserThumbnailAsync(Player.UserId, Enum.ThumbnailType.HeadShot, Enum.ThumbnailSize.Size420x420)
    }

    local CharacterAdded = function(Character)
        local HumanoidRootPart: Part = Character:WaitForChild('HumanoidRootPart')
        local Emitter = Character:WaitForChild('AudioEmitter')
        warn('found')

        -- Create AudioListener and AudioAnalyzer
        local Listener: AudioListener = Instance.new("AudioListener", CurrentCamera)
        local DeviceOutput: AudioDeviceOutput = Instance.new('AudioDeviceOutput', Listener)
        local Analyzer: AudioAnalyzer = Instance.new("AudioAnalyzer", Listener)
        Listener.Name = `{Player.Name}'s Listener`

        ConnectDevices(Listener, DeviceOutput)
        ConnectDevices(Listener, Analyzer)
        ConnectDevices(Microphone, Listener)
        --^ Listener doesn't get connected and throws the error
    end

    if Player.Character then
        CharacterAdded(Player.Character)
    end
    Player.CharacterAdded:Connect(CharacterAdded)
end
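(For reference, ConnectDevices isn’t shown anywhere in this thread; presumably it’s a thin helper along these lines, wiring one audio instance’s output into another’s input:)

-- Assumed shape of the ConnectDevices helper used in the snippets above
local function ConnectDevices(Source: Instance, Target: Instance): Wire
    local Wire = Instance.new("Wire")
    Wire.SourceInstance = Source -- the stream comes out of this instance...
    Wire.TargetInstance = Target -- ...and feeds into this one
    Wire.Parent = Target
    return Wire
end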
Hey @ParadoxAssets, AudioEmitters and AudioListeners behave like virtual/in-world speakers & microphones respectively.
This means that AudioEmitters only have an Input pin – you wire stuff up to an emitter to beam it out into the world.
Similarly, AudioListeners only have an Output pin – they record their surroundings, and produce an audio stream which you can wire up to other things
So the reason that this line prints an error
ConnectDevices(Microphone, Listener)
is that it’s trying to connect a wire to the input of the listener, which doesn’t exist.
If both an Emitter and a Listener exist in the 3d world, the listener “hears” the emitted sound – but if you want to restrict this so that a listener only hears particular emitters, you can use the AudioInteractionGroup property, for example:
-- create an emitter and a listener with the same interaction group
-- so that the listener can *only* hear this particular emitter
local Emitter = Instance.new("AudioEmitter", Character)
Emitter.AudioInteractionGroup = Character.Name
local Listener = Instance.new("AudioListener", CurrentCamera)
Listener.AudioInteractionGroup = Character.Name
local DeviceOutput: AudioDeviceOutput = Instance.new('AudioDeviceOutput', Listener)
local Analyzer: AudioAnalyzer = Instance.new("AudioAnalyzer", Listener)
ConnectDevices(Listener, DeviceOutput)
ConnectDevices(Listener, Analyzer)
ConnectDevices(Microphone, Emitter)
Hi again! This worked, but there’s a new issue with the Analyzer.
I’ve searched everywhere, and according to the posts this seems to be a problem with the API.
Or it might be because, according to one of the staff’s posts, :GetSpectrum() is disabled – but that was back in February, and it works in the test I’ve done with an AudioEmitter.
Spectrum Analyzer
task.spawn(function()
    while task.wait() do
        warn(Analyzer.RmsLevel, Analyzer.PeakLevel)
        if CanPlayerHearSound(Analyzer, 0.01) then
            warn('hearing')
            print(table.concat(Analyzer:GetSpectrum(), ', '))

            local Bins = GetMappedBins(Analyzer, #VisualiserSequence + 8)
            for Index: number, Bar: Frame in ipairs(VisualiserSequence) do
                --local Bin = Bins[#Bins - Index + 1]
                local Bin = Bins[Index]
                Bar.Size = UDim2.new(0, 3, Bin * 5, 0)
            end
        else
            --for _, Bar: Frame in ipairs(VisualiserSequence) do
            --    Bar.Size = UDim2.new(0, 3, 0, 0)
            --end
        end
    end
end)
Yeah, :GetSpectrum() only returns an array if none of the AudioAnalyzer’s inputs come from an AudioDeviceInput – PeakLevel and RmsLevel still work for volume metering, though; can you rework your visualization to use volume levels alone?
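For what it’s worth, here’s a minimal sketch of a volume-only version, assuming the Analyzer and VisualiserSequence from the earlier snippets (the threshold and scale factors are just placeholders to tune):

task.spawn(function()
    while task.wait() do
        -- GetSpectrum() is empty here because the input chain starts at an AudioDeviceInput,
        -- so use RmsLevel for overall loudness instead
        local Loudness = math.clamp(Analyzer.RmsLevel * 10, 0, 10)
        if Loudness > 0.1 then -- crude "is anything audible?" threshold
            for Index: number, Bar: Frame in ipairs(VisualiserSequence) do
                -- Taper the bars toward the ends so the meter still has some shape
                local Falloff = 1 - math.abs(Index - (#VisualiserSequence + 1) / 2) / #VisualiserSequence
                Bar.Size = UDim2.new(0, 3, Loudness * Falloff, 0)
            end
        else
            for _, Bar: Frame in ipairs(VisualiserSequence) do
                Bar.Size = UDim2.new(0, 3, 0, 0)
            end
        end
    end
end)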