I’d like to report a bug with the new Audio API. I’m not sure if anyone has already noticed or mentioned it, but if so, I apologize in advance.
The issue happens with the AudioPlayer instance, specifically with the AssetId property. When I input just the numeric ID, it doesn’t work because the rbxassetid:// prefix is missing. With the Sound instance, however, I can input just the ID and the rbxassetid:// prefix is added automatically.
Could you let me know if you’re planning to bring this feature back, or if it’s supposed to work but is currently bugged?
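In the meantime, a possible workaround (a minimal sketch; the asset ID below is a placeholder) is to prepend the prefix yourself whenever you only have the numeric ID:

```lua
-- Workaround sketch: AudioPlayer.AssetId expects the full content URI,
-- so build it manually from the numeric ID.
local audioPlayer = Instance.new("AudioPlayer")

local function setAssetById(player, id)
	player.AssetId = "rbxassetid://" .. tostring(id)
end

setAssetById(audioPlayer, 1843404009) -- hypothetical asset ID
```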
For some reason, Sound instances are not being picked up by AudioListeners, while AudioPlayers are. This can be seen in this video.
I’ve uploaded the place file here for further inspection. soundtest.rbxl (115.5 KB)
Sound behaves like an AudioPlayer + AudioEmitter + AudioListener + AudioDeviceOutput all in one; it does everything from playing to spatializing & rendering a file, making it difficult to ‘intercept’ the signal flow without accidentally breaking something.
We would have liked to make this compatible with AudioListeners, but there are some subtle semantics especially when SoundEffects and SoundGroups are in the mix.
Currently, AudioListeners can only hear AudioEmitters.
Setting the attenuation to a flat curve does not ‘envelop’ the listener the way the classic Sound instance does when attached to a BasePart. Unfortunately, this isn’t a workaround for the original behavior.
I would love to use the new audio APIs, but volumetric sound from parts is an absolute must. Most developers use volumetric sounds to create an ‘ambience’ for a particular room or region in a map, and it’s a simple, no-code solution. Achieving something similar with audio emitters/listeners without code is imperfect and clunky. A scripted solution that moves the emitter based on the listener’s position relative to the BasePart’s volume would work better, but it adds a lot of technical bloat to recreate a behavior that was already possible with the legacy Sound instance.
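The scripted approach described above can be sketched roughly like this; it is a sketch under assumptions (the part and emitter names are placeholders, and it assumes a block-shaped part with an emitter already wired to an AudioPlayer). Each frame, it snaps the emission point to the closest point on the part relative to the camera:

```lua
-- LocalScript sketch: approximate volumetric sound by moving one emitter
-- to the closest point on a block-shaped part relative to the listener.
local RunService = game:GetService("RunService")

local ambientPart = workspace:WaitForChild("AmbientRegion") -- hypothetical part
local emitter = ambientPart:WaitForChild("AudioEmitter") -- assumed wired to an AudioPlayer

local attachment = Instance.new("Attachment")
attachment.Parent = ambientPart
emitter.Parent = attachment -- AudioEmitters emit from their parent's position

local function closestPointInPart(part, worldPos)
	-- transform into the part's object space, clamp to its half-extents,
	-- then transform back to world space
	local localPos = part.CFrame:PointToObjectSpace(worldPos)
	local half = part.Size / 2
	local clamped = Vector3.new(
		math.clamp(localPos.X, -half.X, half.X),
		math.clamp(localPos.Y, -half.Y, half.Y),
		math.clamp(localPos.Z, -half.Z, half.Z)
	)
	return part.CFrame:PointToWorldSpace(clamped)
end

RunService.RenderStepped:Connect(function()
	local listenerPos = workspace.CurrentCamera.CFrame.Position
	attachment.WorldPosition = closestPointInPart(ambientPart, listenerPos)
end)
```

If the listener is inside the part, the clamped point equals the listener’s position, so the sound envelops them much like the legacy volumetric behavior.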
I know you mentioned it’s technically tricky with the new audio system, but is there any hope of some form of volumetric sound being looked at in the future? @ReallyLongArms Thanks, I appreciate all the hard work that went into this!
I am having trouble enabling these features. For some reason, I can’t seem to find a way to enable them through the beta tab. I have closed and reopened my studio several times, and even reinstalled it, but the issue still isn’t fixed. Any help?
Has anyone tried setting AudioDeviceInput’s AccessType property to Enum.AccessModifierType.Allow while getting voice chat to work?
I tried this and added the user ID of every player via the SetUserIdAccessList method on the server, but no sound is produced at all. The player’s microphone button doesn’t even fill green while they talk, which would normally indicate that audio is being transmitted.
Or is this a bug?
Hey @homermafia1, I tried making a small script and it seems to work on my end – one thing that might be surprising is that with Enum.AccessModifierType.Allow, a Player must belong to their own device’s UserIDAccessList in order to send audio up to the server – it’s not only a receiver-side access-control-list
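Based on the note above, a server-side sketch might look like the following; this is an assumption-laden illustration (it assumes each Player gets an AudioDeviceInput parented under them when voice chat is active). The key detail is that with Enum.AccessModifierType.Allow, every sender must appear in their own device’s access list too:

```lua
-- Server-side sketch: with AccessModifierType.Allow, each player's
-- AudioDeviceInput must list every UserId allowed to send audio,
-- including the owning player themselves.
local Players = game:GetService("Players")

local function refreshAccessLists()
	local ids = {}
	for _, player in Players:GetPlayers() do
		table.insert(ids, player.UserId)
	end
	for _, player in Players:GetPlayers() do
		local input = player:FindFirstChildOfClass("AudioDeviceInput")
		if input then
			input.AccessType = Enum.AccessModifierType.Allow
			input:SetUserIdAccessList(ids)
		end
	end
end

Players.PlayerAdded:Connect(refreshAccessLists)
Players.PlayerRemoving:Connect(function()
	task.defer(refreshAccessLists) -- rebuild after the player has left
end)
```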
I saw that the setting to disable voice chat was moved to the Roblox menu (I'm no longer able to disable VC - #4 by letsgoroger), but it looks like it only shows on desktop. Is that setting coming to mobile devices anytime soon? It is extremely annoying to have to mute everyone individually.
It probably will be at some point. They’re most likely keeping both systems around for compatibility’s sake, since all existing Roblox experiences were built on the prior API.
Agreed, @ReallyLongArms, we do really need volumetric audio (basically, to have the emitters use the size of the part they’re connected to like the Sound object does). Without this, the system just doesn’t work well with large continuous sound sources (e.g. a fire).
Hey @Ed_Win5000 – volumetric emission is a bit tough to support in the new API, because we now allow arbitrarily many listeners to hear emitters simultaneously; each one might perceive a different degree of “width”. The volumetric emission implementation for Sounds assumed a single listener in this regard, and even then it only supported simple shapes like spheres, blocks, and cylinders.
In the new API, there’s the option to use multiple emitters for one AudioPlayer – this is something we didn’t have for Sounds. So for something like a bonfire, you could scatter dozens or even hundreds of emitters (emitters are cheap) around the surface of the fire, to get a sense of wideness. That approach would work for any part shape, too, not just simple ones!
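The multi-emitter approach described above might be sketched like this (a hedged example; the part name and asset ID are placeholders, and the emitter count and ring layout are arbitrary choices):

```lua
-- Sketch: scatter several AudioEmitters, all driven by one AudioPlayer,
-- around a part to approximate a "wide" volumetric source.
local firePart = workspace:WaitForChild("Bonfire") -- hypothetical part

local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0" -- replace with a real fire loop
audioPlayer.Looping = true
audioPlayer.Parent = firePart

for i = 1, 12 do
	local angle = (i / 12) * 2 * math.pi
	local offset = Vector3.new(math.cos(angle), 0, math.sin(angle)) * (firePart.Size.X / 2)

	local attachment = Instance.new("Attachment")
	attachment.Position = offset
	attachment.Parent = firePart

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = attachment

	-- one Wire per emitter, all fed by the same AudioPlayer
	local wire = Instance.new("Wire")
	wire.SourceInstance = audioPlayer
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

audioPlayer:Play()
```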
There’s also a new AudioChannelSplitter instance, which could be used to take apart the channels of a multichannel bonfire recording, and pan them around individually
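A possible shape for that splitter idea, under stated assumptions (the part name and asset ID are placeholders, and the per-channel pin naming via Wire.SourceName is my reading of how wires select a splitter output):

```lua
-- Sketch: split a multichannel recording into per-channel streams and
-- emit each channel from its own position around a part.
local part = workspace:WaitForChild("BonfirePart") -- hypothetical part

local audioPlayer = Instance.new("AudioPlayer")
audioPlayer.AssetId = "rbxassetid://0" -- replace with a multichannel recording
audioPlayer.Parent = part

local splitter = Instance.new("AudioChannelSplitter")
splitter.Parent = part

local toSplitter = Instance.new("Wire")
toSplitter.SourceInstance = audioPlayer
toSplitter.TargetInstance = splitter
toSplitter.Parent = splitter

for channel = 1, 2 do
	local attachment = Instance.new("Attachment")
	attachment.Position = Vector3.new(channel == 1 and -2 or 2, 0, 0)
	attachment.Parent = part

	local emitter = Instance.new("AudioEmitter")
	emitter.Parent = attachment

	local wire = Instance.new("Wire")
	wire.SourceInstance = splitter
	wire.SourceName = "Output" .. channel -- assumed per-channel pin naming
	wire.TargetInstance = emitter
	wire.Parent = emitter
end

audioPlayer:Play()
```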
LongArms you legend
Finally we get a channel splitter!! Thank you. I don’t know how I didn’t notice it until now. Is there a place where I can look at change logs?
Hey @panzerv1 – this was first mentioned in the 660 release notes. There’s also a channel mixer, which combines several streams into one multichannel stream (i.e. the opposite of splitting); I commented an example showing how you can use that for stereo/left-right panning.
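One possible sketch of that left/right idea (not the example the post refers to; asset IDs are placeholders and the per-channel pin naming via Wire.TargetName is an assumption):

```lua
-- Sketch: combine two mono streams into one stereo stream with
-- AudioChannelMixer, assigning each source to one output channel.
local mixer = Instance.new("AudioChannelMixer")
mixer.Parent = workspace

local leftPlayer = Instance.new("AudioPlayer")
leftPlayer.AssetId = "rbxassetid://0" -- placeholder mono asset
leftPlayer.Parent = mixer

local rightPlayer = Instance.new("AudioPlayer")
rightPlayer.AssetId = "rbxassetid://0" -- placeholder mono asset
rightPlayer.Parent = mixer

local leftWire = Instance.new("Wire")
leftWire.SourceInstance = leftPlayer
leftWire.TargetInstance = mixer
leftWire.TargetName = "Input1" -- assumed left-channel pin
leftWire.Parent = mixer

local rightWire = Instance.new("Wire")
rightWire.SourceInstance = rightPlayer
rightWire.TargetInstance = mixer
rightWire.TargetName = "Input2" -- assumed right-channel pin
rightWire.Parent = mixer

-- render the combined stereo stream to the local output device
local output = Instance.new("AudioDeviceOutput")
output.Parent = mixer

local outWire = Instance.new("Wire")
outWire.SourceInstance = mixer
outWire.TargetInstance = output
outWire.Parent = output
```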
We’re working on getting similar examples added to the docs page as well – note these aren’t fully live yet, but we should be flipping the flag in the next week or so