Multi-client beat sync via playback loudness

I’m currently working on a prototype for a bit of an unusual rhythm game.

Because I’d be handling a large number of songs, I’ve resorted to sound.PlaybackLoudness (imperfect as it is) as a way of quickly “key framing” a song to find out when the music peaks, when beats happen, etc. Then, using this info, I sync certain “game elements” (moving parts, GUI screens, lights) to the music.

The only problem (from what I’ve read on other dev-forum posts) is that PlaybackLoudness only returns a value when the sound is played on the client, not the server.

That’s a big issue for me, as these “game elements” have a bit of random generation in their effects. In other words, when a “key frame” of a song plays, the synced element might not always use the same light-flash pattern or produce the same stable, repeatable effect. I rely on server → client replication to make sure these generated effects match across all players, since any mismatch could badly break the gameplay.

If I can’t rely on server-side scripts to generate “key frames” for these “elements” via PlaybackLoudness, though, that would mean I’d have to run it client-side and send back any data I need that way. (Right?)

Which brings up the (to me) confusing problem of sending this large amount of key-frame data back over the client-server boundary, and leaves me with these questions:

  1. What would be the best way to do that?
  2. How do I pick what client(s) to run the “key framing” script for the song on?

This is seriously important, as the keyframes generated on a client would be used to sync elements for every other player. If it’s not done right, the core gameplay would be toast.

  3. Is there any way to avoid these issues? (I mean, am I going about this all the wrong way?)

Hopefully that’s a good enough explanation. Feel free to tackle this in any way you’d like, I’m honestly at a bit of a loss so any advice would be appreciated. Thanks in advance though to anybody who takes time to reply :smile:

If anything I tried to explain is confusing, please ask me questions and I’ll reply ASAP with more detail. I’d like to note I do have some code written, but I’ve chosen to leave it out for the moment and just have a more open discussion.

Initial thoughts:

  1. Simplest solution

    • Forgive me if I’m misunderstanding your post here, but if you’re intending to sync effects across clients, that would suggest you’ve synced the time position of the audio playback?
    • If so, does the server have to be aware of anything other than the current timepoint of the song?
    • If it’s just to produce the random values, could you not generate those values beforehand and/or seed them by the timepoint? What kinds of things are you actually hoping to do here? More insight here might give us an opportunity to suggest other solutions
    • Imho, based on your current post: it sounds like you could just ensure that the audio timepoint is synced - to the best that you can get it - and then play the effects on each of the clients (since all clients should be getting approximately the same values at that point in time); and any time-related events that need server involvement can stay there?
  2. If the server needs to be aware of the Audio’s PlaybackLoudness:

    • At what frequency are you hoping to play the effects? If there aren’t thousands of them per audio clip, could you not generate them in advance?
    • This may actually be the more preferable option if you’re hoping to do anything more complex than reading the RMS
    • e.g. you could perform actual waveform analysis in some external software, record it as JSON, and read the values at runtime within Roblox from both the server/client based on the current timepoint
  3. If none of these are acceptable:

    • You may have to wait for AudioAnalyzer and the like to be released, more details here
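To illustrate the seeding idea from (1): the sketch below is in Python purely for readability (you’d write the real thing in Luau), and every name and constant in it is made up. Each client quantizes the synced playback position, derives a seed from it, and feeds that seed to a deterministic RNG - so every client computes the same “random” effect without anything crossing the network.

```python
import random

# Assumed quantization step (seconds) so small client clock drift still lands
# on the same step. Tune to taste; this value is just an example.
BEAT_STEP = 0.25

def effect_params(song_id: int, time_position: float) -> dict:
    """Derive deterministic 'random' effect parameters from song + timepoint.

    Any two clients that agree on song_id and (roughly) the playback position
    will produce identical results, with no replication needed.
    """
    step = int(time_position / BEAT_STEP)      # quantized timepoint
    rng = random.Random(song_id * 1_000_003 + step)  # same seed -> same values
    return {
        "light_pattern": rng.randrange(8),            # e.g. which flash pattern
        "intensity": round(rng.uniform(0.5, 1.0), 3), # e.g. brightness scale
    }
```

Two clients a tenth of a second apart still agree, because both positions quantize to the same step.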

Edit: Realised I missed your questions regarding client->server syncing:

I’m not too keen on this idea tbh; there are far too many pitfalls, not to mention the significant latency this approach would add. I’m somewhat doubtful it will work in a reasonable amount of time if you’re hoping to instantiate the effects whilst the song is playing. Though, as mentioned earlier, this kind of depends on what your actual usecase is.

If you’re desperate though, one thing you could do - although this really depends on how necessary this is for your game - is make all clients send you their PlaybackLoudness value at some desired interval:

  • Validate the PlaybackLoudness by comparing each incoming value against all the others received during that interval
  • Compute a valid PlaybackLoudness value across those values through some method (e.g. mean ± SD) after throwing away invalid values
  • Use that value to compute the effect values before sending them back to the client
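If you did go that route, the validation step might look something like this (a rough sketch; the one-standard-deviation cutoff and the function name are just example choices, not recommendations):

```python
from statistics import mean, stdev

def aggregate_loudness(samples: list):
    """Combine PlaybackLoudness reports from many clients into one value.

    Discards reports more than one standard deviation from the mean
    (the 'mean +/- SD' idea above), then averages what is left.
    """
    if not samples:
        return None
    if len(samples) < 3:
        return mean(samples)  # too few reports to judge outliers
    m, s = mean(samples), stdev(samples)
    kept = [v for v in samples if abs(v - m) <= s] or samples
    return mean(kept)
```

For example, with reports of 100, 102, 98 and a bogus 500, the 500 falls outside one standard deviation and gets dropped.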

You’ve pretty much understood everything spot on!

Everything like the effects / key framing is done live for the most part as the song plays out. (I never even thought of doing the analysis ahead of time and reading it back via JSON!) I definitely think I’m going to tinker with that a bit, as I’ve never really liked the idea of using PlaybackLoudness and only used it for simplicity’s sake.

(Thanks for your link to the whole AudioAnalyzer by the way, I know it isn’t out yet but it’s cool to see the details on it!)

I think I got too lost in the idea of generating each effect separately, one at a time, and put too much importance on a direct “key frame” → “effect generation” process. That alone would be super resource-heavy, let alone replicating it to clients.

Your idea of a more seed-based approach using the time point of the song is a way better alternative! It easily eliminates any need to send mass data over the client-server boundary, keeps the randomization I wanted, and lets me do most of the work on the player’s client, which would hopefully head off any latency issues down the line.

Sorry for any gaps in explanation of what exactly I’m trying to do. I think it’d be hard to explain without first writing a full essay, which is also why I left my code samples out of the post.

Either way, you seriously helped clear the mental block I was having and gave me some great ideas to work with. Still a bit of a beginner programmer, so it’s awesome to read responses like yours.

Thanks for taking the time to respond! :hearts:

> Sorry for any gaps in explanation of what exactly I’m trying to do. I think it’d be hard to explain without first writing a full essay, which is also why I left my code samples out of the post.

> … helped clear the mental block I was having …

We’ve all been there don’t worry, glad I could help!

> Everything like the effects / key framing is done live for the most part as the song plays out. (I never even thought of doing the analysis ahead of time and reading it back via JSON!) I definitely think I’m going to tinker with that a bit, as I’ve never really liked the idea of using PlaybackLoudness and only used it for simplicity’s sake.

It will definitely give you a lot more flexibility and accuracy than PlaybackLoudness, at least until the AudioAnalyzer is out of beta, that’s for sure.

The good thing about audio signal processing is that it’s quite popular, with lots of libraries available in different languages. I remember thinking that LibRosa (Python) was pretty good - at least for my usecase at the time - as it was really well documented. A quick search for beat detection with LibRosa revealed this example (alongside many more on that site), which may help you get started.

Good luck with the experimenting :slight_smile:

Had to come back to comment on this, cause it’s just nuts

I’ve always relied on just the toolset Roblox gives you with their studio resources, programming, and documentation, but stuff like this is a real eye-opener as to the pros of using outside tools/programs.

That’s some great stuff, thanks again!

I had to come back for one last reply, because this is seriously the most useful info I have ever received from ANY type of online dev forum. It’s honestly unbelievable how you managed to link stuff to me that perfectly covers all my wants/needs.

Librosa was absolutely everything I could’ve ever asked for. It took me a little while to get used to importing the library and its functions, as the documentation is a bit wordy. It’s worked wonders though: in literally 4 commands I can load any .mp3 I want to use and extract all the data I want from it.

Which compared to the way I was going about this before, is a massive improvement.

I’ve even been able to streamline it down to a single script, meaning I can simply do analysis on multiple files at once - yet ANOTHER thing I wanted to be able to do easily.
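For anyone curious, the batch part boils down to something like this (a rough sketch, not my exact script - the folder layout, output naming, and the idea of injecting the per-file analyzer are all just illustrative choices):

```python
import json
from pathlib import Path
from typing import Callable

def batch_analyze(folder: str, analyze: Callable[[Path], dict],
                  pattern: str = "*.mp3") -> list:
    """Run `analyze` on every matching audio file and write one JSON per song.

    The analyzer is injected so the librosa-specific part stays swappable.
    Returns the list of JSON paths that were written.
    """
    out_paths = []
    for audio in sorted(Path(folder).glob(pattern)):
        result = analyze(audio)              # e.g. beat times from librosa
        out = audio.with_suffix(".json")     # song.mp3 -> song.json
        out.write_text(json.dumps(result))
        out_paths.append(out)
    return out_paths
```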

Think I’ve definitely learned not to be so scared of outside programs/resources. This is nuts. Thank you again so, sooo much for your reply.
