Hi, thank you for reading!
Just a question about the approach. I'm making a playable piano. A Sound is created when a player touches a key, and after a few seconds it gets destroyed.
Should I create those Sound instances on the server or on the client?
If I do it on the client, should I fire an event to the other clients so everyone can hear it? The Roblox hub says that sounds on the client replicate (but not all of their properties).
(I don't clearly understand that warning…)
Right now I'm doing it on the server, placing the instances inside the part so the sound gets a 3D atmosphere. I was planning to just fire server events from the client, so the server piano plays the notes.
This means that if you play a sound, any player near the sound's position will hear it at high volume, and players farther away at low volume.
Because RemoteEvents and RemoteFunctions can introduce delay, I believe you should trigger sound instances directly from the client, but also send a RemoteEvent to the server at the same time, which adjusts the other properties that are not replicated. I'm also not sure what you mean by "the sound is destroyed". You probably already do this, but I'll say it just in case: an instance only needs to be created once, at the start of the game. The Sound then just plays and stops.
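A minimal sketch of that approach, assuming a RemoteEvent named `PlayNote` and a `NoteSounds` folder in ReplicatedStorage (both names are just illustrative):

```lua
-- LocalScript: play the note immediately on this client,
-- then tell the server so it can handle everyone else.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local playNote = ReplicatedStorage:WaitForChild("PlayNote") -- RemoteEvent (assumed name)

local function onKeyPressed(noteIndex)
	-- Play locally first so the pianist hears no network delay
	local sound = ReplicatedStorage.NoteSounds[noteIndex]:Clone()
	sound.Parent = workspace
	sound:Play()
	sound.Ended:Connect(function()
		sound:Destroy()
	end)

	-- Then ask the server to play the same note for everyone else
	playNote:FireServer(noteIndex)
end
```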
I don't know if your piano is a user-interface keyboard not visible to other players, or an object in Workspace. If it is an object, you can put the script inside the piano and don't need to worry about remotes.
Yeah, so the "warning" just means that the server is doing the work and producing that 3D atmosphere effect I'm using.
I just want to know which approach is better: sounds on the client or sounds on the server. The client will surely be more accurate for each client, but firing all clients at the same time and doing the work there would also take some time, so it might end up similar to using server-side sound only. And with server-side sound, the 3D atmosphere works automatically, instead of firing clients to adjust volume, Doppler effect, etc.
About creating the instance only once: it's difficult to do it like that, because I have 6 audio files of notes, each containing several notes, 61 notes overall across the 6 files (by changing TimePosition I obtain each note for the right octave). If those overlap it won't work, so I create a new Sound and parent it to the part every time a player presses a key, then remove it after 5 seconds: creating a new one and destroying it once it's finished.
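For what it's worth, a rough server-side sketch of that create-play-destroy cycle. The `notes` table mapping each key to an asset ID and a TimePosition offset is hypothetical; the asset IDs are placeholders:

```lua
-- Server Script: one fresh Sound per key press, removed after 5 seconds.
local Debris = game:GetService("Debris")

-- Hypothetical lookup: key number -> which audio file and where the note starts
local notes = {
	[1] = { soundId = "rbxassetid://0000000", timePosition = 0 },
	[2] = { soundId = "rbxassetid://0000000", timePosition = 1.5 },
	-- … 61 entries total across the 6 files
}

local function playNote(pianoPart, key)
	local info = notes[key]
	if not info then return end

	local sound = Instance.new("Sound")
	sound.SoundId = info.soundId
	sound.TimePosition = info.timePosition
	sound.Parent = pianoPart -- parenting to a part gives the 3D positional effect
	sound:Play()

	Debris:AddItem(sound, 5) -- destroy the instance after 5 seconds, as described above
end
```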
It's an object and a user interface too. Right now I'm working with the keyboard only, but it requires client-side code for the GUI as well.
Ok, now I understand this better. Unfortunately, I don't know which would be more efficient, but I guess you also have to decide what is more practical and what to prioritize. Is it more important that the client playing the piano hears the sound with minimal delay (there are downsides here, of course), or the automatic atmosphere adjustment?
Yup, you're right, and that's the point of everything.
Adjusting the atmosphere for each client could cost about as much as just using the server and accepting the delay, while giving better timing performance, which matters for an instrument.
I also don't know how it will behave when the server is running with many clients, creating and destroying many sounds at the same time while animating the keys and so on… Maybe I just need some tests to decide.
Thank you so much for your opinions c: I still don't know what to do, but I will keep testing.
Yeah, to achieve an optimal solution, you'll definitely have to measure timing. This is an interesting problem, so I'll work on it. Maybe there is also another alternative to playing the sound… I'll think about it and let you know. Another question: is there only one piano in the game, or multiple pianos? That is quite an important aspect.
The idea for the future is to have more pianos, but the main idea is adding more instruments using the same mechanics: drums, strings (violin, cello), guitar, bass… which would probably lead me to use the client side. But since it's impossible to sync clients to play together as a band, well… just dreaming… Right now it's just a piano for a lobby. Maybe there will be more in the game, but those don't need to be in sync.
EDIT: What about muting the server sound for the playing client, making that player listen to the client side, and having the other players listen to the server side?
That player won't get any delay, so they can play smoothly, and the server receives the signal roughly in sync to play it for the other clients, preserving the automatic volume, Doppler effect, and so on.
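One way this could be sketched, under the assumption that the server tags each note Sound with an attribute naming who played it (the `PlayerId` attribute and `workspace.Piano` path are illustrative, not from the original):

```lua
-- LocalScript: when the server creates a note Sound, silence it locally
-- if this client is the one who played it (it already heard its own local copy).
local Players = game:GetService("Players")
local localPlayer = Players.LocalPlayer

workspace.Piano.DescendantAdded:Connect(function(inst)
	if inst:IsA("Sound") and inst:GetAttribute("PlayerId") == localPlayer.UserId then
		inst.Volume = 0 -- mute only the server copy, only for the pianist
	end
end)
```

Other clients never match the attribute, so they still hear the server's 3D-positioned copy as normal.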
I should add that the current server-only approach works fine… and it makes me follow the timing (delay) I'm feeling, adjusting myself to play better… So I still don't know; with the delay it feels like a challenge to play correctly, but… idk…
It sounds like you have a good idea: the player should be the first one to hear the optimized sound, and hear it precisely. Could you explain what you meant in the other part (the edit)?
Since you might add bands in the future, maybe a metronome could come in handy. This is just brainstorming, so it might sound weird, and the script for it could be pretty complicated, but you could measure each player's ping, store the delays in tables, and use the average to find the delay. When you have multiple players in a band with multiple instruments and delays, the server could play these sounds so that the starting point is common for everyone in the band. The instruments stay synchronized, and other players just hear the music as they normally would, because the delay doesn't really matter anymore.
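A rough sketch of the ping-averaging part, assuming a RemoteFunction named `PingCheck` whose client-side callback simply returns immediately (all names here are illustrative):

```lua
-- Server Script: measure each player's round-trip time and keep a rolling average.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local pingCheck = ReplicatedStorage:WaitForChild("PingCheck") -- RemoteFunction (assumed)

local pings = {} -- [player] = list of recent round-trip samples (seconds)

local function samplePing(player)
	local start = os.clock()
	local ok = pcall(function()
		pingCheck:InvokeClient(player) -- client callback just returns right away
	end)
	if not ok then return end

	local samples = pings[player] or {}
	table.insert(samples, os.clock() - start)
	if #samples > 20 then
		table.remove(samples, 1) -- keep a rolling window of 20 samples
	end
	pings[player] = samples
end

local function averagePing(player)
	local samples = pings[player]
	if not samples or #samples == 0 then return 0 end
	local sum = 0
	for _, s in ipairs(samples) do
		sum += s
	end
	return sum / #samples
end
```

For the common starting point, the server could then delay each band member's notes by roughly the highest average ping minus that member's own ping, so everything lines up at playback time.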
Yeah!! Sounds complicated, at least for me, but the method you described is perfect!
It could be possible to measure that ping and use it to delay the instruments; that's a great idea! It sounds hard, to be honest, but it seems like a great way to sync all the band members.
What I meant in the edit is just that playing the piano with the delay caused by the server feels funny, like learning a new instrument: you have to adapt to its timing response. It's kind of fun to play with the delay, and sometimes it adds a little unexpected beauty and new ideas to what the player is… playing… Idk how to explain it; once you hear the delay, it becomes a fun challenge to adapt and build progressions around the chords you're playing.
Thank you so much for shedding some light on this topic. I'll be thinking about that ping/delay fix for musicians and checking how it works for the audience. Thank you!!
No problem, it's just the next step. I need a good ping-reader script for my own project too, so I can paste it here when it's finished, if you still need it. You can likely find pretty good ones here on this forum; if not, you can at least use them as a template. If I find some free time in my schedule, I'll include average-ping calculation and storage, and think about how this common starting point could work.
I can imagine how the instrument sounds currently. So, if I understand correctly, what you are working on now is muting the server for the client who is playing the instrument, and playing the sounds directly, before the rest of the code executes, for no one except that player to hear. That's what I was thinking about too. But you have to find a way to mute the music only, and not the other server sounds. And now I'm probably repeating your words.
EDIT: I misread some keywords about the funny feeling of playing the piano on the server. Why not leave it like that, then? Personally, it would maybe seem even more fun to play such a piano. Of course, the piano should fit the game. If precision plays an important role (like in a band), then the client should hear the sounds before anyone else. On the other hand, having "the fun" piano around (not saying the other one isn't fun) would be cool.
Well, not muting the whole server for the client, just the notes playing on it, and making the playing client listen to their own notes on client timing, while the other clients listen to the server. Maybe, due to delay, they would or wouldn't hear exactly what the player is playing, but it seems like a good direction to walk in.
I will not work on this until I finish the main piano systems, hopefully tomorrow I will get deep into this.
Thank you so much for helping me figure out how to achieve a ping-fix sync system too. I've never tried that (yet); I'll check it out and learn it soon. Thank you for releasing a fix to help us understand this better! c:
Use the server. Using the client puts your project at risk of exploits.
This is an edit to the previous post, which contained an inaccurate statement. I apologize for that. @iGottic, you are right, most likely the best way to do this is on the server. My only concern is about the extent to which exploiters can exploit here.
With FilteringEnabled (which is on by default), none of the sound instances newly created by a client are replicated. However, custom sound IDs can still be sent (in this particular case). Since the desired result requires creating new sound instances each time a key is pressed, that has to happen on the server, and of course sanity checks have to be set up (in case an exploiter tries to send unlisted sound IDs or spam invalid ones). This example doesn't even require checking sound IDs, because they are stored on the server beforehand, so the player only needs to pass which key was pressed along with the fired event. All of this could affect performance and cause a delay for everyone too, which is why the piano player might want to hear the sounds directly after a key is pressed. To prevent others from hearing music played directly from the client (the pianist), the RespectFilteringEnabled property of SoundService can be used. This should be the best bet for anyone trying to stop sound exploits, as any sounds played by a client are then heard only by that client.
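A sketch of those server-side sanity checks, assuming the same illustrative `PlayNote` RemoteEvent; the rate-limit threshold and `playNoteOnServer` helper are hypothetical:

```lua
-- Server Script: the client only sends a key number; the server validates it
-- against its own data, so no sound IDs ever cross the network.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local playNote = ReplicatedStorage:WaitForChild("PlayNote") -- RemoteEvent (assumed)

local NUM_KEYS = 61
local lastPress = {} -- [player] = os.clock() time of last accepted press

playNote.OnServerEvent:Connect(function(player, key)
	-- Reject anything that isn't an integer key in range
	if type(key) ~= "number" or key % 1 ~= 0 then return end
	if key < 1 or key > NUM_KEYS then return end

	-- Simple rate limit against spam (threshold is a guess; tune it)
	local now = os.clock()
	if lastPress[player] and now - lastPress[player] < 0.02 then return end
	lastPress[player] = now

	-- playNoteOnServer(key) -- hypothetical: look up the stored sound for this key and play it
end)
```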
Please correct me if I’m wrong.