How to Detect Poor Performance?



So I was wondering if there was any possible way with a client script to detect if that client’s machine was struggling with certain effects like particle emitters?

For reference, the game I’m developing is a first-person shooter called Fray, and I’ve optimized network and memory usage to half that of CBRO or PF. But some users still complain about lag when shooting, and the only thing that comes to mind is particle emitters.

The particle emitters are part of what makes the game look really good, so disabling them outright is not something I want to do. I also don’t want to make it a setting, because not all players will know to turn them off to improve performance.

Is there a way I’d be able to detect with a script if a player was lagging, and then automatically lower/disable the particle emitters for them?


You could probably do something with a local Heartbeat loop: save the time every Heartbeat (essentially every frame), and if the time between heartbeats is too large, lower the particle effects.

edit: Heartbeat passes the time in between frames to its handler:

    game:GetService("RunService").Heartbeat:Connect(function(step) print("Time between each loop: " .. step) end)
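Put together, the suggestion above might look something like the sketch below. The 1/30-second threshold and the blanket “disable every emitter” response are placeholder choices for illustration, not anything Fray actually does:

```lua
local RunService = game:GetService("RunService")

local LOW_FPS_THRESHOLD = 1 / 30 -- a frame longer than this means the client dropped below ~30 FPS
local reduced = false

RunService.Heartbeat:Connect(function(step)
	-- 'step' is the time since the last Heartbeat, so a large value means a slow frame.
	if not reduced and step > LOW_FPS_THRESHOLD then
		reduced = true
		-- Placeholder response: disable every ParticleEmitter in the workspace.
		for _, descendant in ipairs(workspace:GetDescendants()) do
			if descendant:IsA("ParticleEmitter") then
				descendant.Enabled = false
			end
		end
	end
end)
```

In practice you’d probably want to react to a sustained dip rather than a single slow frame, but the shape of the loop is the same.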


That helps, but I thought RenderStepped returned the time between frames and Heartbeat was every two frames?


Heartbeat will change depending on the frame rate, and RenderStepped fires every render frame (about every 1/60th of a second; this can also change depending on frame rate).

As stated on the wiki,

"The Heartbeat event fires every frame in the RunService. The step argument indicates how much time has passed between frames - usually around 1/60th of a second.

Please note that this will vary depending on the performance of the machine. If the game is only running at 40 FPS, that means that Heartbeat will fire 40 times per second, and the step argument will be roughly 1/40th of a second."

RenderStepped and Stepped also have DeltaTime, but I’ve usually preferred using Heartbeat.


Heartbeat and RenderStepped (and Stepped, for that matter) run once per frame. So they are both in lockstep with the frame pipeline. The difference between the methods is where they run in the pipeline (RenderStepped runs before rendering and actually blocks rendering until it is complete, Heartbeat runs after physics, Stepped runs before physics and after animations are updated).


I had realised my mistake right before you replied and have edited my post accordingly.


Well, that really helps, but I’m still concerned about performance. I would only really need the time between frames right before enabling a particle emitter, so I can say “this user’s framerate is too low” and not enable it at all. Is there an efficient way to retrieve the time between frames without updating a variable every heartbeat?


By using the delta-time value that Heartbeat, RenderStepped, and Stepped pass to their handlers, you can avoid updating a variable yourself.

    if step > 1 / 30 then -- the frame took longer than 1/30 s, i.e. the client is below 30 FPS


Well, I guess I’m saying that I’m hesitant to add another function that runs every frame if I really don’t need it to. For example, I know I can call game:GetService("RunService").RenderStepped:wait(), which will introduce a one-frame delay whenever called, but NOT run every frame.

I can’t put the code in an existing function of mine that runs every frame because I use BindToRenderStep() instead of RenderStepped:connect()
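For what it’s worth, that wait call itself returns the elapsed frame time, so a one-off sample right before enabling an emitter could look like the sketch below. The 30 FPS cutoff and the emitter lookup are illustrative assumptions:

```lua
local RunService = game:GetService("RunService")

-- Wait() yields until the next Heartbeat and returns the time since the previous one,
-- so this samples a single frame without leaving a connection running every frame.
local function frameIsFastEnough()
	local dt = RunService.Heartbeat:Wait()
	return dt <= 1 / 30 -- only treat ~30 FPS or better as "fast enough"
end

-- Hypothetical usage: gate an emitter on the sampled framerate.
local emitter = script.Parent:FindFirstChildOfClass("ParticleEmitter")
if emitter then
	emitter.Enabled = frameIsFastEnough()
end
```

A single frame’s delta time is noisy, so you’d probably want to average a handful of samples before deciding.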


If the loop doesn’t have too heavy of a workload, then it shouldn’t affect performance.


Ok cool! Thanks for your help, I know back in the old days it was much more difficult to return a client’s framerate and it wasn’t always accurate.


Anything you’re using remotes for, however, typically shouldn’t be updated every frame. That’s how you make a Bandwidth Muncher 2000.


You should check out this article on the Wiki. I commonly use this to check and see if a client is lagging.


Actually, Fray rarely fires remotes. I’ve incorporated some very efficient methods in my netcode, so if you play Fray, not only is memory usage drastically lower than in other games, but sent network data is almost nothing (with occasional spikes for sending killcam data), and receive data never spikes above 50 KB/s. This may seem relatively normal, but look at games like Broken Bones or PF. Broken Bones averages above 100 KB/s, and PF’s lowest is around 45. I just don’t understand how those games even work; isn’t there a 60 KB/s limit?


Yes, remotes have a 60 KB/s limit. It probably has to do with handling most of the work on the client, which is why a lot of the exploits you see (such as infinite ammo, or changing guns from semi-automatic to automatic) are possible. The server controls the important business and the client does the visualisation, or the bulk of the work that would normally stress the server. I wouldn’t know though; it’s just speculation.


You can try getting the ping of the user.


Yep. On my projects, I’ve begun picking apart Luanoid (which uses RemoteFunctions, ew) and other controllers and assembling the pieces into my own custom character controller, so I’ll be running EVERYTHING on the server but visualization. Breaking exploits never felt so good…


Running less stuff on the server that replicates to the clients is also a good way to decrease traffic, even if it’s not remote-related; it’s still something you as a developer can control. Also, it’s probably not a good idea to have the character controller on the server, if that’s what you meant.


It’s gonna be custom, and the game will be ping-sensitive, so it shouldn’t be bad.