We recently made an update that improves frame rate and reduces input latency on iOS, Xbox, Mac, and Android; it will be rolling out to all other platforms in the coming weeks.
Part of this change involves processing input twice per frame:
- Once at the start of the frame.
- Once after the Heartbeat step.
No action is required unless you have implemented code that relies on all input being processed before the Heartbeat step; if so, you will need to update that code.
Please let us know if you have any questions or concerns below.
This change has caused an issue with being able to “free” / unlock the cursor while the player is in first person, for example, for a GUI.
Previously, it was possible to set UserInputService.MouseBehavior to Enum.MouseBehavior.Default on Heartbeat to achieve this. Now, specifically on Mac, when you attempt to move the cursor it keeps being pulled back to the centre of the screen.
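A minimal sketch of the pattern that previously worked (assuming a LocalScript, with the engine otherwise locking the cursor to centre while in first person):

```lua
-- LocalScript, e.g. in StarterPlayerScripts
local RunService = game:GetService("RunService")
local UserInputService = game:GetService("UserInputService")

-- Resetting MouseBehavior on every Heartbeat used to keep the cursor
-- free while a GUI was open; after the input-order change, the engine
-- re-centres the cursor on Mac despite this.
RunService.Heartbeat:Connect(function()
	UserInputService.MouseBehavior = Enum.MouseBehavior.Default
end)
```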
As always, incredibly frustrating, as it broke my game in production for all users on Mac. Additionally, there doesn’t appear to be a clear way to work around this without temporarily changing the CameraType to Scriptable, which has all sorts of other side effects.
Is there a clear solution for this, or what is the intended way to work around this behaviour? Thanks chief.
What is the benefit of processing input after the Heartbeat? Heartbeat is the physics simulation and is the last thing that runs in the frame, so wouldn’t new input at the start of the next frame have more value? And if the concern is input latency, surely unlocking the FPS from 60 (or at least increasing the maximum) would help more, no?
I’m aware that some developers tie systems to the RenderStepped update without using the delta time it passes, so those systems vary with frame rate. Since the change affects inputs before Heartbeat, developers would also have to address this if the FPS were unlocked. I believe unlocking the FPS, and allowing the user to set their own limit, no limit, or a V-Sync frame rate, would have more benefits with the same need to restructure RenderStep/RenderStepped code (and the modern versions of those events).
No worries chief, I’ve already resolved this for now via a hacky method of manually controlling the Camera CFrame and disconnecting from the Humanoid, but I would like to determine a more permanent path to resolving this issue. Happy to send over some code to help you reproduce it, if need be.
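For reference, a rough, hypothetical sketch of the kind of workaround described above (assuming the camera is detached from the Humanoid by setting CameraType to Scriptable and its CFrame is driven manually each frame; the helper names are illustrative, not from the original post):

```lua
-- LocalScript: hacky workaround sketch. Detach the camera from the
-- Humanoid and drive its CFrame manually, so MouseBehavior can stay
-- Default while a GUI is open.
local RunService = game:GetService("RunService")
local Players = game:GetService("Players")

local camera = workspace.CurrentCamera
camera.CameraType = Enum.CameraType.Scriptable

RunService.RenderStepped:Connect(function()
	local character = Players.LocalPlayer.Character
	local head = character and character:FindFirstChild("Head")
	if head then
		-- Follow the head position; keep the current look rotation.
		camera.CFrame = CFrame.new(head.Position) * camera.CFrame.Rotation
	end
end)
```

As noted above, Scriptable has its own side effects, which is why this is only a stopgap.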
Less latency sounds good to me! One Q: is there, or will there be, a way I can test this change for my experience before it comes to PC? My experience is currently PC-only.
About 2 years ago, an engineer tweeted out a diagram of frame ordering. Just to clarify: would the block representing PreRender now happen after PostSimulation and before Outbound Replication? I’m trying to understand where PreRender fits into the frame ordering now, or where the inputs fit into the diagram provided on the Task Scheduler page.
Most of my work is done through UserInputService (UIS)/ContextActionService (CAS) signals alone; when input is processed from those services, I then run other code. If we rely on any continuous action (such as a button being held down), our input manager will only signal to that continuous action to either start or stop; the continuous action itself will not check for input. Would this case qualify as needing to take action?
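A hypothetical sketch of the pattern described above (the action name, key binding, and function names are illustrative; input signals only start or stop a continuous action, and the action loop itself never reads input):

```lua
local ContextActionService = game:GetService("ContextActionService")
local RunService = game:GetService("RunService")

local connection = nil

local function startAction()
	if connection then return end
	connection = RunService.Heartbeat:Connect(function(dt)
		-- Continuous per-frame work goes here; no input checks.
	end)
end

local function stopAction()
	if connection then
		connection:Disconnect()
		connection = nil
	end
end

-- The input manager only signals start/stop based on input state.
ContextActionService:BindAction("Sprint", function(_, inputState)
	if inputState == Enum.UserInputState.Begin then
		startAction()
	elseif inputState == Enum.UserInputState.End then
		stopAction()
	end
	return Enum.ContextActionResult.Pass
end, false, Enum.KeyCode.LeftShift)
```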
That’s correct regarding the new position of PreRender; we will update the documentation. I don’t think you need to take action, but to be sure, you only need to try your code on one of the platforms where this change is active.
The best first step is to see whether you have any problems on one of the platforms where this change is active. If necessary, I can temporarily disable this change for your places.