You can write your topic however you want, but you need to answer these questions:
- What do you want to achieve?
I want to change the aim when the player moves the mouse (with the mouse locked to the center of the screen) or drags on a mobile device:
```lua
local uis = game:GetService("UserInputService")

uis.InputChanged:Connect(function(input, processed)
	if processed or not aiming then
		return
	end
	if input.UserInputType == Enum.UserInputType.MouseMovement
		or input.UserInputType == Enum.UserInputType.Touch then
		crosshairx += input.Delta.X / 1000 -- insert magic number here
		crosshairy += input.Delta.Y / 1000 -- insert magic number here
		print(crosshairx, crosshairy)
	end
end)
```

`crosshairx` and `crosshairy` are later sent to the server, which in turn calculates the correct aim vector.
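For reference, sending the values looks roughly like this. This is only a sketch: the `AimEvent` RemoteEvent and the `[-1, 1]` clamp range are placeholders, not part of my actual code.

```lua
local ReplicatedStorage = game:GetService("ReplicatedStorage")

-- Hypothetical RemoteEvent; the real name and location don't matter here.
local aimEvent = ReplicatedStorage:WaitForChild("AimEvent")

local function sendAim()
	-- Arbitrary placeholder range; the real bounds depend on the magic number above.
	local x = math.clamp(crosshairx, -1, 1)
	local y = math.clamp(crosshairy, -1, 1)
	aimEvent:FireServer(x, y)
end
```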
- What is the issue?
The way everyone else calculates screen size (reading the camera's viewport size, or creating an invisible frame that covers the entire screen) seems inappropriate for the task at hand.
What if the user has OS-level UI scaling? Then the game will render (and presumably report a camera viewport size) at a different resolution than the UI (and presumably UserInputService) thinks it has.
Even if these always match, it feels like bad practice to use a rendering object (the camera) for input detection, and pulling in UI for it feels like overkill.
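For concreteness, the camera-based pattern I mean looks roughly like this (a sketch only; `input` stands for the InputObject from `InputChanged`):

```lua
local camera = workspace.CurrentCamera
local viewportSize = camera.ViewportSize -- Vector2 in pixels, as reported by the renderer

-- Dividing the delta by the viewport instead of the /1000 magic number.
-- This is exactly the coupling between rendering and input that bothers me.
local dx = input.Delta.X / viewportSize.X
local dy = input.Delta.Y / viewportSize.Y
```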
- What solutions have you tried so far?
Did you look for solutions on the Developer Hub? Yes, I discussed them in the issue section above. No one seems to use a solution that is actually intended for this (except for something in the Mouse API, but what about touchscreens?).
Also, ChatGPT told me that the mouse position is always the same as the screen size, which I thought was funny.