I am creating a shooting range for my game that has FPS weaponry, and I'm trying to make an image of the target board that you shoot at, and represent the point you hit with an image of a bullet hole.
I have had no issues detecting when the target board has been shot, but translating the 3D hit position into a 2D position on the image has been quite tricky: I haven't been able to get it to map accurately, and in some cases it doesn't move at all.
I'm thinking you can get the relative position in 3D space by subtracting the target's position (its center) from the hit position, then normalize it by dividing each component of that vector by half of the target's size along the respective axis.
Then get the 2D target image's absolute position and, using the X and Y of the normalized position, do some math to get the right 2D position.
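A minimal sketch of that idea, assuming the target board is a Part with a SurfaceGui on its Front face and that `hitPosition` comes from a raycast (the function and variable names here are hypothetical). The sign flips depend on which face the SurfaceGui is mounted on, so treat them as a starting point:

```lua
local function worldHitToGuiPosition(targetPart, hitPosition)
	-- Express the hit as an offset from the part's center, in the part's local space
	local relative = targetPart.CFrame:PointToObjectSpace(hitPosition)

	-- Normalize each axis to the range [-1, 1] by dividing by half the part's size
	local nx = relative.X / (targetPart.Size.X / 2)
	local ny = relative.Y / (targetPart.Size.Y / 2)

	-- Convert to UDim2 scale, where (0, 0) is the top-left of the SurfaceGui.
	-- Viewed from the Front face, local +X appears on the viewer's left and
	-- GUI Y grows downward, hence the (1 - n) / 2 remapping on both axes.
	return UDim2.fromScale((1 - nx) / 2, (1 - ny) / 2)
end
```

You could then parent a bullet-hole ImageLabel to the SurfaceGui and set its Position to the returned UDim2, which keeps the hole attached to the board itself rather than to the screen.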
WorldToScreenPoint accounts for GUI inset, but WorldToViewportPoint does not. If your ScreenGui has IgnoreGuiInset enabled, then you should use WorldToViewportPoint. If not, use WorldToScreenPoint.
When you call either function, you pass in the 3D position as a Vector3. It returns a tuple: a Vector3 and a boolean. The Vector3's X and Y components represent the 2D position, while the Z component represents the depth (the distance from the camera). The boolean tells you whether the point is within the bounds of the screen; it does not check whether the point is occluded by a wall. If you want to retrieve the 2D position from this, you can do:
local pos, isVisible = workspace.CurrentCamera:WorldToScreenPoint(someVector3)
local pos2D = Vector2.new(pos.X, pos.Y)
Since you’re looking to use this for UI purposes, you can instead convert it to a UDim2:
local pos, isVisible = workspace.CurrentCamera:WorldToScreenPoint(someVector3)
local pos2D = UDim2.fromOffset(pos.X, pos.Y)
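Putting it together, here's a hedged example of how you might place a bullet-hole image at the hit point. The ImageLabel properties and the `screenGui`/`hitPosition` parameters are assumptions for illustration; this assumes a ScreenGui with IgnoreGuiInset disabled, which is why WorldToScreenPoint is used:

```lua
local camera = workspace.CurrentCamera

local function placeBulletHole(screenGui, hitPosition)
	local pos, isVisible = camera:WorldToScreenPoint(hitPosition)
	if not isVisible then
		return -- the hit point is off-screen, so there's nothing to draw
	end

	local hole = Instance.new("ImageLabel")
	hole.Image = "rbxassetid://0" -- replace with your bullet-hole asset id
	hole.Size = UDim2.fromOffset(16, 16)
	hole.AnchorPoint = Vector2.new(0.5, 0.5) -- center the image on the point
	hole.BackgroundTransparency = 1
	hole.Position = UDim2.fromOffset(pos.X, pos.Y)
	hole.Parent = screenGui
end
```

Note that a ScreenGui position is relative to the camera, so the hole won't stay glued to the board as the camera moves; if you need that, put the image on a SurfaceGui attached to the target instead and map the hit into the part's local space.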