For anyone else who runs into problems using :WorldToViewportPoint() with ViewportFrames and their cameras, here’s what I found:
TL;DR
The Roblox API method :WorldToViewportPoint() behaves differently when called on a manually created camera than when called on the default camera (workspace.CurrentCamera).
When called on a created camera (not the default camera):
- The X and Y values of the Vector3 returned by WorldToViewportPoint should be treated like UDim2 scale values relative to the AbsoluteSize of the ViewportFrame. This is how you calculate the 2D position of a 3D object inside of a ViewportFrame (see the sketch after this list), where vpFrame is a ViewportFrame and ScreenPosition is the Vector3 returned by WorldToViewportPoint.
- If the ScreenGui you’re using has IgnoreGuiInset set to true, you need to manually add a piece of code to account for the gui inset (the last few lines of the sketch after this list show this).
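Here’s a rough sketch of the whole calculation; vpFrame, vpCamera, worldPoint and label are placeholder names for your own instances, and the parenting/offset details will depend on your setup:

```lua
local GuiService = game:GetService("GuiService")

-- Placeholders: vpFrame (ViewportFrame), vpCamera (the camera assigned to
-- vpFrame.CurrentCamera), worldPoint (Vector3 world position), label (GuiObject).
local screenPosition = vpCamera:WorldToViewportPoint(worldPoint)

-- A created camera returns X/Y as 0-1 scale values, so convert them to
-- pixel offsets using the ViewportFrame's AbsoluteSize.
local x = screenPosition.X * vpFrame.AbsoluteSize.X
local y = screenPosition.Y * vpFrame.AbsoluteSize.Y

-- If the ScreenGui has IgnoreGuiInset set to true, add the top inset back in.
local topInset = GuiService:GetGuiInset()
y += topInset.Y

label.Position = UDim2.fromOffset(x, y)
```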
From the solution to this post, I learned that the 2D point you get from calling :WorldToViewportPoint on a created camera uses decimal values, numbers that are more fitting for scale than for offset.
I then asked why this is.
From this post, I understood that the read-only ViewportSize property of the default camera is the resolution of the player’s screen, while for a created camera it is 1 by 1. The method was returning a 2D position relative to the camera’s ViewportSize.
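As a quick illustration of that difference (a sketch; the printed numbers will vary with your screen):

```lua
-- The default camera's ViewportSize matches the player's screen resolution.
print(workspace.CurrentCamera.ViewportSize) --> e.g. 1920, 1080

-- A created camera that isn't the CurrentCamera reports a 1x1 ViewportSize
-- (per the post linked above).
local createdCamera = Instance.new("Camera")
print(createdCamera.ViewportSize) --> 1, 1
```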
With all this in mind, when calling the :WorldToViewportPoint() method on a camera under a ViewportFrame, I had to calculate the 2D position using the returned value of the method (ScreenPosition) as a scale value.
At this point, the gui objects representing the 3D points in the ViewportFrame, converted into 2D points that could be represented on the player’s screen, were, well, actually on the player’s screen. However, the points were slightly off, as shown in this image:
Note that the ScreenGui the gui objects are parented to has IgnoreGuiInset set to true. I noticed that, with my code as it was, if I set IgnoreGuiInset to false, the points would align as I intended, as shown in the image below:

However, I need IgnoreGuiInset to be set to true, so I added this block of code as a workaround:
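Roughly, the workaround just adds the top gui inset back onto the computed Y position (a sketch; screenPosition and vpFrame are the same placeholders as in the sketch under the TL;DR):

```lua
local GuiService = game:GetService("GuiService")

-- With IgnoreGuiInset = true, the gui inset has to be added back in manually.
local topInset = GuiService:GetGuiInset()
local y = screenPosition.Y * vpFrame.AbsoluteSize.Y + topInset.Y
```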
I think this is expected behavior for the method, though, because the documentation for WorldToViewportPoint says that this method does not take the gui inset into account. For some reason, when calling this method on viewport cameras, the gui inset always needs to be taken into account, meaning that if it is ignored via IgnoreGuiInset set to true, you have to add it back in, as seen in the block of code above.
API documentation really needs to get updated 