I’m trying to check whether a part can be directly seen by a player. I thought it would be simple enough, but I’ve run into a few issues.
Currently I loop through a list of objects, cast a ray from the camera’s position to each object, and check whether the instance the ray hits is the object I cast it at. If it isn’t, I know something is obstructing the player’s view of that object.
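For reference, here’s a minimal Luau sketch of that centre-point check, assuming `part` is any `BasePart` from your list:

```lua
local camera = workspace.CurrentCamera

local function isCentreVisible(part: BasePart): boolean
	local origin = camera.CFrame.Position
	local direction = part.Position - origin
	local result = workspace:Raycast(origin, direction)
	-- Visible only if the first thing the ray hits is the part itself
	return result ~= nil and result.Instance == part
end
```

This is exactly the single-ray approach described above, which is why it fails when the centre point is occluded.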
The problem with this approach is that it’s too imprecise. The raycast goes from the camera to the object’s position, which is its centre point. If an object is 70% obscured by another object, 30% of it is still visible, but because the centre is obscured, it registers as not visible to the player.
I’ve looked at WorldToViewportPoint and WorldToScreenPoint, but they both take a single position, which presents the same problem.
Does anyone have any idea how I could go about fixing this?
I don’t think that would work. It would give some more coverage, but if the corners of the object are covered while its faces are still visible, it would still return incorrect results. I’d still need to cast rays; I couldn’t rely on WorldToScreenPoint by itself, because that method doesn’t take obstruction into account.
I think that function would have the same problem, since you still need to provide cast points. It’s apparently better optimised, but even if I fired rays at 1-stud intervals to increase coverage, that would be a big drain on performance.
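Assuming the function being discussed here is Camera:GetPartsObscuringTarget (an assumption on my part), a small sketch of how it takes cast points:

```lua
local camera = workspace.CurrentCamera

-- Returns true if anything sits between the camera and the given point.
-- `ignoreList` holds instances to exclude, e.g. the local character.
local function isPointObscured(point: Vector3, ignoreList: {Instance}): boolean
	local obscuring = camera:GetPartsObscuringTarget({point}, ignoreList)
	return #obscuring > 0
end
```

As noted above, it’s still only as accurate as the cast points you feed it.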
Good point. I think we’ll need a workaround, since from what I can see there isn’t another suitable function in the camera API documentation.
I believe you can use the XY coordinates returned by WorldToScreenPoint for each of the object’s corners to build a 2D shape on the screen.
Then, if that 2D shape isn’t entirely off screen, the object should be on screen and seeable.
That’s my idea; perhaps someone else has a more mathematically sound one. Perhaps 3D-to-2D projection math would work here?
Edit: Hmm, when in doubt, look at other game engines. It seems Unity has many more methods than just a world-to-screen-point equivalent. I wonder if it’s possible to recreate those Unity functions in Roblox?
It seems we need a workaround, since I believe we can’t access the camera’s rendering directly.
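A sketch of the corner-projection idea above in Luau: project all eight corners of a part’s bounding box with WorldToScreenPoint and check whether any corner lands on screen. Note this only answers “is it on screen?”, not “is it occluded?”:

```lua
local camera = workspace.CurrentCamera

-- World positions of the eight corners of a part's bounding box
local function getCorners(part: BasePart): {Vector3}
	local cf, half = part.CFrame, part.Size / 2
	local corners = {}
	for _, x in {-1, 1} do
		for _, y in {-1, 1} do
			for _, z in {-1, 1} do
				table.insert(corners,
					(cf * CFrame.new(half.X * x, half.Y * y, half.Z * z)).Position)
			end
		end
	end
	return corners
end

local function isOnScreen(part: BasePart): boolean
	for _, corner in getCorners(part) do
		local screenPoint, inViewport = camera:WorldToScreenPoint(corner)
		-- Z > 0 means the corner is in front of the camera
		if inViewport and screenPoint.Z > 0 then
			return true
		end
	end
	return false
end
```

A fuller version would build the 2D convex hull of the projected corners and intersect it with the viewport rectangle, but the any-corner check is a cheap first pass.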
I would cast a ray between the camera and the player’s HumanoidRootPart, so the ray points in the direction the camera is facing. You can detect a hit on this ray, and if the hit is an instance in your table, you can assume that part is blocking your view.
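A minimal sketch of that suggestion, assuming `blockers` is the table of parts you care about (run from a LocalScript):

```lua
local Players = game:GetService("Players")
local camera = workspace.CurrentCamera

-- Returns the part from `blockers` sitting between the camera and the
-- local player's HumanoidRootPart, or nil if the view is clear.
local function getBlockingPart(blockers: {BasePart}): BasePart?
	local character = Players.LocalPlayer.Character
	local root = character and character:FindFirstChild("HumanoidRootPart")
	if not root then
		return nil
	end

	local params = RaycastParams.new()
	params.FilterType = Enum.RaycastFilterType.Exclude
	params.FilterDescendantsInstances = {character}

	local origin = camera.CFrame.Position
	local result = workspace:Raycast(origin, root.Position - origin, params)
	if result and table.find(blockers, result.Instance) then
		return result.Instance
	end
	return nil
end
```

This answers a slightly different question (what blocks the camera’s view of the character), but the same single-ray limitation from the original post applies.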
Perhaps a workaround would be to use WorldToScreenPoint, as you said, to map a 2D version of the object, then cast a series of rays from the camera position, spread at incrementing angles based on the player’s FOV, to determine the Z depth of objects and finally work out which objects don’t fully overlap, to calculate whether an object is visible.
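A cheaper approximation of the occlusion half of that idea, without a full FOV-spread ray fan: cast one ray at each bounding-box corner (plus the centre) and call the part visible if any ray reaches it unobstructed. This is a sketch, not a complete solution; it can still miss cases where only the interior of a face is showing:

```lua
local camera = workspace.CurrentCamera

local function isPartVisible(part: BasePart): boolean
	local origin = camera.CFrame.Position
	local half = part.Size / 2
	-- Centre plus the eight corners; pulling corners slightly inward
	-- (e.g. scaling `half` by 0.99) avoids rays grazing past edges.
	local offsets = {Vector3.zero}
	for _, x in {-1, 1} do
		for _, y in {-1, 1} do
			for _, z in {-1, 1} do
				table.insert(offsets, Vector3.new(half.X * x, half.Y * y, half.Z * z))
			end
		end
	end

	for _, offset in offsets do
		local target = (part.CFrame * CFrame.new(offset)).Position
		local result = workspace:Raycast(origin, target - origin)
		if result and result.Instance == part then
			return true
		end
	end
	return false
end
```

Nine rays per part is a reasonable middle ground between the single centre ray and a dense 1-stud grid.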
It seems quite complicated, and it does lead into topics such as rendering and 3D-to-2D projection with frustums. What’s a frustum? I believe it’s this; I’m not sure, as I’m no expert on this topic, but it’s really interesting, judging from this informative website on 3D-to-2D projection.
The frustum’s near plane should be the canvas, which is basically the screen. The above is what I was picturing with creating a 2D object using WorldToScreenPoint to see if an object is on screen; however, it doesn’t tell you whether another object is blocking the view, so that’s an additional problem.
It seems you also have to strike a balance between accuracy/resolution and performance.
So in that post he’s talking about anything not between the near and far clipping planes being automatically culled. It would be cool if I could do that, but for now I’m going to try to put everything people have suggested together into a solution. Thanks for your help.