If you cast a ray and the origin is inside of a part, that part will be completely ignored even if it’s not in the ignore list. Not sure if it’s intended. Super annoying either way. The part isn’t in the ignore list; it should not be ignored.
I like this feature though.
Interesting, but it’s probably the surfaces that register touch events. I can’t see a use case for a touch connection from within a part.
I can see cases where this would be annoying and useful at the same time.
I think that if the ray origin is inside a part it should be detected, and then you can just add it to the ignore list if you don’t want it to be detected.
[quote] I can see cases where this would be annoying and useful at the same time.
I think that if the ray origin is inside a part it should be detected, and then you can just add it to the ignore list if you don’t want it to be detected. [/quote]
An “IncludeEncompassingParts” bool argument would be nice. This behavior of ignoring encompassing parts is probably the norm by now, so an argument like this that can easily toggle the behavior is probably the best way to go.
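To make the suggested toggle concrete, here is a minimal sketch of how such a flag could behave for a single axis-aligned box, using the standard slab test. The function, its parameters, and the `include_encompassing` flag are illustrative only (mirroring the suggested “IncludeEncompassingParts” argument), not a real Roblox API:

```python
# Hypothetical sketch: slab-test ray vs. axis-aligned box, with an opt-in
# flag for reporting the box the origin sits inside. Not a real Roblox API.

def ray_vs_aabb(origin, direction, box_min, box_max, include_encompassing=False):
    """Return the hit distance along the ray, or None for a miss."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return None  # parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    if t_near > t_far or t_far < 0.0:
        return None  # ray misses the box entirely
    if t_near < 0.0:
        # Origin is inside the box, so the entry face is behind the origin.
        # Reporting the exit face only when the caller opts in mirrors the
        # toggle suggested above; the default matches current behavior.
        return t_far if include_encompassing else None
    return t_near  # normal case: distance to the entry face

# Origin inside a unit box: ignored by default, reported when opted in.
print(ray_vs_aabb((0.5, 0.5, 0.5), (1, 0, 0), (0, 0, 0), (1, 1, 1)))        # None
print(ray_vs_aabb((0.5, 0.5, 0.5), (1, 0, 0), (0, 0, 0), (1, 1, 1), True))  # 0.5
print(ray_vs_aabb((-1, 0.5, 0.5), (1, 0, 0), (0, 0, 0), (1, 1, 1)))         # 1.0
```

Defaulting the flag to off would keep every existing script working unchanged.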
From what I understand involving rays…
Like Rip said, they detect surfaces. And (to save a shitton of memory) surfaces only have one side. (Place your camera inside a mesh. You can’t see the back side of faces!) When I used to do some work for a company in GL, I would create a face by specifying the 3 points of a triangle. Depending on whether you went clockwise or counter-clockwise, the surface would face a different direction.
So in order to detect collisions “inside” a part, you’d essentially need 2x the faces, from what I know. If you’re asking for the rays to detect whether they are within the region bounded by the part, then my best guess is that would take even more processing power. I think the way they currently do it is correct.
And what do you wish/expect to occur if the ray is within two or more overlapping parts?
The same thing that occurs when a ray hits two overlapping faces. It chooses one.
[quote] From what I understand involving rays…
Like Rip said, they detect surfaces. And (to save a shitton of memory) surfaces only have one side. (Place your camera inside a mesh. You can’t see the back side of faces!) When I used to do some work for a company in GL, I would create a face by specifying the 3 points of a triangle. Depending on whether you went clockwise or counter-clockwise, the surface would face a different direction.
So in order to detect collisions “inside” a part, you’d essentially need 2x the faces, from what I know. If you’re asking for the rays to detect whether they are within the region bounded by the part, then my best guess is that would take even more processing power. I think the way they currently do it is correct. [/quote]
Raycasting is not the same as rendering. AFAIK it’s done (at least in Roblox) against the physics-side representation of objects, not the graphical one. And even with rendering you don’t need 2x the faces to render both sides; you just need to tell the GPU to do so.
Raycasts detecting both entering and exiting parts doesn’t seem that useful and would probably be less efficient…
Detecting the part the ray is fired from within probably wouldn’t be too bad, though. But I can see it being problematic if the raycaster only has knowledge of surfaces and not volumes.
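For what it’s worth, finding which parts contain the origin is a per-axis interval check on volumes rather than a surface test, so it needs no extra faces. A hedged sketch for axis-aligned boxes (the part list and box representation here are purely illustrative, not how Roblox stores parts):

```python
# Illustrative sketch: a point-in-box volume test. Parts are modeled as
# axis-aligned (min, max) corner pairs; Roblox's real representation differs.

def point_in_aabb(point, box_min, box_max):
    """True if the point lies inside or on the box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

# Parts the origin is inside could simply be appended to an ignore list
# (or returned as an extra result) before the surface raycast runs.
parts = {
    "floor": ((0, -1, 0), (10, 0, 10)),
    "room": ((0, 0, 0), (5, 5, 5)),
}
origin = (1.0, 2.0, 1.0)
encompassing = [name for name, (lo, hi) in parts.items()
                if point_in_aabb(origin, lo, hi)]
print(encompassing)  # ['room']
```

For rotated or non-box parts the check gets more involved, which is presumably where the surfaces-versus-volumes concern comes in.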
Yes, but the GPU isn’t magic. “Telling the GPU to render both sides” doesn’t just happen for free. Rendering both sides means it takes all the faces and renders them again with the winding reversed, so it’s basically doing 2x the work.
This is a bad idea; it would break nearly all existing raycasting scripts.