could another argument be added to touch events that gives us the point of collision between two parts? i don't think i've used touch events in anything for over a year and a half, but this would definitely get me to use them more
There can be multiple points of collision… what would you do in this case?
fire Touched multiple times
Wouldn’t that cause issues if perfectly flat surfaces touched with each other?
Assume we have two parts, A and B. The edge of A touches a surface of B. B's Touched event fires for every point on A that is touching it, and there's an infinite number of points on that edge.
Not to mention this changes the functionality of the touched event which is not a good idea.
Roblox only knows the concept of point contacts. Edge and surface contacts aren’t a thing.
I’m not saying this is a good idea, but that’s how it would be implemented
I’m with you here. I think this is very much possible, and even when two surfaces collide head-on, it isn’t super likely to cause a problem (torques and such have to be computed assuming the collision happens at one point).
As for the idea, I don’t have any projects at a standstill for lack of this feature, so it wouldn’t be high priority for me, but I couldn’t say it’s a bad idea.
This is an alternative Custom Touched Event script I wrote utilising EgoMoose's Rotated Region3 module. It fires no matter the state of the part (anchored, CFramed, etc.) and works pretty well, although grabbing the intersection points isn't always perfect.
Documentation is at the top, example usage is at the bottom. Most of the credit goes to EgoMoose for the Rotated Region3 functions; an explanation of how they work can be found at http://wiki.roblox.com/index.php?title=User:EgoMoose/Articles/Rotated_region3.
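To give an idea of the approach: a custom touched event like this typically polls a rotated region matching the part every frame and compares the result against the previous frame. The sketch below is only illustrative; `RotatedRegion3.new` and `Cast` are stand-ins and may not match the exact API of EgoMoose's module.

```lua
-- Rough sketch of a polled custom Touched event. The RotatedRegion3
-- constructor and Cast method here are hypothetical placeholders.
local RunService = game:GetService("RunService")
local RotatedRegion3 = require(script.RotatedRegion3) -- hypothetical path

local part = workspace.Detector
local lastTouching = {}

RunService.Heartbeat:Connect(function()
	-- rebuild the region from the part's current CFrame and size,
	-- so it works even if the part is anchored or CFramed around
	local region = RotatedRegion3.new(part.CFrame, part.Size)
	local touching = region:Cast({part}) -- parts intersecting the region

	local seen = {}
	for _, hit in ipairs(touching) do
		seen[hit] = true
		if not lastTouching[hit] then
			-- only fires on new contacts, mimicking Touched
			print("custom Touched:", hit.Name)
		end
	end
	lastTouching = seen
end)
```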
Not just that, but if we could also get the full vector of the physics collision (the strength and direction of the impact), that'd be the best thing since sliced bread.
return a surface normal or array of points?
i talked to ego about implementing this with his Rotated Region3 module, but what i'd like to see is a C-side implementation so it's less costly on script performance. could also be extremely beneficial to new devs without the resources to find and use things like ego's modules (i know if i had something like this around 2013 i would've peed myself from excitement)
If I remember correctly, one of the main arguments against this in the past has been efficiency. I believe it takes some extra steps to find the exact details about a collision, so for any use case that doesn't need this information, the added argument would just be an unnecessary slowdown. There is also more information you might want to request, such as surface normals of contact points, so a single extra argument isn't a very forward-compatible solution. A better one might be an object like "CollisionInfo" passed as a second argument instead, with methods for retrieving individual pieces of info. That way information is calculated only as needed, and the API doesn't get bloated when someone decides more information is required.
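To make the proposal concrete, here's a sketch of what a CollisionInfo argument might look like from a script. None of these members exist in the Roblox API today; the names are purely hypothetical.

```lua
-- Hypothetical CollisionInfo API sketch; not a real Roblox feature.
part.Touched:Connect(function(otherPart, collisionInfo)
	-- each getter would compute its result lazily, so scripts that
	-- ignore collisionInfo pay no extra cost
	local points  = collisionInfo:GetContactPoints() -- array of Vector3s
	local normal  = collisionInfo:GetSurfaceNormal() -- Vector3
	local impulse = collisionInfo:GetImpulse()       -- strength + direction

	print(("hit %s at %d contact point(s)"):format(otherPart.Name, #points))
end)
```

The key design point is the lazy evaluation: existing Touched connections stay exactly as fast as today, and new fields can be added to CollisionInfo later without changing the event's signature.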
i like the collisioninfo idea!! i've seen a couple of demos, maybe around 2013 when articulated physics were being added (before it was scrapped), where collision points were rendered in some kind of debugging mode (which is why i think it's fairly easily implementable, but i could be wrong)
do events fire without any listeners/connected functions? maybe we could have the normal touched event and also a TouchedCollisionInfo event or something along those lines, so the collision point calculation would only be done when devs need it?
I’m not too sure exactly how Roblox’s engine architecture is set up internally, but I don’t think adding a second event would work well with it. The physics engine would need to know about how many listeners there are for each TouchedCollisionInfo event which goes against the idea of separating responsibilities of systems and it would cause some API bloat. Adding a CollisionInfo argument to the Touched event callback functions wouldn’t complicate the API by very much and still allows for efficiency by waiting to perform expensive calculations until they are explicitly requested.
Yeah, the debug info could probably be used, or at least the method used to obtain the debug info. It might not be optimized for general purpose use though.
This would be a convenient feature, right now the best way I know to find the collision point is with messy raycasting.
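For context, the raycasting workaround usually looks something like this: when Touched fires, cast a ray from our part toward the other part and treat the hit position as the contact point. This is a minimal sketch using the newer `workspace:Raycast` API, and the result is only an approximation.

```lua
-- Approximate the collision point by raycasting toward the other part.
local part = workspace.Ball

part.Touched:Connect(function(otherPart)
	local direction = otherPart.Position - part.Position

	local params = RaycastParams.new()
	params.FilterDescendantsInstances = {part}
	params.FilterType = Enum.RaycastFilterType.Exclude

	local result = workspace:Raycast(part.Position, direction, params)
	if result then
		-- result.Position approximates the contact point;
		-- result.Normal gives the surface normal at that point
		print("approx collision point:", result.Position)
	end
end)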
I’d definitely love to see this.
This is kind of necessary for physics based gameplay. Either that, or make Touched more responsive. If I anchor a part in the touched event it should be anchored, not anchored in “oh a second or so”.
unresponsiveness comes from network latency, not any roblox internals
using touched events in any kind of server-side script will always result in some delay; there's nothing roblox can do to fix that
the only way to circumvent the delay is to handle touch events locally
let me elaborate: when you're the network owner of a part, the physics for that part are calculated on your computer rather than on the server. whenever a part you own moves (this includes your own character), there's delay from that physics data having to travel over the internet to the server. when a collision happens on the client, the server doesn't detect it until a short time after you perceive it happening. and because the touched event is handled on the server, it doesn't fire until that collision is detected, causing the delay
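The workaround described above can be sketched like this: detect the touch in a LocalScript (where physics for parts you own runs with no latency) and report it to the server. `HitEvent` here is a hypothetical RemoteEvent, not something built in.

```lua
-- LocalScript (e.g. in StarterPlayerScripts): clientside touch detection.
-- "HitEvent" is a hypothetical RemoteEvent placed in ReplicatedStorage.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local hitEvent = ReplicatedStorage:WaitForChild("HitEvent")

local projectile = workspace:WaitForChild("Projectile")

projectile.Touched:Connect(function(otherPart)
	-- fires immediately because physics for parts we own runs locally;
	-- tell the server what we hit
	hitEvent:FireServer(otherPart)
end)
```

Since exploiters can fire RemoteEvents with arbitrary arguments, the server should sanity-check anything reported this way (distance checks, cooldowns, etc.) before acting on it.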
I found this out by digging through the dev forum for a bit and moved a lot of my projectile hit detection code clientside, but not all developers have access to this forum. I feel like this behaviour should be documented on the Touched event's Wiki page.