I am creating a system to detect objects in front of or behind another object/part. Essentially, if it detects a part within a certain range in front of the object, that part is selected; the same applies behind it.
Making this has created a few problems though. To keep things simple, I will refer to the main object as “object” and the closest part I am searching for as “part”.
The main question is, how can I get all of the parts in a certain area in front of the object?
I have considered this post talking about the Spatial Query API; however, I haven't done much with implementation. I have also come across several posts like this one with solutions for finding the closest part to an object by looping through all of the parts.
With some effort and research, I could probably make these two solutions work together; however, looping through potentially thousands of parts in a specific area might not be the most efficient approach.
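For reference, here is a minimal sketch of what the Spatial Query approach could look like. This assumes `object` is a BasePart, and that `range` is a value you choose; `GetPartBoundsInBox` returns every BasePart whose bounding box overlaps the given box region.

```lua
local range = 20 -- assumed detection distance; tune to taste

-- Build a box region centered "range / 2" studs in front of the object
-- (negative Z is the forward/LookVector direction in a CFrame).
local regionCFrame = object.CFrame * CFrame.new(0, 0, -range / 2)
local regionSize = Vector3.new(object.Size.X, object.Size.Y, range)

local overlapParams = OverlapParams.new()
overlapParams.FilterType = Enum.RaycastFilterType.Exclude
overlapParams.FilterDescendantsInstances = { object } -- don't detect ourselves

local partsInFront = workspace:GetPartBoundsInBox(regionCFrame, regionSize, overlapParams)
for _, part in partsInFront do
	print(part.Name)
end
```

From there you would still loop over `partsInFront` to find the closest one, but only over the handful of parts inside the region rather than every part in the workspace.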
This also brings up a second issue: all of the parts' positions are determined by their centers, not their closest edges.
To counter this, I thought about raycasting, but I haven't found any resources on raycasting within a certain radius for a certain distance (essentially creating an area sensor). This would be the ideal solution, as it is probably the most efficient and accurate, but I am unsure how to implement it.
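One way to get a "raycast with a radius" is `workspace:Spherecast`, which sweeps a sphere along a direction. A minimal sketch, assuming `object` is a BasePart and `radius`/`distance` are values you choose:

```lua
local radius = 5    -- assumed sensor thickness
local distance = 25 -- assumed sensor reach

local raycastParams = RaycastParams.new()
raycastParams.FilterType = Enum.RaycastFilterType.Exclude
raycastParams.FilterDescendantsInstances = { object } -- ignore the object itself

-- Sweep a sphere of the given radius from the object's position
-- along its facing direction.
local result = workspace:Spherecast(
	object.Position,
	radius,
	object.CFrame.LookVector * distance,
	raycastParams
)

if result then
	-- result.Position is a point on the hit part's surface, so this
	-- measures distance to the part's edge rather than its center.
	print(result.Instance, (result.Position - object.Position).Magnitude)
end
```

This also sidesteps the center-vs-edge issue above, since the `RaycastResult` reports the actual surface hit point.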
Any help would be greatly appreciated. If you need anything else clarified please ask!
This worked great; the only problem is that it also checks below and above the object. I only want it to detect parts the object might actually hit. Any ideas on a way around this? Possibly the same thing, but with a cylindrical shape instead of a sphere?
Yeah, that's probably the best course of action. I'm unsure whether it's super performant, though, so you might have to ask someone who knows whether it's a good option.
I have done a bit more research and found that Blockcasting works pretty well. It has to be done with a sensor part that I have attached to the object:
local cast = workspace:Blockcast(
	sensor.CFrame,                       -- where the swept box starts
	sensor.Size,                         -- dimensions of the swept box
	sensor.CFrame.LookVector * distance, -- direction and length of the sweep
	params                               -- RaycastParams (filtering, etc.)
)

if cast then
	-- cast is a RaycastResult; Instance is the first part the box hit
	print(cast.Instance)
else
	-- Nothing detected
end
Wrap that in a RunService loop and it works perfectly! Thank you for your help; I likely wouldn't have found Spherecasting, Blockcasting, or Shapecasting without your initial reply!
(Note: the sensor must be facing the direction you want to sense. What Blockcast does is sweep a box with the given sensor's CFrame and Size along the specified direction; multiplying the unit direction by your desired distance sets how far the box travels. This is almost exactly what I was looking for, as it lets you detect parts in a personalized area: a custom height, width, and visual start position.)
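The "RunService loop" mentioned above could look like the following sketch, assuming `sensor`, `distance`, and `params` are defined as in the earlier snippet:

```lua
local RunService = game:GetService("RunService")

-- Re-run the blockcast every simulation step (Heartbeat fires after physics).
RunService.Heartbeat:Connect(function()
	local cast = workspace:Blockcast(
		sensor.CFrame,
		sensor.Size,
		sensor.CFrame.LookVector * distance,
		params
	)

	if cast then
		print("Detected:", cast.Instance)
	end
end)
```

Depending on how often the selection actually needs to update, a slower loop (e.g. `task.wait(0.1)` in a `while` loop) may be cheaper than casting every frame.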