Hello guys, this is going to be an extremely lengthy post, but I’ll take any input you have. I can’t decide whether I should use proximity prompts or my custom interaction handler in a first-person game with both carriable and interactable objects. I’ve tried both options so far and cannot determine which is better.
Proximity prompt
Pros
Much more seamless implementation
More efficient memory and CPU usage
Cons
Barely any modularity or customization for detection/selection behavior of interactables
Extremely finicky behavior when an object has two proximity prompts (i.e. one for its interactive behavior and another for carrying it), because PromptShown fires twice
My proximity prompts are set to custom visuals and instead display a highlight over the entire model when PromptShown fires. For objects that are both interactable and carriable, PromptShown fires twice because two prompts exist for the same object (one for interaction, one for carrying). This essentially means that the script has to check whether a highlight already exists for that object
Custom interaction mechanic
Pros
Very customizable in terms of how the game should handle selection of an interactable object
I can make selection much stricter (e.g. the object must be much closer to the cursor or center of screen to register)
The interactable registers once instead of twice, because the carry and interaction actions now listen for specific keystrokes rather than relying on prompt behavior
Cons
Very messy. This isn’t much of a con because it doesn’t directly affect gameplay, but the implementation is janky and difficult to read compared to proximity prompts (even though I wrote all of it; maybe I just need to write it more concisely)
Intensive on memory and CPU. My method performs a ton of magnitude calculations and raycasts per frame to determine which interactable objects are suitable for selection. I won’t bore you with a wall of code; instead, here’s a brief explanation of the underlying mechanism:
For every frame:
For each interactable object that exists:
Calculate the distance between the player’s head and the object’s origin using DistanceFromCharacter
If the distance exceeds the given range of the interactable behavior, skip to the next object
Perform a raycast from the player’s head to the object. This raycast excludes the player’s character and the destination object.
If there is a non-nil result, skip to the next object because that implies an obstruction.
Calculate the viewport location of the object and determine whether the object is on the screen with WorldToViewportPoint
If the object is determined to be off-screen (the second value in the tuple returned by the method), skip to the next object
Calculate the distance between the ViewportPoint vector and the center of the screen (which is where the cursor should always be since the game is first-person)
If the distance is greater than a predetermined value, skip to the next object
Add the object to a cache that stores its on-screen distance
After the iteration through all objects, determine the final selection by comparing on-screen distances from the center of the screen.
Create a new highlight or fetch the respective existing highlight from cache for the new selection if it already isn’t currently selected.
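For anyone curious, the per-frame pass described above might look roughly like this in Luau. This is just a sketch: `interactables` (an array of `{ part, range }` entries) and `SELECT_RADIUS` are placeholder names, not part of my actual code, and the highlight caching at the end is omitted.

```lua
local RunService = game:GetService("RunService")
local Players = game:GetService("Players")

local SELECT_RADIUS = 120 -- assumed max on-screen pixel distance from the center

RunService.RenderStepped:Connect(function()
	local player = Players.LocalPlayer
	local character = player.Character
	local head = character and character:FindFirstChild("Head")
	if not head then return end

	local camera = workspace.CurrentCamera
	local screenCenter = camera.ViewportSize / 2
	local best, bestDistance = nil, math.huge

	for _, entry in ipairs(interactables) do
		-- 1) Range check
		if player:DistanceFromCharacter(entry.part.Position) > entry.range then
			continue
		end

		-- 2) Obstruction check: raycast from the head, excluding the
		--    character and the destination object
		local params = RaycastParams.new()
		params.FilterType = Enum.RaycastFilterType.Exclude
		params.FilterDescendantsInstances = { character, entry.part }
		local direction = entry.part.Position - head.Position
		if workspace:Raycast(head.Position, direction, params) then
			continue -- a non-nil result implies an obstruction
		end

		-- 3) On-screen check and distance from the screen center
		local point, onScreen = camera:WorldToViewportPoint(entry.part.Position)
		if not onScreen then continue end
		local fromCenter = (Vector2.new(point.X, point.Y) - screenCenter).Magnitude
		if fromCenter <= SELECT_RADIUS and fromCenter < bestDistance then
			best, bestDistance = entry, fromCenter
		end
	end

	-- `best` is now the final selection to highlight (or nil)
end)
```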
I am very open to possible micro-optimizations for my custom mechanism if you have any. Thank you!
Nicely structured post — you’ve already listed the benefits and disadvantages of each, very well. I understand your dilemma, especially because I’ve had to confront it myself before.
Proximity prompts really are seamless to implement, and because the detection runs in the engine’s internals, it’s borderline impossible to write an equally efficient system. I’d take modularity and customization as the main obstacles, because checking whether a highlight already exists doesn’t seem like something worth stressing over. You could also store a variable indicating whether an object is already being interacted with, so that the second PromptShown handler simply ignores the event.
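To illustrate that last idea, a per-object counter handles the double-fire cleanly (a sketch, assuming the highlight logic lives where the comments are):

```lua
local ProximityPromptService = game:GetService("ProximityPromptService")

local shownCount = {} -- object -> number of its prompts currently shown

ProximityPromptService.PromptShown:Connect(function(prompt)
	local object = prompt.Parent
	shownCount[object] = (shownCount[object] or 0) + 1
	if shownCount[object] == 1 then
		-- First prompt for this object: create/enable the highlight.
		-- A second PromptShown for the same object falls through here.
	end
end)

ProximityPromptService.PromptHidden:Connect(function(prompt)
	local object = prompt.Parent
	shownCount[object] = (shownCount[object] or 1) - 1
	if shownCount[object] <= 0 then
		shownCount[object] = nil
		-- Last prompt hidden: disable/remove the highlight
	end
end)
```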
Nonetheless, your use case indicates a possible need for something custom.
The heaviest on performance are the first 8 points: every frame, a magnitude check, a short-to-medium-length raycast, and WorldToViewportPoint, for every object.
I believe proximity prompts could in fact replace the first 8 points.
MaxActivationDistance → optional
Style → Custom
RequiresLineOfSight → true
PromptShown and PromptHidden do still fire
One proximity prompt per object
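Concretely, the setup above amounts to a few property assignments per object (`object` here is a stand-in for the interactable part, and the MaxActivationDistance value is just an example):

```lua
-- One prompt per object, custom visuals, engine-side line-of-sight filtering
local prompt = Instance.new("ProximityPrompt")
prompt.Style = Enum.ProximityPromptStyle.Custom -- engine renders no default UI
prompt.RequiresLineOfSight = true               -- engine handles the obstruction check
prompt.MaxActivationDistance = 15               -- optional; tune per object
prompt.Parent = object                          -- the interactable BasePart (assumed)
```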
WorldToViewportPoint upon PromptShown and calculating the magnitude from the screen center sound reasonable. Since the game is in first person, an alternative would be taking the camera’s CFrame and comparing it to the object’s CFrame. There’s some trigonometry involved, but it’s probably neither better nor worse.
That’s it: now custom UIs are shown, and input is handled elsewhere.
Modularity? All this can be done with two connections: ProximityPromptService.PromptShown and ProximityPromptService.PromptHidden.
Personally, I’d work off of ProximityPrompts to spare yourself from having to re-invent the wheel and just handle any points of disruption as they come.
One thing I did want to mention though was this part.
If you do end up opting for a custom approach, you can really boost performance by not checking every frame. Do you really need that level of precision? Would the client really notice if you checked every other frame, every third frame, every fourth frame, and so on? Up to the point where they might, consider that even skipping half the frames means you are doing 50% fewer calculations over the player’s time in game compared to checking every frame. Just a point I wanted to raise, since it might affect your thinking on the implementation and performance considerations of a custom approach.
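The frame-skipping idea above is a few lines in practice; the interval of 3 is just an example value:

```lua
local RunService = game:GetService("RunService")

local FRAME_INTERVAL = 3 -- run the selection pass every 3rd frame (assumed)
local frameCount = 0

RunService.RenderStepped:Connect(function()
	frameCount += 1
	if frameCount % FRAME_INTERVAL ~= 0 then
		return -- skip this frame entirely: no raycasts, no projections
	end
	-- ...run the expensive selection pass here...
end)
```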
Instead of doing magnitude checks, you could add each of your custom prompts to a spatial hashing grid, then check which chunks near the player contain prompts. That makes finding nearby prompts O(1) instead of O(n), where n is the number of prompts.
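A minimal sketch of such a grid, assuming static prompt positions (moving parts would need re-bucketing) and a cell size picked to roughly match your interaction range:

```lua
local CELL_SIZE = 32 -- studs per cell (assumed)

local grid = {} -- "x,z" -> array of parts bucketed in that cell

local function cellKey(position: Vector3): string
	return math.floor(position.X / CELL_SIZE) .. "," .. math.floor(position.Z / CELL_SIZE)
end

local function addToGrid(part: BasePart)
	local key = cellKey(part.Position)
	grid[key] = grid[key] or {}
	table.insert(grid[key], part)
end

-- Instead of scanning every interactable, only look at the 3x3 cells
-- around the player; everything farther away is never touched.
local function nearbyParts(playerPosition: Vector3)
	local cx = math.floor(playerPosition.X / CELL_SIZE)
	local cz = math.floor(playerPosition.Z / CELL_SIZE)
	local results = {}
	for dx = -1, 1 do
		for dz = -1, 1 do
			local bucket = grid[(cx + dx) .. "," .. (cz + dz)]
			if bucket then
				for _, part in ipairs(bucket) do
					table.insert(results, part)
				end
			end
		end
	end
	return results
end
```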
Also, you can check whether a prompt is on screen much faster than with WorldToViewportPoint by calculating the angle difference between the camera’s look vector and the vector from the camera to the prompt (for the X and Y axes respectively, not all axes in one dot product). You can then compare against the screen’s FOV (different for the X and Y axes) to see whether it’s on screen. The same concept can be used to find the distance to the center of the screen.
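One way to sketch that angle check (ignoring near/far clipping; Camera.FieldOfView is the vertical FOV in degrees, and the horizontal half-angle is derived from the aspect ratio):

```lua
local camera = workspace.CurrentCamera

local function isOnScreen(worldPosition: Vector3): boolean
	-- Transform into camera space: +X right, +Y up, -Z forward
	local relative = camera.CFrame:PointToObjectSpace(worldPosition)
	if relative.Z >= 0 then
		return false -- behind the camera
	end
	local depth = -relative.Z

	local verticalHalfFov = math.rad(camera.FieldOfView) / 2
	local aspect = camera.ViewportSize.X / camera.ViewportSize.Y
	local horizontalHalfFov = math.atan(math.tan(verticalHalfFov) * aspect)

	-- Angle off the view axis, measured separately per screen axis
	local yaw = math.atan(math.abs(relative.X) / depth)
	local pitch = math.atan(math.abs(relative.Y) / depth)
	return yaw <= horizontalHalfFov and pitch <= verticalHalfFov
end
```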
It’s also always good to take a look at the MicroProfiler to find any bottlenecks in your code.
To spare you a lengthy explanation, here are my two cents.
If you stress about simplicity and quick ways to approach your goal, just use ProximityPrompts.
If you care about wider control over your interaction mechanics, you can create your own system or, even better, use pre-existing open-sourced modules:
I’d just like to expand on my suggestion, which is actually very similar to the modules @ItzMeZeus_IGotHacked attached. In essence, all of those set ProximityPrompt.Style to Custom while retaining the default functionality of proximity prompts.
I threw together a quick example of what I meant. In the code snippet, I’m focusing on the selection and highlighting of objects instead of user input.
local ProxPromptService = game:GetService("ProximityPromptService")

local camera = workspace.CurrentCamera
local currentPrompt = nil

ProxPromptService.PromptShown:Connect(function(prompt, inputType)
	local object = prompt.Parent
	currentPrompt = prompt

	local is_displayed = false
	local maxOffset = math.min(camera.ViewportSize.X, camera.ViewportSize.Y)
	local screenPoint: Vector2 | Vector3
	local fromMidVector: Vector2

	-- Poll while this prompt is still the active one
	while currentPrompt == prompt do
		screenPoint = camera:WorldToViewportPoint(object.Position)
		fromMidVector = Vector2.new(screenPoint.X, screenPoint.Y) - camera.ViewportSize / 2

		-- Highlight only while the object sits within 30% of the
		-- screen's smaller dimension from the center
		if fromMidVector.Magnitude / maxOffset < 0.3 then
			if not is_displayed then
				is_displayed = true
				object.Highlight.Enabled = true
			end
		else
			if is_displayed then
				is_displayed = false
				object.Highlight.Enabled = false
			end
		end

		task.wait(0.1)
	end

	is_displayed = false
	object.Highlight.Enabled = false
end)

ProxPromptService.PromptHidden:Connect(function(prompt, inputType)
	-- Deferred so that if a new prompt was shown in the same frame,
	-- currentPrompt isn't wiped out from under it
	task.defer(function()
		if currentPrompt == prompt then
			currentPrompt = nil
		end
	end)
end)
I’ve also taken @https_KingPie’s suggestion to reduce the checks into consideration. As has also been suggested, you can replace :WorldToViewportPoint() with camera look vector checks; however, the performance difference shouldn’t be significant.
Here’s the result (the highlight means the prompt is active). The next step would be to display the custom prompt UI and accept user input once is_displayed is truthy.
Please excuse the poor quality, I had to compress the video.
For small spaces, another option is to simply utilize mouse.Target and only display when the object is hovered. Should you prefer not to use the legacy Mouse object, there’s always ScreenPointToRay(). If I remember correctly, the Mouse object does exactly that internally.
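Since the game is first-person with the cursor locked to the center, the raycast variant of that idea reduces to casting through the viewport center (a sketch; `maxDistance` is an assumed parameter):

```lua
local camera = workspace.CurrentCamera

-- Returns the part under the center of the screen, like mouse.Target,
-- but without the legacy Mouse object
local function objectUnderCursor(maxDistance: number): BasePart?
	local center = camera.ViewportSize / 2
	local ray = camera:ViewportPointToRay(center.X, center.Y)
	local result = workspace:Raycast(ray.Origin, ray.Direction * maxDistance)
	return result and result.Instance or nil
end
```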
These are all amazing responses! From what I’ve read, the way in which I select interactables is simply too crude in comparison to proximity prompts or advanced location methods that utilize compartmentalization and/or hashing. I quickly skimmed some information about spatial hashing and read that most implementations are not as practical for small places or unequal spacing between objects, so it probably won’t be for me. It seems like my best option here would be to embrace proximity prompts.
I’m reading through this and absolutely love the implementation. For some reason, it didn’t occur to me that I could just implement some additional selection logic on PromptShown and allow PromptTriggered to perform an action based on whether the additional logic succeeded or not. Many thanks for the amazing presentation! I’ll definitely give your method a go.
And real quick, what the heck is this? At first I thought it was a hypothetical notation for ease of comprehension, but out of curiosity I put it in the editor, and the intellisense doesn’t throw an error. I have embarrassingly never seen this before, and I’m not sure what to slap onto the Google search bar to figure it out. As far as I can tell, it seems similar to type declaration in C-based languages, and | seems to act as a tuple.
No problem, glad I could help! I simply adapted something I made in one of my projects, and it turned out pretty well on my end.
This heck is type annotation in Luau. Roblox expanded their Lua sandbox with type checking. For instance, remember when you add a line that calls a built-in function like RunService.Heartbeat:Connect(...), and the editor shows you which arguments to send and what the function returns?
There’s more to type checking, but it mostly helps us know the data types, especially what the function somebody wrote or you wrote months ago returns, and what the proper arguments are.
Example
local function DoSomethingForSomeReason(num: number, str: string): number
	return num + #str
end
local result = DoSomethingForSomeReason(5, "a string")
The above would tell us that the called function expects a number for the first argument and a string for the second, as well as that it returns a number.
If strict mode (--!strict) is enabled, any mismatched types are underlined in red.
It’s a habit of mine to annotate the types of not-yet-defined variables and public functions. It reminds me that fromMidVector is going to be a Vector2, and that screenPoint can be either a two- or three-dimensional vector (the | denotes a union type: one or the other, not a tuple).
Wow, I’m surprised I haven’t seen this at all before. It seems to date back 2+ years. I found a post that covers it and will definitely keep it on my radar. Thank you once again!