On the Matter of Lag Compensation & RayCasting

A little while ago I made this post asking for help with my inaccurate server-sided RayCasting. The solution became too complex - so just like I did with my online classes, I gave up. Until today.

Introduction

This question gets asked quite a bit on here, though I didn't notice how often until I went searching for my old thread. For those unaware of the issue, providing secure yet accurate RayCasting for a weapon is a paradoxical task. You can either sacrifice ray accuracy for security, or sacrifice your security for ray accuracy.

[gif: gun]
Shown above is an example of ray inaccuracy. As you can clearly see, when the user is moving while firing the gun, the rays do not appear to originate from the weapon. This is because the server does not receive the client's information before it creates the ray; by the time the server casts it, the client has already moved to another position. The above gif can be reproduced by creating a gun whose rays are created strictly on the server.

This is the most secure way of handling rays as the client simply sends information which the server can verify. By handling rays on the server, it also makes it much easier to deal damage to other players without risking exploitation.
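For reference, here is a minimal sketch of what that fully server-sided setup might look like. The RemoteEvent name, the 500-stud range, the 10-stud origin check, and the damage logic are all assumptions for illustration, not code from the gun in the gif:

```lua
-- Server Script: a minimal sketch of the fully server-sided approach.
-- "FireWeapon" (a RemoteEvent in ReplicatedStorage), the 500-stud range, and the
-- 10-stud origin check are assumptions for illustration.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local fireEvent = ReplicatedStorage:WaitForChild("FireWeapon")

fireEvent.OnServerEvent:Connect(function(player, reportedOrigin, direction)
	local character = player.Character
	local root = character and character:FindFirstChild("HumanoidRootPart")
	if not root then
		return
	end

	-- Reject origins that are nowhere near where the server sees the player.
	if (reportedOrigin - root.Position).Magnitude > 10 then
		return
	end

	local params = RaycastParams.new()
	params.FilterDescendantsInstances = {character}
	params.FilterType = Enum.RaycastFilterType.Exclude

	-- The server builds its own ray, so the client can't fake a hit position.
	local result = workspace:Raycast(reportedOrigin, direction.Unit * 500, params)
	if result then
		-- result.Instance / result.Position are trustworthy here;
		-- deal damage, replicate the beam, etc.
	end
end)
```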

Lag Compensation

Lag compensation is a unique approach to dealing with this issue. Sticking with the example of RayCasting and going off of Valve’s definition of lag compensation, it would work by storing all of the user’s positional data for a brief period of time. Then, rays would be compared with that past positional data to determine if a hit was made.
[image]
The gray cube in this very exaggerated example shows what the server would store as the player's positional data for a brief period of time, because that's where the server sees the player. Rays that intersect with the cube would count as a hit. Hopefully this explanation is accurate, and if it isn't, please let me know.
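For the curious, a rough sketch of what storing that positional history could look like on the server is below. The one-second window and the Heartbeat sampling rate are arbitrary assumptions, and this isn't taken from a real implementation:

```lua
-- Server Script: a rough sketch of storing positional history for lag compensation.
-- The 1-second window and per-Heartbeat sampling are arbitrary assumptions.
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local HISTORY_SECONDS = 1
local history = {} -- [player] = { {t = timestamp, cframe = CFrame}, ... }

-- Snapshot every player's root part each Heartbeat and trim old entries.
RunService.Heartbeat:Connect(function()
	local now = os.clock()
	for _, player in ipairs(Players:GetPlayers()) do
		local character = player.Character
		local root = character and character:FindFirstChild("HumanoidRootPart")
		if root then
			local snapshots = history[player]
			if not snapshots then
				snapshots = {}
				history[player] = snapshots
			end
			table.insert(snapshots, {t = now, cframe = root.CFrame})
			while #snapshots > 0 and now - snapshots[1].t > HISTORY_SECONDS do
				table.remove(snapshots, 1)
			end
		end
	end
end)

Players.PlayerRemoving:Connect(function(player)
	history[player] = nil
end)

-- When validating a shot, rewind a target to the snapshot closest to when the
-- shooter actually fired, then test the ray against that old position instead.
local function getRewoundCFrame(targetPlayer, shotTime)
	local snapshots = history[targetPlayer]
	if not snapshots or #snapshots == 0 then
		return nil
	end
	for i = #snapshots, 1, -1 do
		if snapshots[i].t <= shotTime then
			return snapshots[i].cframe
		end
	end
	return snapshots[1].cframe
end
```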

Fortunately, ROBLOX is pretty swift about updating the server with the player's positional data. As you can see in this video, the difference is barely noticeable (client is on the left, server is on the right). And in our case of RayCasting, the issue isn't about dealing with targets "that aren't where the server thinks they are", it's about solving the weird aesthetic issue of the beam coming from nowhere.

Solving this with a similar approach would require prediction. Instead of storing data about where the player was, we would have to predict where the player is going and draw the ray based on that prediction. With that, we have some good and bad news. The good news is that it's probably more than possible to make those predictions. The bad news is that I don't have enough energy to actually try to do that.
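If someone does want to try it, the rough shape would be something like the following. It's nothing more than a naive linear extrapolation, and the function name and latency source are assumptions on my part:

```lua
-- A naive prediction sketch (not what I ended up doing): extrapolate the shooter's
-- position by their current velocity and an estimated one-way latency before
-- building the server ray. The name and the latency source are assumptions.
local function predictOrigin(rootPart, latencySeconds)
	-- AssemblyLinearVelocity is the server's latest known velocity for the part.
	return rootPart.Position + rootPart.AssemblyLinearVelocity * latencySeconds
end

-- Usage idea: Player:GetNetworkPing() returns an estimate of one-way latency in seconds.
-- local origin = predictOrigin(character.HumanoidRootPart, player:GetNetworkPing())
```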

Instead, I present to you my grand and totally unique theory.

Hypothesis

Drawing one ray each on the client and the server will result in an endpoint that is just as accurate as a single ray on the server.

Research

I have made a weapon that shoots two rays. One ray is rendered on the client and one ray is rendered on the server. The differences can be clearly seen.

[image]
The red beam is rendered on the client and the blue beam is rendered on the server. Just like in my jumping example, there is a noticeable difference in the render times.
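For anyone who wants to reproduce the test, the setup was roughly the following. The Tool layout, the "FireWeapon" RemoteEvent, and the drawBeam helper are assumptions here, and character filtering is omitted for brevity:

```lua
-- A stripped-down sketch of the two-ray test weapon. The Tool layout, the
-- "FireWeapon" RemoteEvent, and the drawBeam helper are assumptions;
-- filtering out the shooter's own character is omitted for brevity.

----- LocalScript inside the Tool: the red (client) ray -----
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local fireEvent = ReplicatedStorage:WaitForChild("FireWeapon")
local tool = script.Parent
local muzzle = tool:WaitForChild("Muzzle") -- assumed barrel part

local RANGE = 500

tool.Activated:Connect(function()
	local origin = muzzle.Position
	local direction = muzzle.CFrame.LookVector * RANGE
	local result = workspace:Raycast(origin, direction)
	local endpoint = result and result.Position or origin + direction
	-- drawBeam(origin, endpoint, BrickColor.new("Really red")) -- assumed helper
	fireEvent:FireServer(direction)
end)

----- Server Script: the blue (server) ray -----
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local fireEvent = ReplicatedStorage:WaitForChild("FireWeapon")

fireEvent.OnServerEvent:Connect(function(player, direction)
	local character = player.Character
	local serverMuzzle = character and character:FindFirstChild("Muzzle", true)
	if not serverMuzzle then
		return
	end
	-- The server casts from wherever it currently sees the weapon.
	local origin = serverMuzzle.Position
	local result = workspace:Raycast(origin, direction)
	local endpoint = result and result.Position or origin + direction
	-- drawBeam(origin, endpoint, BrickColor.new("Really blue")) -- assumed helper
end)
```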

[gif: soup]

At first I thought this would be the one true solution! The endpoints appeared to be extremely accurate and the client beam was also accurate. However, after jumping around and shooting my gun in excitement, I was met with a soul crushing result.

[image]

The rays were not accurate 100% of the time. My day was ruined, and my disappointment was immeasurable. But instead of completely giving up, I thought I would write up this report and share with you my results.

I set up a small range and did 50 "jump shots". Shots where the endpoints were within a stud of each other are considered accurate, and shots where the endpoints are farther than a stud apart are considered inaccurate. Out of these 50 shots, 6 were inaccurate.

I also did 50 regular shots where I did not jump at all, and only moved horizontally. All 50 shots were completely accurate.

Conclusion :cry:

I was unable to prove my hypothesis with 100% certainty. If anyone wishes to contribute their arcane knowledge of this issue I would greatly appreciate it.

All the best,
he_ro

12 Likes

What you could probably try is a very hacky method of grabbing the player's movement direction/velocity, then placing the effect's starting position somewhere between the client's reported starting position, a velocity-based guess of where they're heading, and the server-sided starting position - either right in between or a little further forward.

Basically, you're using three pieces of info to work out where the player is and where they're moving at the time, then using that to create a midpoint for the starting point of your ray.

May or may not work?

Or, if you want the server to handle it completely, just grab the move direction and the server-sided raycast starting position and use those two.
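If I'm reading this right, a sketch of that midpoint idea might look like this (the weighting, the 0.1-second nudge, and the names are all guesses on my part):

```lua
-- A sketch of the midpoint idea above: blend the client-reported origin, the
-- server-side origin, and a small push along the player's velocity.
-- The weighting and the 0.1s nudge are assumptions, not tested values.
local function blendedOrigin(clientOrigin, serverOrigin, velocity)
	local midpoint = (clientOrigin + serverOrigin) / 2
	-- Nudge the midpoint a little in the direction the player is moving.
	return midpoint + velocity * 0.1
end

-- e.g. blendedOrigin(reportedOrigin, root.Position, root.AssemblyLinearVelocity)
```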

1 Like

This would work in some instances. However, there are also a lot of unique circumstances that are very hard to predict. Like what if the player begins falling off of a building? Or if they, mid-air, decide to move in the other direction?

You can grab the player's velocity and account for those scenarios. And if you really needed to, you could offset based on the velocity - or, if you want to get complicated, some tricky maths.

However, I believe that as long as the effect is being done on the client (which it should be) and the server's ending position matches the client's ending position, it should be all fine and dandy.

Server handles hit detection, client handles starting pos of effect, unless there is something I’m missing in this post that requires you to have near pin-perfect accuracy.

You're right, it's entirely possible. But for small shooter games, it's not very practical. And that's kind of my whole point: there is no one easy solution.

Have you tested this? Sure, you can see a difference when you see the ray, but will players actually feel a difference? All FPS games have lag and most try to compensate. You shouldn’t be expecting perfection.

If I'm not mistaken, Phantom Forces just does visual effects on the client, meaning it looks no different, while hit detection is done on the server, which determines where the player has hit the other player. It should be accurate enough - little to no delay and still nice-looking effects.

As @MmadProgrammer has linked, visual effects are done on the client, while server-sided work like hit detection is, of course, done on the server.

1 Like

This topic is somewhat old, but it wasn't worth creating a new one:

What if, when you fire your gun, you raycast on the client, send that information over to the server, and the server raycasts as well? This way, you have somewhat uncertain data (the client's raycast information) and also certain data (the server's own raycast).

You'd compare the data of the red and blue beams in your videos, and if the red beam (client) is close enough to the blue beam (server), the server chooses the red beam over the blue one. I'll let anyone who reads this decide how they should go about exploit-proofing it. (My guess is setting a "believable" limit on how far apart the data can be.)

Essentially, with this solution you can trust the client when its data is believable. I think it's a hell of a lot easier and more efficient than trying to guess where the client was aiming with velocity/direction.
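Something like this, perhaps - the 5-stud tolerance, the RemoteEvent shape, and the 500-stud range are guesses, and the exploit-proofing is left as loose as described above:

```lua
-- Server Script: a sketch of the "believable limit" idea. The 5-stud tolerance,
-- the "FireWeapon" RemoteEvent, and the 500-stud range are assumptions.
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local fireEvent = ReplicatedStorage:WaitForChild("FireWeapon")

local TOLERANCE = 5 -- studs; how far apart the two endpoints may be
local RANGE = 500

fireEvent.OnServerEvent:Connect(function(player, direction, clientEndpoint)
	local character = player.Character
	local root = character and character:FindFirstChild("HumanoidRootPart")
	if not root then
		return
	end

	local params = RaycastParams.new()
	params.FilterDescendantsInstances = {character}
	params.FilterType = Enum.RaycastFilterType.Exclude

	local serverResult = workspace:Raycast(root.Position, direction.Unit * RANGE, params)
	local serverEndpoint = serverResult and serverResult.Position
		or root.Position + direction.Unit * RANGE

	-- If the client's endpoint is believable, trust the client's (red) ray;
	-- otherwise fall back to the server's (blue) ray.
	local endpoint
	if (clientEndpoint - serverEndpoint).Magnitude <= TOLERANCE then
		endpoint = clientEndpoint
	else
		endpoint = serverEndpoint
	end
	-- ...deal damage / draw the beam toward `endpoint` here.
end)
```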

4 Likes

The problem with this one is that if there's nothing behind the enemy, the server ray will go into the void and the distance will be infinite, so the damage won't be dealt. Maybe make the server ray only as long as the client ray was? This is the only solution I can think of.
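If it helps, capping the server ray at the client-reported distance is only a couple of lines. The names here are made up, and `params` would be the same RaycastParams as in the sketch above:

```lua
-- A tiny sketch of that suggestion: cap the server ray's length at the distance
-- the client reported, so a missed ray can't run off into the void.
local function raycastCappedToClient(origin, direction, clientEndpoint, params)
	local clientDistance = (clientEndpoint - origin).Magnitude
	return workspace:Raycast(origin, direction.Unit * clientDistance, params)
end
```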