Okay, this was a weird bug.
A while ago I worked on some Engine optimizations that made certain jobs run in a fixed, correct order; for example, packet processing happens before physics. A few weeks ago we turned this optimization on for the servers. Before that, server jobs used a polling system where each job reported how urgently it wanted to run based on its expected frequency and workload, but there was no set order of operations for the kinds of work the server did.
The way parts get deleted from Workspace is that when a client detects a part has fallen, it sets the part's network ownership to the server. The server is then responsible for deleting these parts during a physics processing job. Apparently, if the job that decides network ownership runs before the physics job, it says “Wait, why am I the owner? That client should own it.” This starts an endless cycle of the client setting the owner to the server, and the server setting the owner back to the client before it ever gets to delete the part.
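To make the livelock concrete, here's a toy simulation of the cycle under a fixed job order. Every name in it (clientReport, ownershipJob, physicsJob) is invented for illustration; this is not engine code, just a sketch of the sequencing problem.

```lua
-- Toy model: one part, and the server's jobs running in a fixed order
-- where the ownership job always runs before the physics job.
local part = { owner = "client", deleted = false }

local function clientReport()
	-- Client notices the part fell and asks the server to take ownership.
	part.owner = "server"
end

local function ownershipJob()
	-- Server decides the client should simulate the part and hands it back.
	if part.owner == "server" then
		part.owner = "client"
	end
end

local function physicsJob()
	-- Server only deletes fallen parts that it currently owns.
	if part.owner == "server" then
		part.deleted = true
	end
end

for step = 1, 5 do
	clientReport()  -- packet arrives
	ownershipJob()  -- runs first under the fixed order...
	physicsJob()    -- ...so the part is client-owned again and never deleted
end

print(part.deleted) --> false, every time
```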
The reason this didn’t happen before my optimization is that jobs had no preset order. If you repeated the client-to-server, server-to-client loop enough times, you would eventually hit a frame where the packet arrived right before the server ran a physics job, and the part would finally be deleted, ending the vicious cycle.
Lesson Learned: Make your code more predictable.
The fix should be out by next Thursday. In the meantime, a workaround for anyone experiencing this issue is to call setNetworkOwner(nil) on any parts that fall below the “kill height”, as sketched below.
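A minimal server-side sketch of that workaround, assuming a Script in ServerScriptService. KILL_HEIGHT is an assumption on my part; set it to match your place's actual fall-destroy height. The per-Heartbeat scan of all descendants is deliberately naive, so a real place would want to throttle it or only track unanchored parts it cares about.

```lua
-- Server Script: force server ownership on fallen parts so the server
-- can actually delete them, breaking the ownership ping-pong.
local RunService = game:GetService("RunService")

local KILL_HEIGHT = -500 -- assumed value; match your place's fall height

RunService.Heartbeat:Connect(function()
	for _, part in ipairs(workspace:GetDescendants()) do
		if part:IsA("BasePart") and part.Position.Y < KILL_HEIGHT then
			-- SetNetworkOwner errors on anchored parts (and parts welded
			-- to anchored ones), so check first.
			if part:CanSetNetworkOwnership() then
				part:SetNetworkOwner(nil) -- nil = the server owns the part
			end
		end
	end
end)
```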
BONUS ROUND: This is only reproducible with high enough latency. Are y’all not from North America?