Hello all, I’m making a segment intersector module that finds intersections between segments in O(n log n) time. I achieved this by using the Bentley-Ottmann algorithm, and I also use a spatial QuadTree that stores segments in its leaf nodes so I can partition the intersection calculations into multiple regions for parallel processing.
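Roughly, the dispatch side of my setup looks like this (heavily simplified sketch, not my actual module; `workerTemplate`, `quadTree:GetLeaves()`, and the message name are placeholders):

```lua
-- Heavily simplified sketch of the dispatch side; names are placeholders.
-- Each QuadTree leaf holds a batch of segments, and each batch is sent to an
-- Actor so the intersection work runs on a separate parallel Luau VM.
local NUM_WORKERS = 8
local actors = {}

for i = 1, NUM_WORKERS do
	local actor = Instance.new("Actor")
	workerTemplate:Clone().Parent = actor -- script that binds "ComputeBatch"
	actor.Parent = script
	actors[i] = actor
end

-- Serial -> parallel: this message send is the first crossing that shows up
-- as overhead in the profiler
for i, leaf in ipairs(quadTree:GetLeaves()) do
	local actor = actors[(i - 1) % NUM_WORKERS + 1]
	actor:SendMessage("ComputeBatch", leaf.Segments)
end
```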
Now, I tried implementing the parallel processing part, but I found that sending signals from serial to parallel takes a significant amount of time. That’s even without passing any arguments through a BindableEvent.
You can see in this image that:

- The normal serial time to calculate all intersections is 0.015 seconds.
- Sending a signal from serial to parallel takes around 0.014 seconds max, already about as long as it took to find all the intersections in serial.
- The actual parallel execution of finding intersections takes around 0.006 seconds max, half the serial time, which is the performance boost I was looking for.
- Sending a signal from parallel back to serial takes 0.39 seconds max, which completely defeats the purpose of using parallel processing to speed up the calculations.
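For reference, the worker side of the round trip looks roughly like this (simplified sketch; `findIntersections` and `resultsEvent` are placeholders for my actual code). It’s the crossing back to serial before firing the result that carries the big delay:

```lua
-- Simplified worker script inside each Actor; names are placeholders.
-- resultsEvent is a BindableEvent the serial side listens to.
local actor = script:GetActor()

actor:BindToMessageParallel("ComputeBatch", function(segments)
	-- Parallel phase: intersection search for this region (~0.006 s total)
	local intersections = findIntersections(segments)

	-- Parallel -> serial: this is where the large delay shows up in the
	-- profiler, before the results ever reach the BindableEvent
	task.synchronize()
	resultsEvent:Fire(intersections)
end)
```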
I know some may be thinking, “well, are there even enough segments for this to be useful?” Maybe the overhead of dispatching a small number of segments defeats the purpose of using parallel processing to speed up the process. However, this happens even when I compute intersections for 2000 or 3000 segments. The overhead of sending a signal to the parallel processing VMs and back still takes a very significant amount of time.
This is the image of what’s actually happening:
Does anyone know why this could be happening?