I’ve managed to parallelize parts of my code, but for some reason, even though the tasks run in parallel, they take longer than they would in serial, so in the end there’s literally no performance gain, maybe even a performance loss.
I parallelize it, and even though it’s parallelized, the tasks are taking much longer than they would in serial. Each parallel task in that image has the exact same code as the one labeled “DB” in the first image.
Do you have any idea why this might be? I’m using a big shared table and indexing into it a lot. Might the VM be slower when running in parallel?
Try profiling more specific parts of the code to see where the bottleneck is coming from. It usually comes from data bandwidth, but it could be something else.
Generally you want to minimize data transactions in multithreading, so if your current solution involves a lot of data moving around, you should look for a different approach.
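One common way to cut down on data movement is to batch work into a single message per actor per frame instead of many small sends. A minimal sketch, assuming a hypothetical `actor` reference and a made-up `"ProcessBatch"` topic (neither is from the original posts):

```lua
-- Hedged sketch: accumulate work locally, then hand it to the actor's VM
-- in one SendMessage call instead of one call per item.
-- `workItems`, `actor`, and the "ProcessBatch" topic are hypothetical names.
local batch = {}
for _, item in ipairs(workItems) do
	table.insert(batch, item) -- local table insert; no cross-VM traffic yet
end
-- One serialization/copy across the VM boundary instead of #workItems of them.
actor:SendMessage("ProcessBatch", batch)
```

Each `SendMessage` pays a per-call cost to copy its arguments into the receiving VM, so fewer, larger messages tend to scale better than many small ones.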
So I’ve successfully parallelized my code. It turned out I was using shared tables, and updating a value meant propagating that update to the other actors’ VMs as well. I didn’t actually need that, so I ditched shared tables.
However, I still have a scalability issue with player count. The more players there are, the more I have to “prepare” the data to be sent to an actor’s VM via SendMessage. I’m currently trying to work around this, but I’m having a hard time. This is ultimately my bottleneck.
Are you writing to a SharedTable during parallel execution? You should only do that in the serial phase. Temporarily store what needs to be written in a local table, and after the thread resynchronizes, apply the changes to the SharedTable.
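The buffer-then-apply pattern described above can be sketched like this, assuming the code runs inside an Actor script and `sharedState` is a SharedTable obtained elsewhere (`someExpensiveComputation` is a hypothetical placeholder for the parallel work):

```lua
-- Sketch: defer SharedTable writes to the serial phase.
local pendingWrites = {}

task.desynchronize() -- enter the parallel phase
for key, value in someExpensiveComputation() do
	pendingWrites[key] = value -- only touch the local table while parallel
end
task.synchronize() -- resynchronize: back in the serial phase

for key, value in pendingWrites do
	sharedState[key] = value -- now it's safe to write to the SharedTable
end
```

This keeps the parallel phase free of cross-VM writes; the SharedTable is only touched once per batch, after `task.synchronize()`.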
Is it one event per thread per frame? You should also consider some kind of data compression to lower the overall bandwidth (and thus the bottleneck). For example, encode all your data into a string and compress it with LZW.
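For reference, a minimal LZW compressor in Luau might look like the sketch below. This is not the poster’s code, just an illustration of the idea; the receiving actor would need a matching decoder, and in practice you’d measure whether the compression cost actually beats sending the raw string:

```lua
-- Minimal LZW sketch: encodes a string into a table of dictionary codes.
local function lzwCompress(input: string): {number}
	local dict = {}
	for i = 0, 255 do
		dict[string.char(i)] = i -- seed dictionary with single bytes
	end
	local nextCode = 256
	local current = ""
	local output = {}
	for i = 1, #input do
		local c = input:sub(i, i)
		local combined = current .. c
		if dict[combined] ~= nil then
			current = combined -- keep extending the current match
		else
			table.insert(output, dict[current]) -- emit code for longest match
			dict[combined] = nextCode -- learn the new sequence
			nextCode += 1
			current = c
		end
	end
	if current ~= "" then
		table.insert(output, dict[current])
	end
	return output
end
```

Whether this helps depends on your data: repetitive payloads (e.g. many similar player records) compress well, while already-compact binary data may not be worth the CPU time.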