I’ve recently done an overhaul of my code and I’m a little stumped as to why my newfound parallel code tends to spike CPU and lag in testing environments. I’m definitely going to revert to a serial refactor, but I definitely want to know if I’m visibly doing something wrong. In general, my parallel code handles 3 types, doing 2 batches for each type (IndexNPC -> Target and vice versa; these are the batches). I’ve mitigated unnecessary iterations as much as possible: only using recently changed models, checking necessary dictionary serials, delay cooldowns based on physics calcs, etc. The code for those 3 types tends to be almost identical to the following:
Right now I have 32 actors, and I have sequencing in place to ensure fair actor utilization based on the worker number associated with IterationInfo.
Revis = function(ChangedModels, IterationInfo)
	for ChangedModel, _ in pairs(ChangedModels) do
		--fetch self, constant time
		local ChangedModelsNPCSelf = NPCTypesHandler.FetchTargetNPCSelf(ChangedModel)
		if not GlobalCurrentTargets[ChangedModel] then
			continue
		end
		for OpposingNPC, _ in pairs(GlobalCurrentTargets[ChangedModel].SeeingTargets) do
			local VisValue
			if IterationInfo.Batch == "1" then
				VisValue = GlobalCurrentTargets[ChangedModel].Visible[OpposingNPC]
			elseif IterationInfo.Batch == "2" then
				VisValue = GlobalCurrentTargets[OpposingNPC].Visible[ChangedModel]
				if VisValue == nil then
					--the opposing NPC is not invisible to the changed model; nothing to reassess here
					continue
				end
			end
			--if the value is true, it's on a delay cooldown
			if VisValue == true then
				continue
			end
			local OpposingNPCSelf = NPCTypesHandler.FetchTargetNPCSelf(OpposingNPC)
			--resolve which side is the index and which is the target for this batch
			local IndexNPC, PlayersNPCSelf, PlayerIndex, TargetNPC, TargetsNPCSelf
			if IterationInfo.Batch == "1" then
				IndexNPC = ChangedModel
				PlayersNPCSelf = ChangedModelsNPCSelf
				PlayerIndex = ChangedModelsNPCSelf.Player
				TargetNPC = OpposingNPC
				TargetsNPCSelf = OpposingNPCSelf
			elseif IterationInfo.Batch == "2" then
				TargetNPC = ChangedModel
				TargetsNPCSelf = ChangedModelsNPCSelf
				IndexNPC = OpposingNPC
				PlayersNPCSelf = OpposingNPCSelf
				PlayerIndex = OpposingNPCSelf.Player
			end
			--we first need to ensure that this model comparison set is valid for initializing
			if ValidateParticipation[IterationInfo.Title](IndexNPC, PlayerIndex, OpposingNPC, IterationInfo) == false then
				continue
			end
			if AllySeen(PlayerIndex, IndexNPC, TargetNPC) == true then
				--remove this IndexNPC from the global tables.
				--done so that when it's false due to reassessment we can remove it from future considerations.
				--the ally in particular will reignite .Invisible participants when Revis fails for them.
				GlobalCurrentTargets[TargetNPC].Visible[IndexNPC] = nil
				GlobalCurrentTargets[TargetNPC].Invisible[IndexNPC] = false
				continue
			end
			--======================== end of AllySeen validation
			--add to messages
			Messages[DetermineCallbackTitle(IterationInfo)] += 1
			local IsWithinCQCMagnitude, IsWithinMagnitude
			--if the target NPC doesn't have an entry we don't care; the sibling thread should be handling it, or it's handled within self
			if GlobalCurrentTargets[TargetNPC] then
				IsWithinMagnitude = GlobalCurrentTargets[TargetNPC].WithinMagnitude[IndexNPC]
				IsWithinCQCMagnitude = GlobalCurrentTargets[TargetNPC].WithinCQCMagnitude[IndexNPC]
			end
			--message an actor based on the assigned number
			GetActorForRequester(ActorAssignment[DetermineCallbackTitle(IterationInfo)]):SendMessage("CombatControllerUniversalAssessment",
				actor,
				IterationInfo,
				PlayersNPCSelf,
				TargetsNPCSelf,
				raycastparams,
				{
					IsWithinCQCMagnitude = IsWithinCQCMagnitude,
					IsWithinMagnitude = IsWithinMagnitude
				}
			)
		end
	end
	--print(Messages[DetermineCallbackTitle(IterationInfo)] .. " messages for " .. IterationInfo.Title .. " : " .. IterationInfo.Batch)
	if Messages[DetermineCallbackTitle(IterationInfo)] == 0 then
		task.synchronize()
		ThreadCompletionBools[DetermineCallbackTitle(IterationInfo)].Value = true
	end
end,
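For reference, the fair-utilization piece boils down to wrapping an assigned worker number onto the actor pool. This is just a minimal sketch of that idea, not my exact GetActorForRequester; the pool setup and the modulo mapping here are illustrative:

--minimal sketch of the round-robin assignment idea; ActorPool and the mapping are illustrative
local ActorPool = {} --filled with the 32 Actor instances at startup
local function GetActorForRequester(workerNumber)
	--wrap the assigned worker number onto the pool so utilization stays even
	return ActorPool[(workerNumber - 1) % #ActorPool + 1]
end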
I don’t see anything wrong with it. Also, for context: a task can consist of magnitude checks, dot products, and a raycast, though not always all of them, i.e. if the 1st magnitude check fails then that’s the end of the task.
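To illustrate that early-exit shape, here’s a rough sketch of a task; every field name on playerSelf/targetSelf (RootPart, SightRange, FOVThreshold, Model) is a stand-in rather than my real structure:

--sketch of the worker-side early-exit chain; all fields here are stand-ins
local function Assess(playerSelf, targetSelf, params)
	local offset = targetSelf.RootPart.Position - playerSelf.RootPart.Position
	--1st magnitude check: if this fails, that's the end of the task
	if offset.Magnitude > playerSelf.SightRange then
		return false
	end
	--dot product for field of view; only runs when the magnitude check passes
	if playerSelf.RootPart.CFrame.LookVector:Dot(offset.Unit) < playerSelf.FOVThreshold then
		return false
	end
	--the raycast is the most expensive step, so it runs last
	local result = workspace:Raycast(playerSelf.RootPart.Position, offset, params)
	return result == nil or result.Instance:IsDescendantOf(targetSelf.Model)
end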
I just think I’m somehow incorporating it wrong, but I don’t know enough about parallel Luau to say for sure. I’ll be working on removing actor utilization very soon so I can negate the lag altogether…
The receiving message handler merely reads the results the worker produced and alters certain tables based on them.
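Roughly, the worker side is the standard BindToMessageParallel pattern: do the reads and math in parallel, then synchronize before anything touches shared state. This is a sketch assuming the handler signature matches the SendMessage call above; the reply topic name and the Assess call are placeholders:

--sketch of the worker actor; "AssessmentComplete" and Assess are placeholders
local actor = script:GetActor()
actor:BindToMessageParallel("CombatControllerUniversalAssessment",
	function(requester, iterationInfo, playerSelf, targetSelf, params, magnitudes)
		local result = Assess(playerSelf, targetSelf, params) --parallel-safe reads only
		task.synchronize()
		--back on the serial side: safe to reply so the requester can alter its tables
		requester:SendMessage("AssessmentComplete", iterationInfo, result)
	end)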
EDIT
I’ve finished refactoring back to a serial context, and the processing time is now significantly optimized (compared to the old code that assessed everything in one go, nested loops galore) down to mere microseconds at a generally high test sample size. So despite not being able to utilize actors and complete parallelism, it was still a worthwhile attempt in the end.