How long of a computation will start to impact performance?

I have a server-sided function that’s pretty heavy and runs 5x a second. I’ve measured how long it takes, but I’m not sure how much of an impact this will actually have.

On average it takes about 0.0005 seconds; it can be much higher or much lower, but that’s a fair average. Since it runs 5 times a second, about 0.0025 seconds are spent on it every second. How much will this strain my server resources? Or does it depend on how the function is coded? (I loop through massive models and call :GetAttribute() on each part.)
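A timing like that can be measured by wrapping the work in os.clock() calls, roughly like this sketch ("Assembly" and the "Worth" attribute are placeholder names for this example):

-- Rough timing sketch: measures one pass of the summing loop.
-- "Assembly" and "Worth" are placeholders for the real model/attribute names.
local model = workspace:WaitForChild("Assembly")

local started = os.clock()
local total = 0
for _, part in ipairs(model:GetChildren()) do
    total += part:GetAttribute("Worth") or 0
end
local elapsed = os.clock() - started

print(string.format("Summed %d children in %.6f s", #model:GetChildren(), elapsed))
-- At ~0.0005 s per pass and 5 passes per second, that is ~0.0025 s of work
-- per second, i.e. roughly 0.25% of one server core.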

Have you tried using script analysis to see how much activity the script is using? I think that should give you a rough idea of performance.

Script Analysis is a tool used for syntax. If you mean Script Performance, it’s about 0.2% and 1/s (I don’t know the actual importance of these numbers, all I know is higher is bad).

What exactly are you trying to accomplish? Do you really need to loop through the entire model?

Yes I do, I’m trying to get a total value of an assembly.

I don’t think I’m qualified enough to answer this, but I don’t think Lua is made for computationally heavy stuff. You could maybe try to optimize it. Also, can you show example code?

local total = 0
for i,v in pairs(model:GetChildren()) do -- model has up to 10,000 parts
   total += v:GetAttribute("Worth")
end

Couldn’t you just give the attribute to the model and check whether the part’s parent is under the model with the attribute?

I don’t get what you mean by this; I need the sum of every attribute that every part belonging to a model has.

Ohh, nevermind then. I assumed you just wanted an overall attribute.

Why do you need to do it 5x a second? What is the exact purpose of doing it?

How is worth calculated? Why do you need to loop through the model 5 times a second? If worth doesn’t change, then you’ll be wasting resources.

You could just attach an AttributeChanged connection to the parts you’re summing up so that when any of them changes, you can quickly calculate the new value.
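Something along these lines (a rough sketch; it assumes every part under the model carries a numeric "Worth" attribute, and "Assembly" is a placeholder name):

-- Rough sketch of an event-driven running total. Assumes every tracked part
-- has a numeric "Worth" attribute; "Assembly" is a placeholder name.
local model = workspace:WaitForChild("Assembly")

local total = 0
local worths = {} -- part -> last Worth we counted

local function track(part)
    local worth = part:GetAttribute("Worth") or 0
    worths[part] = worth
    total += worth
    part:GetAttributeChangedSignal("Worth"):Connect(function()
        if worths[part] == nil then return end -- part already removed from the total
        local new = part:GetAttribute("Worth") or 0
        total += new - worths[part]
        worths[part] = new
    end)
end

for _, part in ipairs(model:GetChildren()) do
    track(part)
end
model.ChildAdded:Connect(track)
model.ChildRemoved:Connect(function(part)
    if worths[part] then
        total -= worths[part]
        worths[part] = nil
    end
end)

That keeps the total current without re-walking all 10,000 parts every 0.2 seconds; in a real version you’d also want to disconnect the attribute connections when parts leave the model for good.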

There seems to be a lot of confusion around this:

The attributes, or the number of items, change faster than every 0.25 seconds.

Worth does change; that’s the entire reason I am doing this:

To accurately get the value of a constantly changing mass of parts.

I am not looking to redesign this entire system. I am simply trying to figure out the performance impact this method has on servers. If it’s poor, then yes, I will try to redesign it; if it’s not, then I will leave it.

In real game engines we’d create a GPGPU program with kernels (in this context a kernel is like a tiny program which runs in parallel), but compute shaders don’t exist in Roblox :pensive: As long as you aren’t calling it every single second, I don’t see a problem, really.

This sounds more worrying than actually calculating the whole assembly mass.

Anyways, what I would do is stress test the system. Spawn like 10, 100, 1000 assemblies and see how the server performs. How many assemblies can it handle? If it handles more than you need then that’s great, otherwise you might need to drop recalculation to 4, 3 or 2 times per second.
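Something like this, run from a server Script, would give you concrete numbers (a rough sketch; "AssemblyTemplate" and the "Worth" attribute are placeholders for your actual setup):

-- Rough stress-test sketch: clone N copies of a template assembly and time one
-- full recalculation pass. "AssemblyTemplate" and "Worth" are placeholder names.
local template = workspace:WaitForChild("AssemblyTemplate")

for _, count in ipairs({10, 100, 1000}) do
    local clones = {}
    for i = 1, count do
        local clone = template:Clone()
        clone.Parent = workspace
        table.insert(clones, clone)
    end

    local started = os.clock()
    local total = 0
    for _, assembly in ipairs(clones) do
        for _, part in ipairs(assembly:GetChildren()) do
            total += part:GetAttribute("Worth") or 0
        end
    end
    print(string.format("%d assemblies: %.4f s per pass", count, os.clock() - started))

    for _, clone in ipairs(clones) do
        clone:Destroy()
    end
end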

I don’t know how to tell when something is handling too much. Is a 0.0005-second computation too long? Is 0.4% script usage too much?

No, it’s not. If it stays at only that much, then it’s cool. Are you going to use more than one assembly?

Nope, that was pretty much the max.