Hello. I have some random data that I would like to get the average of. I can do it like this:
local data = {}
local rand = Random.new()

-- Generate 100 random samples uniformly distributed between 0 and 1.
for i = 1, 100 do
	data[i] = rand:NextNumber(0, 1)
end

-- Sum the samples, then divide by the sample count to get the average.
local average = 0
for i = 1, 100 do
	average = average + data[i]
end
average = average / 100

print(average)
The problem, however, is that the average is supposed to be 0.5, but when I print this it is only ever very close to 0.5, never exactly 0.5. The more samples I use, the closer the average gets to 0.5, but it still never equals it. Another problem is that the random data I actually want to average is not something the script generates, but something the script observes (i.e. the position of the mouse on the screen, or the time between keyboard presses). So the solution isn’t as simple as averaging the boundaries of the random number generator; I can’t just do local average = (1 + 0) / 2 -- Would be 0.5.
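For reference, here is roughly what I mean by averaging observed data instead of generated data. This is just a rough sketch (assuming a LocalScript) that samples the mouse’s X position once per frame through UserInputService:

-- Rough sketch: average an observed value (the mouse's X position)
-- instead of a generated one. Assumes this runs in a LocalScript.
local UserInputService = game:GetService("UserInputService")
local RunService = game:GetService("RunService")

local samples = {}
for i = 1, 100 do
	RunService.RenderStepped:Wait() -- wait one frame between samples
	samples[i] = UserInputService:GetMouseLocation().X
end

local sum = 0
for _, value in ipairs(samples) do
	sum = sum + value
end
print(sum / #samples) -- sample average of the observed values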
I have no idea if this belongs in scripting support, because this is more of a math question, but there have been many other math questions in scripting support before and they haven’t been taken down, and I don’t know what other category this topic would fit into. Plus, I am still scripting something that needs this, so technically it does belong in scripting support.
:NextNumber() returns a double-precision floating point number with, I believe, around 17 significant digits. You are trying to get a number with 1 decimal place, so you will have to round your average (or the result of :NextNumber()) to 1 decimal place.
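Since Lua has no built-in function for rounding to decimal places, a common approach looks something like this (the roundTo helper is just an example name):

-- Round a value to the given number of decimal places.
-- Lua has no math.round, so this uses math.floor with +0.5.
local function roundTo(value, decimalPlaces)
	local factor = 10 ^ decimalPlaces
	return math.floor(value * factor + 0.5) / factor
end

print(roundTo(0.4987, 1)) --> 0.5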
Rounding your average to one decimal place will work either way. I should also mention that computers cannot represent most decimal numbers exactly in floating point, so this isn’t really a math issue but a computational limitation. There are plenty of videos out there explaining why that is, but for ease, I’ve provided a sample one below.
Even Roblox themselves admit to this. Looking at the documentation for floating-point numbers, it mentions that the single-precision float type “[isn’t] as precise as double-precision floating-point numbers, but is sufficiently precise for most use cases and requires less memory and storage.”
I don’t see why it is crucial for your system to produce an exact 0.5 as a floating-point number, but if your averaging math is theoretically correct, there’s no reason not to just round the result.
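To illustrate the limitation, here is a quick snippet; nothing Roblox-specific, just standard floating-point behaviour:

-- Many decimal fractions cannot be stored exactly in binary floating point,
-- so sums and comparisons can be off by a tiny amount.
print(0.1 + 0.2 == 0.3) --> false
print(string.format("%.17f", 0.1 + 0.2)) --> 0.30000000000000004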
From a statistics perspective, this is exactly what you should expect: 0.5 is only the expected average.
With infinite precision, for example, the chance of the sample average landing exactly on 0.5 is effectively zero. Instead, the sample averages follow a continuous distribution over an infinite set of possible values, centered around 0.5.
Consider this: if you have 100 people flip coins, the expected average is 50 heads and 50 tails. This doesn’t mean you should expect to see 50 heads every time though. In reality, you’d actually see lots of numbers other than 50 (though they would usually still be pretty close to 50).
TL;DR: You shouldn’t expect to get exactly 0.5 every time; instead, you should expect to get approximately 0.5 most of the time.
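If you want to see this for yourself, here is a rough simulation sketch that repeats the 100-coin-flip experiment many times and counts how often exactly 50 heads comes up (the exact numbers will vary from run to run):

-- Repeat the 100-coin-flip experiment many times and count how often
-- the result is exactly 50 heads: it happens sometimes, but far from always.
local rand = Random.new()
local trials = 10000
local exactlyFifty = 0

for trial = 1, trials do
	local heads = 0
	for flip = 1, 100 do
		if rand:NextInteger(0, 1) == 1 then
			heads = heads + 1
		end
	end
	if heads == 50 then
		exactlyFifty = exactlyFifty + 1
	end
end

print(exactlyFifty, "out of", trials, "trials had exactly 50 heads")
-- Typically only around 8% of trials land on exactly 50 heads.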
This is exactly what I am looking for. How would I calculate the expected average instead of the “real average”?
Edit: I think I found what I am looking for, but if anyone else has anything to contribute, or if what I found is not what I was looking for, please reply.
Calculating the expected average requires knowing the distribution the data comes from. We can’t help you further right now because you haven’t specified what data you are getting.
As @.D4_rrk said, the best you can do is round the real average.
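For what it’s worth, with observed data the usual approach is to keep a running sample mean: by the law of large numbers it gets closer and closer to the true expected average as you collect more samples, even though it will essentially never equal it exactly. A minimal sketch (the recordSample function and the example values are just for illustration):

-- Running sample mean: an estimate of the expected average that
-- improves as more observed samples come in.
local count = 0
local mean = 0

local function recordSample(value)
	count = count + 1
	mean = mean + (value - mean) / count
	return mean
end

-- Example usage with whatever values your script actually observes:
for _, observed in ipairs({0.42, 0.57, 0.49, 0.51}) do
	print("estimate so far:", recordSample(observed))
end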