So, I have two sets of trees, and I want to know which one is better.
Currently I don't know what to use to see which one performs better and which one takes less memory.
I have tried:
- I used the MicroProfiler, but it wasn't very helpful because the numbers increase even in an empty baseplate. Sometimes I even got the same results for both sets of trees.
- I used the deprecated DataCost property, but I don't know how it works, or if it even works properly:
  - First Set of Trees: 5808 DataCost
  - Second Set of Trees: 4500 DataCost
- I checked the Developer Console, but its (Instance) memory usage is not that precise.
What should I do?
Unless you’re planning to cram a whole ton of high-detail trees all around your map, I don’t think you have anything to worry about.
If you’re really worried about which one is better for performance, choose the set of trees that has fewer parts. If the trees use meshes, choose the one with fewer polygons.
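If you want to compare part counts quickly, here's a rough sketch in Luau you could run from the Studio command bar. It assumes each set is grouped under a Model in the Workspace; the names `FirstTreeSet` and `SecondTreeSet` are placeholders for whatever your models are actually called.

```lua
-- Sketch: count the BaseParts (including MeshParts) under each tree set.
-- The model names below are hypothetical; adjust them to your setup.
local function countParts(model)
	local count = 0
	for _, descendant in ipairs(model:GetDescendants()) do
		if descendant:IsA("BasePart") then
			count += 1
		end
	end
	return count
end

print("First set parts:", countParts(workspace.FirstTreeSet))
print("Second set parts:", countParts(workspace.SecondTreeSet))
```

Whichever set prints the lower number is usually the cheaper one to render and simulate, all else being equal.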
The First Set of Trees has CollisionFidelity set to Default, CanCollide = true, SmoothingAngles = 0, and RenderFidelity set to Precise.
The Second Set of Trees is the same as the first, except that CollisionFidelity is set to Box, CanCollide = false, SmoothingAngles = 100, and RenderFidelity is set to Automatic.
EDIT: I have a huge map, it needs to be full of trees.
It sounds like the first set of trees is more detailed than the second. If you are planning on having a huge map, then you should probably stick with the second set.
I think you’re confused about what DataCost did. It was a property used to figure out how many units of ‘data’ an instance would use in the old Data Persistence system. It does not represent how much memory the instance is actually using.
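Since DataCost says nothing about memory, a rough alternative is to sample the Instances memory tag from the Stats service before and after inserting one set of trees. This is only a sketch: it assumes the set is stored as a Model in ServerStorage under a hypothetical name, and the readings are noisy, so repeat it a few times and compare averages rather than trusting a single run.

```lua
-- Sketch: approximate the Instance memory cost of one tree set.
-- "TreeSetModel" is a placeholder name; swap in your own model.
local Stats = game:GetService("Stats")
local ServerStorage = game:GetService("ServerStorage")

local before = Stats:GetMemoryUsageMbForTag(Enum.DeveloperMemoryTag.Instances)

local trees = ServerStorage.TreeSetModel:Clone()
trees.Parent = workspace
task.wait(5) -- give the engine some time to load and settle

local after = Stats:GetMemoryUsageMbForTag(Enum.DeveloperMemoryTag.Instances)
print(("Approximate Instance memory delta: %.2f MB"):format(after - before))
```

Run it once per set (with the other set removed) and compare the deltas; it won't be exact, but it's far closer to real memory usage than DataCost ever was.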