Beams are optimized based on the current user’s graphics quality level. That is fine in principle, but the optimization appears to be implemented incorrectly: beams with low segment counts become invisible at low graphics levels.
Based on my observations, I believe the current optimization does not check the Segments property value before lowering the segment count, and that the reduction is a division by an integer determined by the graphics level.
At graphics level 1 (the minimum), this divisor appears to be 10: a beam with 20 segments is reduced to 2 segments, a beam with 50 segments to 5, and so forth.
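If my guess about the implementation is right, the reduction would behave roughly like this sketch. Everything here is hypothetical (the real code is internal engine C++, and the divisor table is inferred only from what I observed at level 1):

```python
# Hypothetical reconstruction of the observed behavior -- not actual engine code.
QUALITY_DIVISOR = {1: 10}  # graphics level 1 appears to divide segment count by 10

def reduced_segments(segments: int, quality_level: int) -> int:
    # No check against the beam's authored Segments value before dividing.
    divisor = QUALITY_DIVISOR.get(quality_level, 1)
    return segments // divisor

print(reduced_segments(20, 1))  # 2 (white beam)
print(reduced_segments(50, 1))  # 5 (red beam)
print(reduced_segments(10, 1))  # 1 (blue beam: too few segments to render visibly)
```

This would explain why the 10-segment beam disappears while the higher-segment beams merely lose detail.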
This forces developers to author significantly higher-poly beam effects just to guarantee they render at lower graphics levels.
Here are some screenshots showing the Studio quality level and the beam’s wireframe rendering. The white beam is 20 segments. The red beam is 50 segments. The blue beam is 10 segments.
Max Quality
Min Quality
Notice how the blue beam does not render at all in minimum quality.
A beam that only needs 4 tris should not have to be authored at 40 tris to survive the reduction.
Expected behavior
At the very least, I expect the Beam’s Segments property to be checked before the optimization is applied, and the optimization to be applied only when it will not completely de-render the Beam. That means never dropping the effective Segments count below 2.
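A minimal sketch of the floor I am asking for, assuming the divisor-based reduction described earlier (names and the divisor value are my own guesses):

```python
MIN_SEGMENTS = 2  # assumed floor below which a beam can no longer render

def reduce_with_floor(segments: int, divisor: int) -> int:
    # Apply the quality divisor, but clamp so the optimization
    # can never fully de-render the beam.
    return max(MIN_SEGMENTS, segments // divisor)

print(reduce_with_floor(50, 10))  # 5 -- high-segment beams still get reduced
print(reduce_with_floor(10, 10))  # 2 -- low-segment beams stay visible
```

Even this one-line clamp would fix the invisible-beam case without changing behavior for high-segment beams.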
Ideally, I would expect a better optimization than dividing Segments by a static divisor, but I understand that is more time-consuming to implement.