This might be intended behavior, but I don’t think it should be, at least for beams.
I’m trying to make animated sails on a boat, and they’re transparent unless there is no fog at all.
Also having cloaking enemies with fog is a no-go too.
I can only speak for the behavior of glass.
This is a limitation of our current rendering system. We currently apply fog every time we draw something to the screen, instead of drawing everything without fog and then applying fog as a depth-based post-process. Because of this, when we draw glass, the refraction shows the color of the red part affected by all of the fog between the red part and the camera, plus the surface of the glass affected by all of the fog between the glass and the camera. The fog between the glass and the camera gets applied twice, basically.
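To make the double application concrete, here is a small sketch in Python using a simple linear fog blend. This is not engine code: `apply_fog`, the fog color, and the fog factors are illustrative stand-ins for whatever the engine actually computes from depth.

```python
# Illustrative model (not engine code): fog blends a surface color toward
# the fog color by a factor f derived from distance/depth.

def apply_fog(color, fog_color, f):
    """Blend color toward fog_color by factor f in [0, 1]."""
    return tuple(c * (1 - f) + fc * f for c, fc in zip(color, fog_color))

red = (1.0, 0.0, 0.0)    # the red part seen through the glass
fog = (0.5, 0.5, 0.5)    # scene fog color
f_red_to_cam = 0.4       # fog factor for the red-part-to-camera distance
f_glass_to_cam = 0.3     # fog factor for the glass-to-camera distance

# Pass 1: the red part is drawn with fog for its full distance to the camera.
red_on_screen = apply_fog(red, fog, f_red_to_cam)

# Pass 2: the glass refraction samples that already-fogged color, and the
# glass draw then fogs it again for the glass-to-camera distance, so that
# segment of fog ends up applied twice to the refracted part.
seen_through_glass = apply_fog(red_on_screen, fog, f_glass_to_cam)
```

The refracted red ends up noticeably foggier than it should be, which is why glass (and similarly beams) look washed out or transparent whenever fog is present.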
I only know of one way around this: when performing refraction, use the depth value of what we’re refracting and the known fog color to “undo” the extra fog that has already been applied to the red part. However, this comes at a performance cost and only works on high-quality glass with refraction, so I don’t think it should ship until we have a solution that works for all quality levels. I wouldn’t expect that in the near future.
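The “undo” idea can be sketched the same way: because a linear fog blend is invertible (for fog factor below 1), you can remove the glass-to-camera fog from the refracted sample before the glass pass applies it again. Again, this is a hedged illustration with a hypothetical `unapply_fog`, not the engine’s actual shader; a real implementation would derive the fog factor from the sampled depth.

```python
# Illustrative model (not engine code) of undoing an extra fog application.

def apply_fog(color, fog_color, f):
    """Blend color toward fog_color by factor f in [0, 1)."""
    return tuple(c * (1 - f) + fc * f for c, fc in zip(color, fog_color))

def unapply_fog(fogged, fog_color, f):
    """Exact inverse of apply_fog for f < 1."""
    return tuple((c - fc * f) / (1 - f) for c, fc in zip(fogged, fog_color))

red = (1.0, 0.0, 0.0)
fog = (0.5, 0.5, 0.5)
f_glass_to_cam = 0.3  # fog factor for the glass-to-camera segment

# The refracted sample already contains one application of the
# glass-to-camera fog.
sample = apply_fog(red, fog, f_glass_to_cam)

# Naive glass pass: fogs the sample again -> double application.
naive = apply_fog(sample, fog, f_glass_to_cam)

# Fix: remove that segment first, so the glass pass restores exactly
# one application.
fixed = apply_fog(unapply_fog(sample, fog, f_glass_to_cam), fog, f_glass_to_cam)
```

The inversion requires knowing the fog color and a per-pixel fog factor reconstructed from depth, which is the extra work (and the performance cost) mentioned above.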