Pretty sure we can’t go THAT far with the current API, though?
You could say “put your head to the sights and close an eye” for fun.
The “close an eye” part wouldn’t matter at all, but they wouldn’t know.
Well, when you aim through iron sights you can’t sight along a straight line with both eyes - stereo fusion doesn’t work that way. What you have to do is look through the sights with one eye so that you can see the target you’re hitting. In general, if you don’t close the other eye it’s very hard to focus properly.
I had several people with shooting experience try PF in VR, and all of them instinctively did the right thing.
Although people also train themselves to keep both eyes open, at least in photography. The eye looking through the viewfinder is dominant and frames the picture, adjusts the focus and the light balance, etc., while the other eye keeps track of the action, so adjustments can be made if the situation changes outside the current frame.
Particularly useful in sports photography.
This probably isn’t as important in shooting, since sighting down a barrel doesn’t limit your peripheral vision as much as a camera typically does (unless you’re using a scope, which is probably one reason snipers have spotters).
I want someone to try this place, or a similar place, in VR while standing on a plank of wood with a fan blowing on them from the right side … and maybe with their hair a little damp as well.
The problem is that you can get really sick really easily. If you don’t have a VR headset to test with as you go, it’s very difficult to write good VR controls. Not saying your implementation is bad!
It helps to think of the headset not as a controller, but as a sensor. You need to try to reproduce head movement in 3D as closely as you possibly can. Smoothing isn’t necessary and any scaling or non-linear representation will make you sick.
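Roughly what I mean, as a minimal sketch - `headset` and `camera` here are hypothetical stand-ins for whatever objects your engine actually exposes:

```python
def update_camera(camera, headset):
    # Copy the tracked head pose onto the camera 1:1, every frame.
    # No smoothing, no scaling: the headset is a sensor, not a controller,
    # and anything other than a direct mapping tends to cause motion sickness.
    camera.rotation = headset.rotation  # e.g. a quaternion, copied as-is
    camera.position = headset.position  # include positional tracking if available
```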
I think my script just makes your head control the camera 100% (rotation-wise).
It would probably be better if the mouse still controlled the camera, with head movement adding an offset on top. (A bit like the “free look” camera in the Arma games, if you know what I mean.)
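Sketched out, assuming scipy for the rotation math - the `head_offset` input is hypothetical, and would be the headset’s rotation relative to its neutral pose, however your VR API reports that:

```python
from scipy.spatial.transform import Rotation as R

def camera_rotation(mouse_yaw, mouse_pitch, head_offset):
    # The mouse sets the base aim (which the weapon follows); the head
    # adds a free-look offset on top, Arma-style.
    base = R.from_euler('YX', [mouse_yaw, mouse_pitch], degrees=True)
    return base * head_offset  # apply the head offset within the aim frame

# e.g. glancing aside with the head while the mouse aim stays put:
# camera_rotation(90, -10, R.from_euler('Y', 20, degrees=True))
```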
I’m sure there’s a way to stream your phone’s screen to your PC. Android Lollipop (KitKat?) added native screen recording support, so it should be possible to find an app that does this.
Wouldn’t we want it the other way around, so we can use the power of, say, a dedicated NVIDIA card and a quad-core Intel CPU rather than an ARM CPU with a few shader cores?
But I already have an $850 laptop with a $260 SSD upgrade, and an LG G3 phone, so I would rather spend $20 on Google Cardboard than $600 on an Oculus Rift.
For GUIs, you should have the ability to set any element’s distance as a floating-point number, from 0 to infinity. You know, so you can do cool dynamic, 3D menus and effects.
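The math for that is simple enough - a rough sketch, all names hypothetical, assuming a camera that looks down +Z:

```python
import numpy as np

def place_gui_element(cam_pos, cam_rot, screen_xy, distance):
    # Put the element on a virtual plane one unit in front of the camera,
    # then scale that plane out by `distance`: the element keeps its screen
    # position but sits at its own depth, which is what lets every menu
    # element float at a different distance.
    local = np.array([screen_xy[0], screen_xy[1], 1.0]) * distance
    return cam_pos + cam_rot.apply(local)  # cam_rot: a scipy Rotation
```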