VR Primer

VR Conceptual Primer

Table of Contents:


Form Factor Tradeoffs

Wired vs Wireless

Wireless Tether Details

Walkabout vs Swivel Chair

Swivel Chair


Head Tracking Internal vs External

Constellation vs Lighthouse

Free Hand Simulation vs Handheld Controller

VR vs AR

Current AR

AR as a subset of VR

Open Source vs Proprietary

Technical Details

VR Sickness

Refresh Rate

Low Persistence Displays

Field of View

Sensory Mismatch

Simulation Freedom

Minimum Viable Product

Closing Notes: Form Factor


The perfect VR experience might be something like the Matrix or Star Trek’s Holodeck. But we’re not there yet; that’s science fiction. This is about where we are now: science fact.

Right now, VR is a technology that straps LED screens to the user’s forehead to immerse them in a rendered simulation. By providing wide, binocular peripheral vision at near-native ocular refresh rates, with perfectly synced head tracking, VR provides not just a subconscious-level suspension of disbelief, but a biological-level simulation of presence not possible with a traditional monitor. Rather than having to passively forget that it is not real, it becomes hard to actively convince yourself that it is not real.

There are a lot of grey areas in this new technology, and a lot of developers and consumers will be ignorant of some of the realities inherent in the new form factors until they meet them face to face, especially across the spectrum of consumer devices currently being released. This document aims to provide some understanding of the capabilities, boundaries, and tradeoffs inherent in this new technology.

Disclaimer: I do not work for any of the hardware or software companies mentioned in this document, nor am I currently developing a VR title, and I am slightly biased towards Oculus over HTC and Sony.

Form Factor Tradeoffs

Wired vs Wireless

The first divide to understand is Wired VR devices versus Wireless VR devices. Basically, is the VR hardware tethered to a computer, or is it essentially a mobile phone attached to your head?

Gear VR is Wireless

Rift and Vive are Wired

Wireless devices must provide not only the optics (screen and lenses) but power and processing as well. Wired devices have the benefit of offloading power and processing to a computer, but have a cord to deal with.

The pros of the Wireless device are portability and immediacy (pull it out of your backpack and amaze friends at a party), but the cons include a limited power supply, CPU/GPU heat near the user’s head, lower processing power, and restricted head tracking. Currently the Gear VR uses inertial sensors for tracking, which work fine for looking left to right (swivel), up and down (tilt), or cocking your head to the side (roll), but do not track the position of your head relative to your body (pan). There are some accessories emerging to provide outside-in tracking for the Gear VR that would add positional tracking.

The pros of the Wired device are a functionally infinite power supply, magnitudes greater processing power, and external devices for improved positional tracking. The greatest con is the potential of the cord getting in the way and physically tethering you to one location (sharing with your friends requires luring them to your desktop). A minor con is that the CPU/GPU lives in a general-purpose compute platform far away from the headset, resulting in a small latency overhead for purely architectural reasons, though this is mostly below what humans can notice.

Both form factors have to worry about comfort, which includes the weight of the headset on the user’s face as well as customizing the lens configuration for different eyes. Because the Wireless device is already limited by its power supply, extended (8 hour) sessions are less of a concern, so arguably the Wired devices need to be more comfortable than Wireless ones.

Some of these cons can be alleviated in time with hardware improvements, while others are inherent in the form factor. Battery life and processing power will improve for Wireless devices, but they will necessarily lag behind Wired devices. Meanwhile, Wired devices will continue to have a cord; given how low the transmission latency must be, a wireless tethering approach would (so far) add noticeable lag, leading to an unpleasant user experience.

Wireless Tether Details

For VR to entirely fool the user’s senses, it needs to have a round trip latency of no more than 20ms. 50ms will be noticeably laggy. That’s not “render a frame in 20ms”, it’s the full round trip of neck move -> sensor -> simulation -> render -> display -> photons reach eye. That means we’re transmitting data across the wire twice: from the headset down to the computer (sensor -> simulation) and back up to the headset (render -> display).
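To make that budget concrete, here is a little Python sketch that just adds up a hypothetical set of per-stage costs; the individual numbers are illustrative assumptions, not measurements from any particular headset:

    # Rough motion-to-photon latency budget (illustrative numbers only).
    # Each stage cost is a hypothetical figure in milliseconds; real systems
    # overlap stages and vary per device.
    stages_ms = {
        "sensor read + send to PC": 1.0,
        "simulation / game logic": 4.0,
        "render frame (GPU)": 9.5,          # roughly one frame at 90-120 Hz
        "send frame to display": 1.0,
        "display scan-out, pixel lit": 3.0,
    }

    total = sum(stages_ms.values())
    print(f"total motion-to-photon: {total:.1f} ms")
    print("within the 20 ms budget" if total <= 20 else "over budget: will feel laggy")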

Currently Wired devices such as the Rift and Vive are using one HDMI 1.3 and two USB 3.0 cables to send that data.

I can’t find all the specs I need for these formats, but according to Wikipedia: “USB2 high-speed (480 Mbit/s) uses transactions within each micro frame (125 µs) where using 1-byte interrupt packet results in a minimal response time of 940 ns. 4-byte interrupt packet results in 984 ns.” USB 3.0 is said to be ten times faster.

Meanwhile, something like Bluetooth “Smart” would have a bandwidth of 0.27 Mbit/s with a minimum latency of 3 ms. While “Classic” gives up to 2.1 Mbit/s but with a latency of ~100ms.

Keep in mind 984 ns (nanoseconds) = 0.000984 ms (milliseconds).

It’s clear that in terms of both bandwidth and latency, wired beats wireless by at least two orders of magnitude (at least 100x), and that even at its fastest, Bluetooth can’t transmit ANY data (much less enough data) fast enough to avoid adding noticeable latency.
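To put the bandwidth side in concrete terms, here is a back-of-the-envelope sketch assuming an uncompressed 2160x1200 stereo stream at 90 Hz and 24 bits per pixel (roughly the ballpark of the first consumer Wired headsets; the exact figures are assumptions):

    # Back-of-the-envelope: uncompressed display stream vs. wireless link bandwidth.
    # Panel numbers are assumptions roughly matching early consumer Wired headsets.
    width, height = 2160, 1200        # combined pixels for both eyes
    refresh_hz = 90
    bits_per_pixel = 24               # uncompressed RGB

    display_mbit_s = width * height * refresh_hz * bits_per_pixel / 1e6
    bluetooth_smart_mbit_s = 0.27     # figures quoted above
    bluetooth_classic_mbit_s = 2.1

    print(f"uncompressed video stream: {display_mbit_s:,.0f} Mbit/s")
    print(f"Bluetooth Smart: {bluetooth_smart_mbit_s} Mbit/s")
    print(f"shortfall: ~{display_mbit_s / bluetooth_classic_mbit_s:,.0f}x even vs. Bluetooth Classic")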

Something like RFID or NFC, while potentially lower latency, has even less bandwidth, and the effective range drops to centimeters (in the case of NFC).

Almost everyone in the VR industry seems to agree that wireless tethering is not going to be a thing anytime soon.

Without getting into the topic of AR yet, users interested in the Microsoft HoloLens may be interested in considering how it is affected by Wireless device form factor issues.

Walkabout vs Swivel Chair

The next major form factor divide is between experiences where the user is expected to move about their room as they move in the simulation, and experiences where they sit in a fixed position while they move in the simulation. This applies to both Wired and Wireless devices.

There are two major concerns here; one is Sensory Mismatch, and the other is Simulation Freedom.

Swivel Chair

With the Swivel Chair approach, the simulation is assumed not to correspond to the physical space of the user. This results in a high propensity for Sensory Mismatch, but a correspondingly high level of Simulation Freedom.

If the simulation represents the user as a soccer player, the user will have to deal with the sensory mismatch of thinking that they are running down a field while their legs are still. Meanwhile, because they are sitting in a chair independent of the size of the simulation, they will be able to move the full length of the simulated soccer field even if they are playing from the inside of a broom closet.

As long as the users can stomach the mismatch, we can craft simulations of large scale and function, independent of the intended users’ environments.


Meanwhile, with the Walkabout approach, where we have a 1:1 ratio of user motion to simulation motion, we can get rid of a lot of Sensory Mismatch at the cost of Simulation Freedom. With high quality tracking systems we can move around in a simulated environment that either mimics the user’s actual environment, or a subset of it, in a way that feels natural. Moving conceptually maps to moving physically, for the most part.

There are a couple of ways this breaks down. In the case that the simulated environment is a subset of the user’s actual environment (i.e. the user is in a 5m x 5m room, but the simulation is only 1m x 1m), it is possible for the user to physically walk outside of the simulated area, causing a conceptual mismatch. Alternatively, if the simulation is a superset of the user’s actual environment, where the user’s mobility is expected to be limited to the physical space available, the VR simulation typically has to show a wall boundary to alert the user that they are nearing a conceptual mismatch (before they walk into a physical wall). Mischievous cats and coworkers making the physical environment dynamic only add to the confusion, and Wired devices add another layer of difficulty.
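As a minimal sketch of how a simulation might surface that wall boundary, assuming the tracking system reports the headset position and a simple rectangular play area (the function and thresholds here are hypothetical, not any vendor’s API):

    # Hypothetical boundary check: fade in a warning "wall" as the tracked
    # headset approaches the edge of a rectangular play area.
    def boundary_warning_alpha(head_x, head_z, half_width, half_depth,
                               warn_distance=0.4):
        """Return 0.0 (no warning) through 1.0 (at or past the boundary)."""
        # Distance from the headset to the nearest edge of the rectangle.
        dist_to_edge = min(half_width - abs(head_x), half_depth - abs(head_z))
        if dist_to_edge >= warn_distance:
            return 0.0
        return min(1.0, 1.0 - dist_to_edge / warn_distance)

    # A user standing 0.1 m from the edge of a 2 m x 2 m play area:
    print(boundary_warning_alpha(0.9, 0.0, half_width=1.0, half_depth=1.0))  # 0.75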

Furthermore, the physical space itself is ultimately stationary, whereas the simulated space may not be. In the case of the VR theme park called “The Void” in Utah, one of their simulations includes an elevator lift the user steps onto. Because the lift itself is stationary but the simulation implies it is accelerating upwards, they add vibrations and wind to help trick the user into thinking the space is more dynamic than it is.

Custom physical elements to support specific simulations are not a good general-purpose solution. In the general case, VR developers will have to write simulations that somehow adapt to the unknown conditions of consumers’ available space. I predict that in most cases this will result in a general lowering of capabilities to an agreed-upon lowest common denominator.

Head Tracking Internal vs External

Tracking the user’s head and updating the rendered scene is vital to the VR experience. It needs to be accurate and fast. If the user’s rendered scene doesn’t move 1:1 with their own head, Sensory Mismatch kicks in with full force and the experience becomes terrible fast.

But apart from accuracy and speed, there are several dimensions to worry about, falling into the categories of position and rotation. The Gear VR has no positional tracking. Instead it uses inertial sensors to track rotation only; that is, it covers pitch, yaw, and roll. Positional tracking in Wireless devices is an area of heavy interest for VR researchers, but not yet a viable consumer feature.
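The difference is easy to see in data terms: a rotation-only (3 DoF) tracker can report orientation but has to assume a fixed head position, while a 6 DoF system reports both. A minimal sketch, with illustrative field names:

    from dataclasses import dataclass

    @dataclass
    class Pose3DoF:
        # Orientation only, e.g. from inertial sensors.
        pitch: float = 0.0   # look up / down, in degrees
        yaw: float = 0.0     # look left / right
        roll: float = 0.0    # tilt head toward a shoulder

    @dataclass
    class Pose6DoF(Pose3DoF):
        # Adds position, which needs an external (or world-scanning) tracker.
        x: float = 0.0       # metres; stepping or leaning sideways
        y: float = 0.0       # standing up / crouching
        z: float = 0.0       # leaning forward / back

    # A 3 DoF headset can tell the simulation that you turned your head,
    # but leaning forward to inspect an object produces no position data at all.
    print(Pose3DoF(yaw=45.0))
    print(Pose6DoF(yaw=45.0, z=0.2))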

Constellation vs Lighthouse

The Oculus Rift and HTC Vive each support both rotational and positional tracking, but they are taking two different approaches.

Oculus calls the Rift’s tracking system Constellation, which is an “outside-in” solution. From what I understand, a camera sits on the user’s desk pointed toward the user, connected to the computer, not the headset. The headset has a group of flashing IR LEDs on its surface which are visible to the camera. Based on which lights are visible when, the tracking system can analyze the image from the camera and determine the position and orientation of the headset relative to the camera.

Since the camera is connected to the computer, the processing is done on the computer with the simulation; all the headset has to do is shine bright like a diamond. Because a camera is being used, it’s possible to include an RGB camera (not just an IR detector) to take video of the physical world to send into the simulation. Oculus has said that multiple cameras will be possible, and even necessary for their Oculus Touch handheld controllers.

The Vive is using a system called Lighthouse, which is an “inside-out” solution. There are a minimum of two Lighthouses arranged on opposing sides of the room. The Lighthouses are essentially just laser projectors; they are not connected to the computer or the headset (just power) and they don’t sense anything. Meanwhile, the headset has an array of photosensors on its surface which detect when the Lighthouses’ sweeping lasers pass over them; that timing information is then processed to figure out the position and orientation of the headset relative to the Lighthouses.

Since the Lighthouses themselves are relatively “dumb” (passive), it is easy to add more of them to the system expanding the covered area down the hallway and around corners, though for Wired devices the cord still limits the user’s mobility.
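The core trick is timing: a base station emits a sync pulse, then sweeps a laser line across the room at a known rotation rate, and each sensor converts “when was I hit?” into “at what angle am I from the base station?”. A minimal sketch of that conversion, with an assumed (illustrative) rotor speed:

    # Convert a sweep-hit time into an angle from the base station.
    # Assumes the rotor sweeps a full revolution at a fixed, known rate;
    # the 60 Hz figure here is purely illustrative.
    SWEEP_HZ = 60.0
    SWEEP_PERIOD_S = 1.0 / SWEEP_HZ

    def hit_time_to_angle_deg(t_sync_s, t_hit_s):
        """Angle (degrees) swept between the sync pulse and the sensor hit."""
        fraction_of_revolution = (t_hit_s - t_sync_s) / SWEEP_PERIOD_S
        return fraction_of_revolution * 360.0

    # A sensor that was hit 2.5 ms after the sync flash:
    print(hit_time_to_angle_deg(0.0, 0.0025))   # 54 degrees into the sweep

    # With angles from a horizontal and a vertical sweep, and many sensors at
    # known positions on the headset, the full pose can be solved for.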

Another “inside-out” tracking technique was tried earlier: Valve had a version where they covered a room in QR-code-like stickers. The headset had hardware which looked at the stickers and figured out its position relative to them, but that wasn’t deemed a viable solution for most consumers.

I don’t want my room to look like this.

Free Hand Simulation vs Handheld Controller

Imagine for a second that headsets were perfect. The simulated scene looks great, and tracking is flawless. You still want to be able to interact with the scene. The headset alone will let you look around, but apart from a very simplistic scheme of interaction by staring at things, the user will need more input.

One option in the Wired device case is simply to keep the user sitting at the computer with a mouse and keyboard. However, this doesn’t match up well with the VR experience. The mouse is mapped to a 2D surface, and the keyboard-and-mouse combo requires the user to face generally in one direction. It can work in simulations where the user is in a spaceship cockpit and the ship itself is controlled indirectly through joystick manipulation (rather than by the user’s body), but it clearly fails in Walkabout scenarios.

When the user is standing or walking around, it is better to have more of an analog to the body: either the hands themselves, free-floating joysticks held in each hand, or props such as a gun-shaped controller.

Actually tracking the hands themselves turns out to be very difficult. Fingers are relatively small, and rotating hands or clenching fists mean they are occluded from an outside observer’s view a lot of the time. The Leap Motion controller (not to be confused with Magic Leap) is a solution specifically set up to detect hand gestures; when used with VR, it is often attached to the front of the headset, where it can observe the user’s hands and report their general position and orientation to the simulation. By knowing where most of the hand is, even when some fingers are occluded, their orientations can be estimated. However, if the user holds their hands out of view of the Leap (such as down at their sides, or behind their back), the hands become untrackable.

Oculus and HTC have both put forward motion-tracked handheld controller solutions; by knowing where the controller is, they can assume the position of the hand, and using inverse kinematics they can infer the most likely orientation of the hand, as it is connected to an arm, which is connected to a shoulder, which is connected to a neck, which has a head, which is wearing the tracked headset. By comparison, the accuracy and speed of tracking bare hands is low relative to tracking a handheld controller.

While position tracking of handheld controllers seems quite adequate, and the presence of buttons and triggers allows additional intent input, the hand must hold the controller, so common hand gestures (such as waving, pointing, or thumbs-up) cannot be made or tracked directly. Some attempts have been made to simulate gestures; for instance, if the user’s thumb is not resting on a thumb trigger, it can be assumed they are giving a thumbs-up. Similarly, if the index finger is not resting on its trigger, perhaps the user is pointing. A simulation can then render a thumbs-up or pointing gesture relative to the controller position, without knowing the actual positions of the fingers.
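A minimal sketch of that kind of gesture inference, assuming the controller reports whether the thumb and index finger are resting on their respective sensors (the function and field names are hypothetical, not a real controller API):

    # Hypothetical gesture inference from controller touch / rest sensors.
    def infer_hand_gesture(thumb_resting, index_resting, trigger_pressed):
        """Guess a hand pose to render, without true finger tracking."""
        if trigger_pressed:
            return "grip"         # fist closed around something
        if not thumb_resting and index_resting:
            return "thumbs_up"    # thumb lifted off its pad or stick
        if thumb_resting and not index_resting:
            return "pointing"     # index finger lifted off the trigger
        if not thumb_resting and not index_resting:
            return "open_hand"
        return "relaxed"

    print(infer_hand_gesture(thumb_resting=False, index_resting=True,
                             trigger_pressed=False))   # thumbs_up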

VR vs AR

“Virtual Reality” is essentially your eyes enclosed by a fully simulated world. “Augmented Reality” is your eyes seeing the real world, with some simulated images overlaid on top. Google Glass was probably the biggest AR device to make a splash with the general public. Microsoft HoloLens may be the next.

The major advantages of AR include less disorientation, because the user sees the normal world the way they are used to, and the ability to use AR while interacting with the real world. The former is a nice-to-have, but the latter is a very powerful feature that puts the basic intention of AR into a different category than VR.

Current AR

As of this writing (Oct 2015), Microsoft has only announced a developer edition of the HoloLens, and the price tag is $3,000 USD (compared to $300 USD for the original Rift DK1).

Clearly AR hardware is not going to hit the mass market until several years (or five) after VR at the current rate. And what’s with the giant price difference?

To begin with, Microsoft has taken a particular approach with their device. HoloLens is expected to be a standalone compute device. That is, it’s wireless, and it’s supposed to be as powerful as some computers. That means more energy consumption, more CPU heat, and more processing power than just a cell phone strapped to your forehead (like the Gear VR).

Which is ironic, given that all of the HoloLens demos they’ve shown so far fit well within the bounds of what a wired device can do (sitting at a desk, or standing in a room). But even if they went with a wired solution, the HoloLens would still have to accomplish more than the Gear VR: it needs high-precision inside-out world tracking, and wide field of view holographic projection.

The way AR works is by projecting light into the user’s eye while still allowing the natural light of the real world in. Currently the HoloLens has a very narrow projected field of view (only about 30 degrees in width, and even less in height). Anything outside of that essentially just gets clipped (it disappears). So if you’re dealing with objects that are positioned relative to the real world, they can only be several inches tall, or the user will have to move their head up and down to see all of them.

You can’t even see the entire demo app’s UI in one go.

Meanwhile, in order for the objects you CAN see to be believable, they need to feel like they’re fixed in place relative to the real world. That means when you turn your head by 2 degrees, the real-world coffee table shifts by 2 degrees in your view, and the hologram needs to have moved by exactly 2 degrees as well. This requires very accurate inside-out tracking, in both rotation and position, and without the use of an external anchor like the Lighthouse solution.
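Those two constraints interact: the hologram is world-locked, so its direction in the room never changes; what changes is whether that direction currently falls inside the narrow projected window. A minimal sketch, assuming a roughly 30-degree horizontal window:

    # World-locked hologram vs. a narrow projected field of view.
    ASSUMED_FOV_DEG = 30.0    # approximate horizontal projection window

    def hologram_visible(hologram_bearing_deg, head_yaw_deg, fov_deg=ASSUMED_FOV_DEG):
        """True if a world-fixed hologram currently falls inside the projected FoV."""
        # Angular offset between where the head points and where the hologram
        # sits in the room, wrapped into the range [-180, 180).
        offset = (hologram_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
        return abs(offset) <= fov_deg / 2.0

    # A hologram pinned 20 degrees to the user's left:
    print(hologram_visible(-20.0, head_yaw_deg=0.0))     # False: clipped
    print(hologram_visible(-20.0, head_yaw_deg=-10.0))   # True: turned toward it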

So $3,000 might seem like a lot, and some have speculated the price is purely to put off developers who aren’t seriously in it for the long haul. But considering that, in addition to all the fancy AR-specific gear, it also has to pack the processing power of a current $1,000 laptop and fit on your head, the price seems reasonable (assuming you’re set on keeping it Wireless).

I have seen no HoloLens demos that involved walking around outside. Maybe it’s just not ready yet for that kind of demo, but that is where the ultimate promise of AR comes in: being able to apply to and enhance ANY situation anywhere in the world.

AR as a subset of VR

But my money is still on VR for now, especially since you can imagine a scenario where AR becomes a subset of VR. While the display technology of the Rift and the HoloLens is vastly different, it would theoretically be possible to add an array of cameras to a VR headset and feed the video into the simulation and on to the headset’s displays. Figuring out the position and distortion of the external camera lenses necessary to simulate what the user’s eyes would have been seeing if there weren’t a device wrapped around their face is not easy, but it might be possible. You would then have a device that could handle both AR and VR (and without the severe Field of View limitations of current holographic projection).

Open Source vs Proprietary

Another thing to look at in the world of VR is the concept of Open Source hardware and associated ecosystem software (drivers, SDKs, App Stores, etc.) versus Proprietary ones. OSVR and Valve’s OpenVR are two examples of Open Source initiatives, but the driving force right now is Proprietary solutions.

The consumer products that Oculus, HTC, Sony, and others are bringing to market are not going to be the top-of-the-line hardware that is possible. They are going to be stable, efficient-to-produce units that can be mass-marketed (at least relative to the price of the GPU required to run them).

With Open Source hardware, smaller companies can get some skin in the game and compete with potentially lower-end devices, or higher-end ones, or try to add their own technological improvements. And Open Source software can let middleware vendors target one common SDK instead of having to handle an onslaught of different proprietary devices separately.

There are plenty of opportunities for hardware and software solutions surrounding VR that are not currently offered commercially. How about a dedicated Wireless headset (not a repurposed cellphone with lenses), or a Wired AR solution? I’m sure branded VR App Stores will pop up like mushrooms once the commercial devices have taken root.

But beware statements like, “X needs to make their stuff Open Source and then they’d have lots more developers and beat Y.” Open Source does not mean market superiority. Commercial developers need to be paid for their work, whether they’re building on top of an Open Source device or not. The market drives developers, not Open Source.

Technical Details

VR Sickness

When VR is done incorrectly it is very easy to cause motion sickness and disorientation. Symptoms include sweating, eye strain, fatigue, nausea, headache, lightheadedness, and disorientation, and they can persist for some time after the user has stopped using the VR headset. Clearly not a positive experience.

Work by Valve, Oculus, and many other researchers over the last few years (but with roots going back decades to military research) has come to define with a great degree of accuracy what will definitely cause sickness and how to avoid it for most users. Things like refresh rate, blur when the user turns their head, lack of anti-aliasing, and dynamic motion over high-contrast scenes all have an effect, and commercial devices are being rigorously tested to ensure they avoid inducing these problems. But not everything can be solved with better hardware; things like Sensory Mismatch largely have to be solved by design.

There is also a large variance in tolerance among the tested population. The CEO of Oculus, Brendan Iribe, is very sensitive to VR sickness and can’t stand most simulations for very long, while their CTO, John Carmack, has spent many long hours in VR. And most people will adapt, such that with repeated exposure the negative reaction is reduced or eliminated.

Refresh Rate

Normal 2D screen-based applications typically need a refresh rate of 30-60 frames per second (roughly 16-33ms per frame). But in order to trick the eye at a biological level into believing the VR simulation, we need a total system latency of 20ms or less. That means the time from when the user moves their head to the time the screen redraws at the updated position has to be 20ms or less.

In order to achieve this, we need to squeeze the time necessary to update the screen in order to make room for the head tracking and simulation update steps. The general consensus is that 90 to 120 fps (a frame draw time of roughly 8-11ms) will give enough room for the rest of the dominoes to fall into place in time for believable VR. But display technology for the last decade has targeted 60Hz, so VR hardware manufacturers have had to produce new screens to support the faster refresh rates necessary.

In addition, graphics card manufacturers have been adding more transistors and more RAM to support higher-bandwidth graphics processing, also targeting 60Hz. This results in a latency overhead that can make it difficult to achieve faster output on a consistent basis. Because VR needs low latency, graphics companies have added new driver modes to cut down on some of this overhead and favor immediate output when attached to a VR display.

Low Persistence Displays

Just showing simulated frames on the screen at 90 or 120Hz is still not enough. When a photon of light is emitted from a display, it hits the retina in the back of the user’s eye and “excites” the photoreceptor cells (rods and cones) there.

Eyes are the window to the soul; also to the brain.

The excited cell transmits information to the brain, but it doesn’t start and stop instantly; it warms up and cools down. This means that staring at a white pixel on a black screen, then panning that pixel across the screen, will result in a blurred white line that fades quickly over time. When this is applied to a fully rendered scene, you get “blurred motion” when looking around, as the colors are basically smeared across the retina.

Researchers discovered that instead of leaving the pixels of the display constantly lit, they could turn each pixel on and then quickly back off: a short burst of photons reaches the user’s eye and excites the photoreceptors to tell the brain what color they are seeing, but because the pixels are turned off before the next frame is displayed, the cells have time to cool down. If the pixel shows the same color the next frame, the cells excite again with the same information, and to the brain the pixel seems to have stayed the same color the whole time. But if it changes color, the photoreceptor cells can warm up to the new color immediately, without mixing it with the old color, because they’ve already had time to cool down.

This is called Low Persistence, and commercial devices are using low persistence OLED displays in order to achieve this.
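In rough numbers, the pixels end up lit for only a small slice of each frame. A minimal sketch of the duty-cycle arithmetic; the 2 ms persistence figure is an illustrative assumption, and actual panels vary:

    # Low persistence: pixels are lit briefly, then dark until the next frame.
    refresh_hz = 90
    frame_time_ms = 1000.0 / refresh_hz      # ~11.1 ms between frames
    persistence_ms = 2.0                     # assumed time the pixel stays lit

    duty_cycle = persistence_ms / frame_time_ms
    print(f"frame time: {frame_time_ms:.1f} ms")
    print(f"pixel lit for {persistence_ms} ms per frame ({duty_cycle:.0%} duty cycle)")
    # The shorter the lit window, the less the image smears across the retina
    # during a head turn, at the cost of overall brightness.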

Field of View

Another goalpost for good VR is a display which matches a user’s natural field of view. The Field of View (or FoV) is the angle from left to right (and, separately, from top to bottom) which the user can see without moving their head. Human vision has two eyes arranged side by side horizontally, leading to a larger horizontal field of view than vertical.

Typically humans have approximately 200 degrees of horizontal FoV and 130 degrees of vertical. When you break it down per eye, it becomes much more complicated (each eye is blocked in one direction by the nose, and has a greater bias downward than upward), and there is the matter of the lenses needed to warp a flat screen so that each pixel seems equidistant from the eye. But for considering how closely a VR solution generally matches the human perception model, these numbers are useful.

The Oculus Rift DK2 had a FoV of ~110 degrees, and by some accounts of those who’ve tried it, the commercial version seems about the same. For comparison, watching a movie in IMAX from the center seats should feel approximately the same.

Some attempts at wider FoVs, such as Wearality, have produced closer to 150 degrees, but instead of adding pixels to truly scale up the experience, they reduce the amount of overlap between the eyes, causing a section of the view to appear less convincingly 3D. The StarVR headset, on the other hand, doubles the resolution of the Rift/Vive to deliver ~200 degrees of horizontal FoV, but with greater resolution comes a much greater GPU requirement.
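The cost of “just add pixels” is easy to see by multiplying out the throughput. A minimal sketch with approximate panel figures (assumptions; StarVR’s exact specs may differ):

    # Rough pixel-throughput comparison: baseline headset vs. a doubled panel.
    def pixels_per_second(width, height, refresh_hz):
        return width * height * refresh_hz

    baseline = pixels_per_second(2160, 1200, 90)       # roughly Rift/Vive class
    wide_fov = pixels_per_second(2160 * 2, 1200, 90)   # horizontally doubled panel

    print(f"baseline: {baseline / 1e6:,.0f} Mpix/s")
    print(f"wide FoV: {wide_fov / 1e6:,.0f} Mpix/s "
          f"({wide_fov / baseline:.0f}x the fill work for the GPU)")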

Sensory Mismatch

Located in each of the ears, the vestibular system is the part of the human body which provides motion and gravity detection, as well as spatial awareness. The sense of touch from the skin, and the sense of the position of the body’s muscles and joints, are called the proprioceptive system. The brain takes the information from the eyes and the vestibular and proprioceptive systems, and integrates it to understand the body’s position and orientation in the world and to keep its balance when standing or sitting.

When information from one of the systems conflicts with the others, the brain can become disoriented or confused, and the body can lose its balance. These symptoms feed into VR Sickness. But the brain can also adapt to changes and ignore differences as long as they remain consistent.

For instance, if a user experiences a simulation where the world around them is moving forward while they are sitting still, the information from the eyes will conflict with the information from the vestibular system. The textures moving past the user suggest forward motion, but the signals coming from the vestibular system suggest there is no motion. Initially the brain will panic, but if the simulated speed is constant, it will adjust and accept the known difference between the two.

However, if the simulated speed is not constant (the user is accelerating), the difference between the two systems will continually change: the eyes say there is more and more speed, but the vestibular system says there is still no speed. The difference between the two keeps growing, and the brain has to continually adjust, causing noticeable disorientation.
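A toy way to see why acceleration is the problem and constant speed is not: the disagreement between what the eyes report and what the vestibular system reports only grows while the simulated speed keeps changing. A minimal sketch:

    # Toy model: visual speed vs. vestibular speed for a seated user.
    # A stationary user's vestibular system reports no self-motion;
    # the eyes report whatever speed the simulation shows.
    def print_mismatch(accel_m_s2, seconds):
        visual_speed = 0.0
        for t in range(1, seconds + 1):
            visual_speed += accel_m_s2       # integrate simulated acceleration
            vestibular_speed = 0.0           # the chair never moves
            print(f"t={t}s  visual={visual_speed:4.1f} m/s  "
                  f"mismatch={visual_speed - vestibular_speed:4.1f} m/s")

    print_mismatch(accel_m_s2=2.0, seconds=3)   # the mismatch keeps growing
    # With accel_m_s2 = 0 (constant speed), the mismatch stays fixed and the
    # brain can adapt to the constant offset.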

Even if the vestibular system does pick up changes, problems can occur if they don’t match input from the other senses. For example, if the user is rotating their head from left to right, the vestibular system will report the change in orientation. The simulation needs to detect the motion and update the scene displayed to the user with exactly the same changes, or the eyes will report a different rate of change than the vestibular system, causing a mismatch.

Or say you are facing a wall in the simulation, and you move your hand forward to touch it, but in reality nothing is there. Even just looking down and not seeing your arms and legs can be alarming. You will feel a wave of disorientation as you adjust to the difference between what your eyes are telling you and what your proprioceptive system is telling you about the position of your body.

However, I believe there is a difference in the reactions people have to mismatches from the vestibular and proprioceptive systems. When the vestibular system is alarmed, it seems to happen at a much lower, unconscious level. People feel sick and can’t easily explain why; often users will instinctively close their eyes in this case. However, when proprioceptive senses are offended (such as an arm passing through a simulated wall), the reaction seems to be at a more conscious level; reactions are less extreme, and users can understand consciously what is wrong and change their body positions to help reduce the mismatch.

Simulation Freedom

Simulation Freedom is the amount of freedom a designer has to build a particular simulation. This freedom can be limited in VR by a number of hardware specifics, such as whether the device is Wired or Wireless, whether it has full 6 Degrees of Freedom (rotational and positional tracking) or only 3 DoF, the selection of input devices available to the user, and in some cases the size and shape of the physical space the user is in.

In some ways the number of different possible scenarios the end user can bring to the simulation limits the designer’s freedom, as they have to somehow account for them: by targeting a lowest common denominator of compatibility; by limiting the number of setups that will be supported; or by attempting to support everything, which is prohibitively costly and becomes outdated when additional scenarios arise.

Consider the case of input devices. A user could have a mouse and keyboard, a flight stick, an Xbox controller, a position-tracked handheld device like the Oculus Touch, or nothing at all.

With the lowest common denominator, a designer would handle this situation by designing a game that can be played without any controller at all; but that imposes severe limitations on the design.

By limiting the number of setups, say to keyboard-and-mouse, joystick, and Xbox controller, you can now assume all users will have several buttons and directional inputs, but some will be forced to remain facing forward (keyboard-and-mouse and joystick users) in order to use their device. Or choose a different set, say Xbox controller and position-tracked handheld devices; now you can assume the user can walk around and face different directions, but some of your users will have position-tracked hands and some won’t.

Lastly, you can attempt to handle them all individually. This requires much more effort, and in the case of multiplayer simulations it means that players may have unequal amounts of control in the game, which can be detrimental to balance.
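One common way to keep that effort manageable is to map every supported device onto a small set of abstract actions and let each simulation declare which ones it truly requires. A minimal sketch; the device names and capabilities here are illustrative:

    # Illustrative input abstraction: map devices to abstract capabilities and
    # let each simulation declare what it actually requires.
    DEVICE_CAPABILITIES = {
        "keyboard_mouse": {"buttons", "2d_aim"},
        "gamepad": {"buttons", "analog_move"},
        "tracked_controller": {"buttons", "analog_move", "hand_position"},
        "none": {"gaze_select"},
    }

    def supported_devices(required_capabilities):
        """Return the devices that can drive a simulation with these needs."""
        return [name for name, caps in DEVICE_CAPABILITIES.items()
                if required_capabilities <= caps]

    # A Walkabout game that needs positional hand input:
    print(supported_devices({"hand_position"}))              # ['tracked_controller']
    # A seated shooter that only needs buttons and analog movement:
    print(supported_devices({"buttons", "analog_move"}))     # gamepad + tracked_controller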

If anything, the theme here, not unlike the rest of VR, seems to be: it’s all a giant balancing act.

Minimum Viable Product

You may hear a lot about “MVP”, or Minimum Viable Product. It’s a catchphrase that means, “What is the minimum set of features and quality we need to release a product that won’t immediately flop?”

Another catchphrase is “Don’t Poison the Well.” One of the worries about VR is that someone releases a terrible product, a lot of people get VR Sickness or decide the quality is just not worth the price, and the consumer market loses interest and collapses before it has even gotten started.

So they don’t want to release a terrible product, but at the same time they need to release something; it has to come while the market still has interest; and it has to hit the right price point.

That’s where finding the MVP comes in. The first real consumer VR products (not “Dev Kits” or “Explorer Versions”) on the market are going to be what these companies think represents the Minimum Viable Product. They aren’t the best experience possible with modern hardware; they’re the best experience that they think enough people will enjoy and that can be mass-produced at an affordable price. Keep in mind that the minimum graphics card required for a Wired product is about $350 right now, and that’s before you buy a VR headset.

Closing Notes: Form Factor

Focus on the Form Factor.

In times of changing technology, people get excited and confused. “The television will replace the radio.” “The laptop will replace the desktop.” “Robots will replace humans.” All of these exist together.

Wired and Wireless headsets are two different form factors; it’s not a choice of one or the other. Simulations where you walk around and simulations where you sit in a chair are both valid, and they have their own pros and cons that don’t cancel each other out. The technology is changing, but aspects that are inherent to a particular form factor will not change.

Headsets will become lighter, GPUs will be able to process more and more realistic scenes, and displays will increase in pixel density. But a Wired device that offloads computing onto a machine that doesn’t have to be mobile will be more powerful than a Wireless device. People will become adjusted and more tolerant to VR sickness, but simulations that reflect user motions 1:1 with reality will have less of a struggle with Sensory Mismatch. And the market may settle on a few peripheral input devices, but variations of form factor there will necessarily challenge designers as well.

In predicting what will come next to VR, the unchanging aspects of form factor are your rock in the sea of uncertainty.
