Google Glass is arguably the most exciting innovation in mobile technology since the iPhone — and not just because Google hyped it with a spectacular skydiving stunt. Certainly, the field of wearable technology has been around for decades (remember calculator watches?), but with the backing of a multibillion-dollar company and state-of-the-art wireless tech, Google looks to really take it to the next level.
There's a reason, however, that the device, which Google officially unveiled about a year ago, probably won't reach consumer hands until 2014: Creating a head-mounted display that's reliable, useful and not-ridiculous-looking is hard. Just ask Vuzix, which has been in the space since 1997.

"There are a raft of technical challenges to designing a head-mounted display," says Paul Travers, CEO of Vuzix. "Generally speaking, no company has solved all of the problems to do this really well yet. Addressing the mass market is even more difficult because users are very particular about what they wear and they have high expectations for the products they use."
Google declined to comment for this article, but it's already revealed a lot about Glass via the Project Glass website, posts on Google+ and demo events with developers and others. It's also applied for patents on Glass, the most recent of which describes some aspects of how the device will work.
Broadly, the idea is straightforward: Put an always-on connected device on your head that's capable of listening to you and giving feedback — either via sound or on a display that appears to float right in front of your eyes.
When you break it down, however, the details are elusive: How do you make a display that's both clear and doesn't interfere with your normal vision? How does it connect to the network — through a phone or on its own? If it's through a phone, how do you account for different models and wireless technologies? And is it really possible to create a head-mounted display that doesn't make the wearer look at least a bit silly?
Given the multiple variables Google needs to get right, it's taking things slow with Glass, although it's clearly already thought through most of the broad questions of how it will work. Looking at the material Google has made available so far, early user experiences and current head-mounted displays, we can begin to draw a picture of how the product will work when it's finally available for everyone to buy.
All head-mounted displays involve near-eye optics to create "virtual" displays. Our eyes are used to focusing on distant objects, so displays on headgear need to essentially "fool" them into thinking they're looking at a screen far away. The methods of doing that are fairly well known and can even create virtual 3D objects if both eyes have screens in front of them (Google Glass doesn't).
The current version of Glass — the one we've all seen — uses a prism placed just over the wearer's right eye. Within the prism are a mirror and beam splitter that render the projected image as a display. Travers sees it as an excellent first step, although there are tradeoffs.
"From our experience this design has limits around display size, focal lengths and the resultant field of view that can be attained for a given size 'prism.' For larger fields of view, the optic and the display need to grow in size, which also forces larger eye box requirements on the optics which again push the optic to get even larger. All this goes in the wrong direction for a fashion statement. But for now it seems exciting enough to get started."
Travers says holographic waveguide optics — where light is "squeezed" into a window in front of the eyes and then "released" to create the image — have the most promise, and Vuzix is pursuing the technology in earnest.
"Unlike bulky conventional optics that refract and bend light to magnify it, a holographic waveguide is a thin plastic window that you inject light into on its edge," he describes. "The waveguide can use very tiny displays, and they get moved into the temples, so it will fit nicely in conventional eyeglass frames."
Google's patent on Glass actually indicates several variations on the display. Most involve some kind of screen rendered to the user either through mirrors or directly on the lenses of the glasses. However, one method described cuts out the middleman by drawing images directly on the wearer's retina. Besides the knee-jerk fear many users might have of projecting a screen right on their eyes, there are other issues with this kind of "retina display."
"Retinal display technology has been around for a long time, and so far its biggest problem is that the eye box is so small that your eye cannot roam around the image without losing the picture," says Travers. "It is very difficult to use."
Commands and Control
For Google Glass to be at all practical, it'll have to work anywhere, which means — absent its own 3G or 4G connection (and accompanying carrier fees) — it must tether to your smartphone for an Internet connection most of the time. Indeed, Google has shown that this is how Glass works, although it's not entirely dependent on the phone — if you're on a Wi-Fi connection, it can connect to the Internet directly.
Glass will theoretically have everything onboard that it needs to function as a standalone device, including GPS, sensors and a robust processor running a custom version of Android. Glass will have the ability to capture HD video and upload it to the cloud, so a robust wireless connection matters a lot. As with today's phones, users will certainly be able to limit its data consumption when tethered.
Glass has a touchpad on the temple of the frame, and it also responds to voice commands. Users engage various commands — such as snapping a picture or searching Google for a translation — by speaking. In the world of Google promotional videos (see below), it's brilliant execution. Reality may be a different story, however.
"On practically every device from tablet to phone to PC, we still use a keyboard when even now we could just talk to the computer to write emails or get work done — so why don’t we just talk to the computer?" asks Travers. "Sometimes it is not 'natural.'"
Travers believes that gestures — using the glasses' sensors to detect where your head, eyes and hands are in real time — are one of the most promising ways to interact with head-mounted displays. At SXSW, Google showed more about how Glass will interact with gestures, although it focused mainly on head and eye movement. That's just the beginning, Travers says.
"Gesturing, or rather reaching out and interacting with virtual objects with your hand, is also going to be a viable interface in the future. But it also has some drawbacks. Typing on a desk or sliding icons left and right we think will rock. How you look while doing it, though, has potential ramifications in certain settings."
One of the concerns most often voiced about Google Glass is safety: that putting a screen directly in a person's field of view will quickly lead to people running into poles, or worse. As with heads-up displays for car windshields or augmented reality on phones, anything that takes a viewer's attention away from what's in front of him is dangerous, critics say.
Google Glass deals with this issue in a simple way — by having the display be as unobtrusive as possible, and only in one eye. The screen appears as if it were a small display hovering a few feet in front of the wearer, but it's easy to shift your focus to what's in front of you, ignoring the "screen," according to people who have worn it.

Safety concerns about distraction also arguably miss the point. The point of using Glass is to eliminate the need to look down at a smartphone to interact with incoming data — something that takes 100% of your attention off of what's in front of you. Surely looking at your surroundings with a tiny, hovering display is safer than taking your eyes off them completely.
"Use the HUD in the windshield of a car as a comparison," says Travers. "Having directions appear in front of you without having to look down at the dashboard allows you to keep your eyes on the road. The same will be true for a head-mounted display — when done correctly. Of course, anything can be abused."
When speaking to developers, Google has said apps for Glass should complement what the user is doing, not take them away from it (i.e. "Don't get in the way"). An app for The New York Times, for example, might alert the user with a headline and photo, but it's not a device for immersing yourself in an article. Apps for Path and Evernote are similarly unobtrusive.
Wearables After Tomorrow
While many ridiculed Google Glass when it was first announced (Conan O'Brien recently got in on the joke), the excitement has been building ever since. Developers rushed to reserve their $1,500 units at Google I/O 2012 (which will ship in the coming months), and the first developer event for Glass filled up extremely fast. Warby Parker, designer of hipster eyewear, is rumored to be making the final product.
When Glass finally arrives on store shelves, it will surely generate unprecedented attention to wearable technology, potentially kicking off a lot of innovation in the category. Apple and Samsung are already rumored to be developing connected wristwatches to get ahead of this trend.
Riding that wave of innovation, the head-mounted displays five years hence will be to Google Glass as the iPhone 5 is to iPhone 1. Holographic waveguides, pico projectors and ultra-minimalist designs are all possible evolutions — that is, as long as Google Glass does its job as a trailblazer, and competitors such as Vuzix, Sony and others move the needle forward.

What many competitors lack, however — and Google has — is a platform to anchor the technology. Cybernetic eyewear on its own is an intriguing novelty, but if you tie it directly into things like maps, video conferencing and a social network, then you've really got something. The idea sounds like it can't lose, except it needs the one thing Google has never quite demonstrated it can actually make: outstanding hardware.
Will Google Glass be the exception to that rule? And how do you think the final product should work and look? Share your ideas in the comments.