A month ago I met a friend of a friend who was testing Glass for Google. He let me try it on in the back room of a quiet pub, and Aaron Parecki and I both experimented with its many features.
The features of Glass are not “consumptive” – they don’t pull you away from reality. Rather, I’d call Glass’s features “active”. Think of every time you’d like to capture a moment, get driving directions, or check the time. Current technology forces you to take your phone out of your pocket to perform the task, whereas with Glass it’s right there. This is not a media device for sitting back and having information fed to you. It’s a device that allows you to act quickly instead of pausing to grab your phone from your pocket.
Glass is a piece of Calm Technology
Glass is, by default, off. In keeping with Mark Weiser’s words on Calm Technology, the tech is “there when you need it” and “not when you don’t”. This makes Glass a perfect example of technology that gets out of the way and lets you live your life, yet springs to life when you need to access it.
Audio and Touch Input
The interface has two input types: audio and touch. You nod your head to turn the display on; then you can say “Ok Glass, search for x”, or simply tap the side of your Glass to scroll through the menu. Having two input types is important, because the real world is noisy. I suspect that Glass may have a difficult time recognizing your speech if you have a heavy accent.
Driving and Walking Directions
This feature presents directions in a calm way that keeps you attentive to the road. Transit and biking directions were not implemented when I tried Glass, but one can imagine how helpful both could be. I used to sketch out a map and tape it to the handlebars of my bike. Being able to have an ambient understanding of where you are and where you need to go next will be very helpful. I use the word “ambient” because it truly is ambient: it does not obscure your vision or take you away from reality – it adds to it.
Video still had some bugs when I tried Glass, but it was a very pleasant experience to be able to quickly record something. This is the feature I think people will use least with Glass. It is, ironically, the feature Glass critics are most antagonistic towards. Recording video all day from one’s Glass makes no sense. Recording special moments does. Recording significant events such as the Boston Marathon Bombing makes even more sense, especially if it helps people gather evidence about an attacker. Recording all the time will quickly wear out a Glass, and worse, will require a lot of editing after the fact. The Memento Lifelogger is a much better bet for all-day recording, as it clusters photos taken at frequent intervals into “events”, making it easier to search through and find the information you’d like to gather.
Being able to take a quick photo was wonderful. It’s not as seamless as critics might think. As with all of Glass’s features, one must wake the display and either verbally ask Glass to take a picture or tap the side of Glass to capture the image. An external observer can easily see that a Glass is on, and just as one can tell someone is on a cellphone by the way the phone is held to the head, one can see that a Glass wearer is about to take a picture. Glass is not like a Bluetooth earpiece. There are significant signals that let an observer see when a Glass wearer is using the device. I think Glass critics fear that Glass users will persistently record and take photos and no one will be able to tell whether Glass is on or not. Rest assured, most Glass users will likely be using their devices for mundane everyday tasks like wayfinding and reading text messages. Critics’ fear of Glass is akin to a person fearing that what they post on Blogger will be read by the entire Internet instead of by two of their friends and a random user coming in from search.
Glass provides a very well-designed and easy way to search by voice. Google results come up in a minimal format that’s easy to read on the tiny display. There’s an auto-summary feature that condenses the information you’ve searched for. I tried the phrase “Ok Glass, search for squirrels”, and Glass gave me a summary of what squirrels are, along with images. It reminded me of a smarter, quicker version of Qwiki, a knowledge-summary product that received quite a bit of attention in 2011 when it was first demoed at a startup conference in San Francisco.
Google Glass is truly the culmination of work started by the pioneers of the wearable computing movement. Thad Starner, a grad student of Steve Mann, is working directly on the project at Google, meaning the Glass project has at least three decades of knowledge behind it. There have been many HUDs that weren’t built or prototyped for everyday use, and Thad’s years of everyday wear lent a lot of insight to this product.
I’ll have a longer report on Glass after my time at the Google I/O conference this year. What are your thoughts on the device? Have you tried it yet? Would you wear it daily?