I went to Augmented World Expo, where I showed my augmented tapestry about times in Alaska, using wildlife augments and live informatics about the Funny River fire in the Kenai National Wildlife Refuge, where my photo is from.  I also sat down on day one and had breakfast with RA co-blogger Amber Case, which was a joy.

But that’s mostly a non sequitur and not the real meat of the story.

Since I see our cometary blog member Bruce Sterling from time to time, I want to say something in a friendly way – sometimes I kind of hate him.  Why? Because he has this damnable habit of being pretty right.  At AWE, that meant a huge shift towards wearables, which is what he talked about at SxSW: atemporality in the way technologies are looping back around to goggles like the Oculus Rift, and design fictions.  But then, I wrote in 1999 that media would go off the screen, into the hand, onto the body, and then into the world, so I’m not that surprised.

I think I’m more surprised that, as my colleague Scott Kildall once said, “It’s all (starting to) happen…”

AWE was incredible for its diversity this year: there was a major shift to big industry alongside strange and wondrous things like an augmented circus sideshow happening in the lobby. So the bell curve is broadening, and there is obvious market penetration, with Intel and Bosch setting up major exhibits as well as the usual niche giants like Metaio, Qualcomm, and Daqri.

What seemed odd (but expected) was the lack of “speculative” vendors like the plastic-bottle recycler for 3D printing or the hydrogen cell maker.  There was, in a way, a certain coherence to the field that is starting to show its maturation.

AND, the Meta One developer glasses finally came out.  There was a party on the first night at the Meta compound in the mountains (which seemed eerily like the most beautiful cult compound you’ve ever seen), and we actually got to see this stuff.  And, as a refreshing conversation with the guys at Atheer (another goggle startup much like Meta) informed me, we’re getting into beta, and things are a little behind schedule.  Not a surprise, but Meta had a curtained-off area where they demoed the works, making their tech even more mysterious.  Having tried them, I’ll just say that they are hitting major milestones, but are far from the video “A Day of Meta” on YouTube.

However, not everyone’s behind the curve.  Epson came in with a bang with the new BT-200 glasses, now with accelerometers, GPS, and a high-def camera, with which they were openly showing live see-through AR applications. Deservedly, they got this year’s Auggie Award for Best Headset for their efforts.  Also of note was the Structure 3D scanner for the iPad, which at $379 seemed like a serious bargain; dev kits were already shipping, with a first production run due in July.  The only problem for them is that their structured-light scanner uses the PrimeSense chip, recently purchased by Apple, so it’ll be interesting to see how their technology changes.  But it was amazing to see an affordable 3D scanner out and live in the field.

Something also of note is that this year’s AWE had very solid vendors and programming, and what was remarkable was that the original 1992 Boeing conception of AR for education and maintenance/assembly is developing beautifully.  Expect your mechanic to be strapping on a headset in the next five years.  While not as sexy as AR robots, 3D scanners, free-range outdoor AR gaming, and cyber-shades, this year’s industrial applications and the appearance of the IEEE Standards Group show that AR is a serious enterprise and is going to be part of the global technological toolkit.

This year, AWE was brave in embracing speculation as well as the increasingly nuts-and-bolts application of the technology, but it also had some ups and downs.  During the awards ceremony, an improv troupe did a hysterical live Glass-fed performance driven by live tweets to the players.

But there are times when an amazing concept gets ruined by bad delivery.  For the closing remarks, I was deeply disappointed by Sundance New Frontier exhibitor, rapper Yung Jake, one of the current crop of Post-Internet artists, who in his case made the first augmented rap video.  He came on stage seemingly thinking that the audience would be thrilled by an iPhone-shot video, and seemed so unprepared that he had to run his video from YouTube. This reduced him to showing a live feed from his iPhone, swinging it around while mugging into the camera and showing the augment, and instead of doing a third number, he was asked not to by AWE organizer Ori Inbar. I felt sorry for Ori, as the supposed breakout star of New Media seemed not to take the audience seriously at all.

So, another AWE has passed, and the development of the niche is turning out to be a little “lumpy”, but it’s moving along.  I expect the form factor of AR goggles to meet consumer expectations in 2-5 years, the industrial applications are becoming extremely robust, and there is a lot of very creative work being made, like BC “Heavy” Biedermann’s AR murals.  But despite this lumpiness, I think AWE showed AR making huge strides in its adolescence, quickly maturing as a larger genre technology.


Warning: Name Drop Alert…

I’m returning on the flight after five days of panels, the Create hackerspace, and the trade show floor at SXSW Interactive. As expected, this year’s SxSW was bigger than ever, but the tone of the event seemed to be one of more than a little disarray. Not in the sense that the conference was utter chaos, but the content seemed very different this year. It seemed to be about people and about identity, as in searching for it. As Bruce Sterling said in his closing speech, this year was almost as much about who wasn’t there as who was, and what wasn’t there as what was. Hopefully, I’ll unravel that in a minute.

First, don’t get me wrong; the conference was brilliantly put together. The keynotes were amazing: Chelsea Clinton, Neil deGrasse Tyson, Adam Savage, Julian Assange, and Edward Snowden (who, in my opinion, gave one of the most crisp, concise deliveries I have ever heard).

But this lineup in itself shows my point.

Bill Barker's graphic for Cyberpunk 2014: "In the Future, Everything Will Work"

Snowden and Assange (and Grumpy Cat) make sense for a cyber/interactive media conference, and the others make more or less sense, although many of you will disagree; I make this distinction deliberately. While there were a number of amazing panels that explored the potentials of the interactive noosphere, there also seemed to be a lot of “blunderbussing”, or shooting intellectual grapeshot in the hope of hitting as much as possible.

Virtual Snowden

By this, I mean that SxSW 2014 seemed to get lost in itself at times, having a trade show floor without vendors like Getty Images, Microsoft, or Dell. A good third of the floor was devoted to regional/international entrepreneurism with few distinct products, half to web services, and the rest to a mish-mosh of schools, including a weird Michigan entrepreneurial booth that had a strange form of entre-Cricket. And for God’s sake, what was Charmin Bathroom Tissue doing with a pavilion, talking about “bums”, where Leap was a year ago? Having been a featured speaker for the “WTF and Beyond” sessions, I think I have some expertise in this.

The Create pavilion in the Long Center seemed to be the gadgeteer’s heaven, my favorite being a 3D-printed hexacopter spy drone. However, the pavilion seemed similarly mish-moshy; let’s leave it at that. This year there was no Leap Motion, maybe a new set of Epson goggles, but there seemed to be no great new “leap” in technology. In this area, what seemed most interesting was a “cognitive cooking” pavilion that would take your preferences and regional styles and come up with a fusion dish, like my Turkish kofta, Granny Smith apple, and strawberry kebab. It was totally unexpected, and delicious.

This year seemed to be about people, from branding via cat videos (Braden, et al.) to Edward Snowden’s concise treatise on cryptography, which impressed me in that I believe him to be 1) very smart, and 2) a true patriot.

Ito, Suarez, Sterling and Ellis

My panel with Jon Lebkowsky on “The Cyborg Gaze” still had people at the center of the technological conversation, even if our mass consensual surv/sousveillance complex abstracts the gaze a degree or two. And this carried forth into the Friday night EFF party, where Gareth Branwyn gave a heart-rending reading from his new book Borg Like Me, and Bruce Sterling and Cory Doctorow got into a heated discussion (all the more impressive in that Doctorow had a massive case of jet lag), along with Chris Brown and Branwyn, about how things may or may not have gone wrong from cyberpunk to the Silk Road.

Cyberpunk 2014 Panel

Sterling eviscerated the purveyors of the Liberator handgun and the Silk Road as just taking the most despicable, Rule 34-laden, trollish route possible, stating that “this wasn’t the future he wrote about.” Afterwards, Sterling’s wife, Jasmina Tesanovic, sang a stirring cyberfilk version of the “Battle Hymn of the Republic,” renamed “Hacker Hymn,” encompassing the battles for freedom and privacy. In addition, the Museum of Computer Culture showed a masterful set of new projects alongside historical works like the legendary HyperCard stack “Beyond Cyberpunk,” with images from early zines like print bOING bOING, Fringeware Review, and Mondo 2000 showing on screens on the wall. My favorite was a 64K pseudo-minicomputer with programming toggle switches and a wild “infinity mirror” device made out of a waterjet cabinet. Beautiful.

Tesanovic and Ben Hayoun

Although I sound a touch vitriolic, I have left my criticisms to the first half of the article; the panel with Tesanovic and Ben Hayoun about the International Space Orchestra was fascinating. Fascinating not only for making opera about the Apollo 11 moon mission, but for Ben Hayoun’s dogged insistence on working with NASA as an artist-in-residence, which was eventually funded by the San Jose ZERO1 Biennial for the opera and the creation of a small .mp3-transmitting microsat.

Ito and Sterling

However, the panel that was the mind-blower (I digested the material so thoroughly that it now just seems like part of the furniture) was the Beyond Cyber panel, led by MIT Media Lab head (and this year’s SxSW Hall of Fame inductee) Joi Ito. The panel included Transmetropolitan author Warren Ellis, Daniel Suarez, and of course, Bruce Sterling. The conversation ranged widely across drones, robots, design fiction, and more, and was so intense that the panel ran the full hour, with the second hour dedicated to the subject given over to Q&A. This was probably the one to see out of the whole event.

The awards were, despite their prominence on NPR, unremarkable, except for the induction of Joi Ito into the SxSW Hall of Fame, a couple of decent design fiction videos, Tyson getting the “Most Influential Speaker” award, and the red-and-pink marriage equality icon winning three awards (which was also heartwarming).

Bruce Sterling

But in the end, Sterling put it best in saying that this year’s SxSW was about the ones who were virtually there (Assange/Snowden), and the ones who weren’t there at all, such as the imprisoned-till-Hell-freezes-over head of the Silk Road and the inventor of the 3D-printed Liberator handgun (both Texans, by the way). Sterling even stated that the Congressman who wrote an open letter to SxSW was perhaps as interesting as anyone there, and should have made an entrance. In a way, Sterling, despite announcing that he would spend the next couple of years experimenting with New Media art with the Near Future Laboratory rather than writing, declared that this year the noosphere was more social than ever, and that politics and people trumped any new gadget in spades.


This year, at ComiCon London 2013, Appshaker brought their new augmented reality character, S.E.A.N. (Series 1, Extraterrestrial Augmented reality Node). Driven by the LiveAvatar framework, using Unity as its 3D engine, S.E.A.N. appears as a roughly five-foot-tall green alien that interacts live, in real time, with people in the space where the S.E.A.N. environment is set up. In the video (on Vimeo), our green protagonist is seen interacting with all manner of cosplayers, from a Daft Punk-like spaceman to Darth Maul. It also shows the point of view of onlookers on the mezzanine seeing S.E.A.N. through a mobile device, as well as depicting the scene on a large mobile event projection screen.

In addition to the ComiCon demo, another shows wonderful content made for National Geographic. Participants meet various animals from the pages of the magazine, most notably a cheetah. What is also quite evident here is that Appshaker has made the next generation of environmental virtual characters through the use of AR for entertainment and promotional events.

Is S.E.A.N. the first of its kind? No. While it represents an evolutionary step in virtual characters in public fora, Appshaker’s platform represents a second generation of live virtual character in real time, and a third generation in conceptual form. Virtual interactive characters are twenty years old, and Appshaker’s LiveAvatar platform realizes what might be the most flexible solution yet.

Twenty years? Third generation? Absolutely. 1984’s Max Headroom depicted Matt Frewer as a manic virtual doppelganger of reporter Edison Carter. The TV version of Max was primarily a fiberglass prosthesis, somewhere between Tron and Klaus Nomi, green-screened against a wireframe CGI cube. To understand Max’s lineage in the live character pantheon, one has only to see the online record of the utterly insane David Letterman interview with Max/Matt: in the proper context, this form of interaction was immensely compelling.

The next major example of live virtual characters in mass media is Leo Laporte’s “Dev Null” character on the 1996 MSNBC show, The Site. Dev was a virtual barista who fielded technical questions from the audience through banter with future CNN anchor Soledad O’Brien. Behind the scenes was Laporte in a motion capture suit, giving live answers through the Dev character using the Protozoa company’s live performance-animation technology.

So, with LiveAvatar, what we see is a much more easily deployable platform (the Unity 3D engine now being an industry standard) with a much more streamlined workflow. As seen through the cases of Max Headroom and Dev Null, this is an evolutionary step, rather than revolutionary. But, with the case of AR, S.E.A.N. has the potential to be seen by anyone using the framework and app, rather than a single video feed.

Appshaker advertises their platform for promotions and events; I feel it would benefit from a partnership with the technology of holographic event producers Musion, whose technology made virtual concerts by Gorillaz, Hatsune Miku, and the posthumous Tupac Shakur possible. What would it be like to be kickin’ it with Virtual Tupac or Miku rather than S.E.A.N.? This technology could have opportunities for convergence here, although Musion has largely depended on pre-rendered 3D content. For Musion to successfully execute a LiveAvatar experience, it would have to hire a very good imitator of the character in question. Or, perhaps harder yet, one would have to develop a computer vision-driven AI system to drive the LiveAvatar framework.

As virtual characters continue to develop in live environments, Appshaker’s LiveAvatar technology has interesting potential for intimate and large-scale events. It is a full “version” update to the developing genre of live virtual characters in public media, and I look forward to seeing it at events such as SxSW Interactive or Augmented World Expo…


Social Collisions and Google Glass

by Howard Rheingold on October 28, 2013

Originally posted on Facebook.

Foreseeing, doing something about, social collisions with Google Glass:

It isn’t often that the designers, purveyors, and users of a technology that promises significant (positive and negative) social impacts have an opportunity to try to say something about how it can be used before it is launched. Google Glass is probably no more than months, certainly less than a year away, and other (more or less) always-on, personal, wearable, web-linked, voice commanded, camera-equipped, location-aware devices are sure to follow. What design affordances (for example, a clearly visible light when the device is recording audio, video, or still images) would ameliorate potential social collisions?

I remember when I started doing documentary video on the streets, learning that one does not bring any kind of camera into a bar: there are people in bars who shouldn’t be seen in bars, who shouldn’t be in bars at all, who are there with people they should not be with. There was a widely understood norm about cameras in bars.

A clearly visible recording light is a design affordance. An understanding that you don’t record in bars, locker rooms, toilets, is a norm.

What social collisions does Google Glass portend? What affordances and norms might help?


Last month I spoke at the Augmented World Expo in Santa Clara, CA, which ran June 4-5, 2013 at the Santa Clara Convention Center. I met Steve Mann, James Fung, some of their colleagues, and many other pioneers in the wearable computing industry. The conference showcased the latest tech and discussed the future of AR and how different companies were bringing data to life for the enterprise. It was the best conference I have been to all year, as it was small (around 1,000 people) and full of people building things instead of speculating about the future.


Privacy and Surveillance

Mann gave a great talk on privacy and surveillance, urging “priveillance” as a term to use.

Stephanie's Drawing of Sousveillance and Surveillance

Aaron Parecki and I got to try out Mann’s Hydraulophone, his water-based organ invention. It played well and sounded great.

Aaron Parecki plays the Hydraulophone at AWE '13

Mann brought in a large collection of wearable computers, and wore his favorite model.

Steve Mann at Augmented World Expo '13

My favorite exhibit was the 35-year history of wearable computing.



WearComp 4

Industry and Conference History

Only a few years ago, at the earliest iteration of the conference, talks focused on the future, as only a limited amount of usable technology had been built. Geoloqi has always been about freeing data from the static web and bringing it to life for the user. I’ve always considered Geoloqi tech to be “non-visual” augmented reality in that it does not require a viewfinder overlay on the world in order to access the data. Visual augmented reality is similar, but much of its potential is yet to be tapped. Many visual AR applications are related to gaming or marketing, but the real opportunities lie in augmenting the real world. Augmented reality, or “mediated reality,” is not necessarily about adding visuals to reality, but about allowing people to “see better.” Whether it is helping a ship’s captain steer better or helping one see 3D mapping data, augmented reality, via either a tablet or a heads-up display, is actually becoming a possibility.
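The core of “non-visual” AR is simply proximity-triggered information: when a user’s location enters a geofence, the right data is delivered without any camera overlay. A minimal sketch of that idea follows; the function names and points of interest are illustrative assumptions, not Geoloqi’s actual API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical points of interest: (lat, lon, trigger radius in meters, message)
POIS = [
    (45.5231, -122.6765, 150, "Welcome to downtown Portland!"),
    (37.3541, -121.9552, 100, "Santa Clara Convention Center: AWE is here."),
]

def triggered_messages(lat, lon):
    """Return the message for every geofence the user is currently inside."""
    return [msg for (plat, plon, radius, msg) in POIS
            if haversine_m(lat, lon, plat, plon) <= radius]
```

In a real system the position would come from the phone’s GPS and the trigger would push a notification, but the “augmentation” is exactly this: data bound to place, no viewfinder required.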

Terrain Modeling #awe2013

Ivan Sutherland, a founder of virtual reality, worked with my grandfather at the University of Utah. He imagined, as did others like Steve Mann and Thad Starner, a world in which data added to and surrounded a person, where data made better meaning in one’s life, and where the interface between a human and information was calm and seamless, mediated by tools that not only augmented the human spirit but helped get the right information to the right person at the right time. GIS and augmented reality systems have very similar goals! Wearable computing, mobile, and augmented reality are just another interface on which to make that data more valuable.

Bruce Sterling keynoted the conference, followed by Will Wright, creator of SimCity. Sterling outlined what would need to happen for AR as an industry to take off. Right now we’re seeing its emergence; once it is tied into city data and GIS, it can truly bloom. At Esri we can make that happen, and I believe it’s worth a much closer look. For me, this industry had been a fun hobby, as I hadn’t seen how much money was actually there yet.

Steve Mann #awe2013

This year, the industry showed signs of growth in all the right areas. It’s an industry that has been aching to work for 40 years, held back historically by limited sensors, expensive technology, battery life, heavy equipment, long set-up times, cumbersome programming languages, and an uncertain distribution model. With the increase in sensor availability, the decrease in technology prices, and the widespread availability of wireless networks and cloud-based server technology, a real market can grow and expand here.

Steve Mann plays the Hydraulophone at AWE '13


Hard Beta: The State of AR at AWE2013

by Patrick Lichty on July 7, 2013

It has been nearly a month since Amber Case, Bruce Sterling, and I attended the AWE2013 expo, and that time has given me room to reflect on the state of augmentation.  To put it briefly, I think Bruce himself said it best when he replied that this is not the dawn of Augmented Reality, but 10:45 on the morning of a tumultuous summer’s day.  Having been part of the evolution of cyborg and virtual cultures, and placing augmentation in context with those other genres, I agree with Bruce that, in my terms, AR is in a state of “hard beta”.  That is to say that AR, while solidifying into distinct forms such as wearables (Glass, Telepathy One, Meta), geolocative (Layar, CreatAR, Metaio), and fiducial/recognition (Vuforia, Daqri, and others like LayarVision and Metaio), is still looking for its killer app.

This seemed to be the call to arms that cyberadvocate Tomi Ahonen issued in his keynote speech.  In a near-Kurzweilian display of projective figures (and, in my opinion, more convincing than Kurzweil’s Singularity), Tomi spoke of one billion people using some sort of device-based AR by 2020.  Starting from an estimate of 200 million users of some sort of AR in 2013, a power-curve projection to five times that figure, slightly less than one eighth of the world’s population, by 2020 seems very reasonable.

1 BILLION users of AR by 2020
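For a sense of how aggressive (or not) that curve is, the implied compound annual growth rate can be worked out directly; the figures below are just the 200-million-to-1-billion numbers from Ahonen’s projection, not new data.

```python
def implied_cagr(start, end, years):
    """Annual compound growth rate that takes `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# 200 million AR users in 2013 growing to 1 billion by 2020
rate = implied_cagr(200e6, 1e9, 2020 - 2013)
print(f"Implied annual growth: {rate:.1%}")  # roughly 26% per year
```

A sustained growth rate in the mid-20-percent range is steep but well within what smartphone adoption itself managed, which is why the projection reads as plausible rather than Singularity-flavored.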


There are precedents for the success of AR, as Tomi put it, “The 8th Mass Medium”, through great experiments like Esquire Magazine’s 75th Anniversary issue and the now-pervasive use of QR codes.  As an aside, what is actually surprising is the questioning of the QR code as a “dead medium” (or at least passé).  For me, this is merely a marker of the maturation of a technology as it becomes less novel and passes into pervasiveness, like the barcode.  For example, the virtual world Second Life was the hot technology of 2006-7, but while Philip Rosedale stated at AWE 2013 that SL is holding steady at about a million users, its mass media luster has long worn off.  While this is certain to happen with AR as well, AR is still in its ascendancy as the technology formalizes and insinuates itself into mass culture, again echoing Sterling’s assertion of his summer morning for AR.  While it is emerging, AR as a genre did not show itself at high noon at AWE2013.


Esquire AR Issue front page.

The state of maturation-in-process, or “hard” beta, was evident to me at AWE2013 in the many discussions of still-nonexistent monetization strategies and recent repositionings of major players in the AR market, like Layar’s refocusing on print deployments.  Other cases included ViewAR’s recent mix of locative and architectural recognition, and Epson’s entry into the AR arena with their relatively inexpensive Moverio glasses as a platform for wearable development (Steve Mann’s Meta platform uses this technology).  There was even an HMD bracket for using an iPhone as a stereoscopic see-through display.  In contrast, conferences for more mature areas of imaging research, like SIGGRAPH, usually have an 80/20 or 90/10 mix of established players to startups, while at AWE2013 the startup share was much higher, perhaps as high as 60/40.  This is merely observational, but it is indicative of a genre in that late adolescence (Sterling) that is headed for maturation (Ahonen).


A key point about a genre in adolescence is that it’s still interesting, unlike mature technologies like Second Life, which are established in their community, genre, and form factor, but hardly sexy anymore.  On the AR roulette wheel, the ball’s still in play, and while applications, form factors, and design fictions abound, there is still great potential for an outlier/killer app to enter the playing field.  I’d say playfully that it might be an augmented toy like Orbotix’s Sphero, a Bluetooth-controlled robotic ball onto which you can overlay miniature golf courses, or even a virtual beaver in a shark suit (the Sharky the Beaver app).  While I might sound a little derisive, I found this unique platform extremely innovative, and I plan to use it as a platform for teaching the basics of robotics at the university level.  And my point about things being interesting is that there are devices like Sphero that integrate innovative applications of technology in unique and playful ways; in my humble opinion, it may be the Orbotixes that innovate as profoundly as the Googles.

That’s why AWE2013 was a blast.  I got to finally try out Google Glass and compare where the main players in AR, like Qualcomm, Metaio, BuildAR, Layar, and others, stand, while getting to experiment with platforms like Sphero, Meta, and others.  AR, as an adolescent medium, is still in that late beta stage where aspects of the genre are shaking out, and watching the balls drop through the market’s pachinko machine is really exciting.  I agree with Bruce that we are out on the veranda on that fine summer’s morning watching the early floats of the parade, and I like the way the wind’s tousling my hair.



Glass: First Impressions

by Patrick Lichty on June 28, 2013

This year, I attended the Augmented World Expo 2013 along with fellow RA’er Amber Case, with whom I had a wonderful time.  One of the key moments of the first day was seeing her with her Glass dev unit on, when she said, “Want to try it?” Silly question!  I popped it on, gave its functions a try, and came up with the phrase that’ll be the title of my review of AWE: “Hard Beta.” It worked wonderfully, responding when I said, “Hey, Glass! Take a picture!” or “Take a video!” Menu selection was easy, as I tilted my head and the menu rolled into place. All functioned as it should.  I have two main comments – the first about its form, and the second about how it relates to the design fictions created about it.

One Day…, Google’s initial Glass design fiction


When I put Glass on, I was happy, but a little underwhelmed.  The best thing about it is that it WORKED, and it worked the way it was sold.  It is invisible technology, there when I need it, gone when I don’t, which is wonderful.  I wonder a little about voice activation in a noisy room, but one can always tap the button to activate it in that sort of situation.  The use of the accelerometer for menu selection was also a nice touch. It fit wonderfully over my glasses (a big plus), but it also interfered a bit with my upper-right-quadrant peripheral vision, which matters because my right eye is slightly impaired; more about that later.  I’ll tell you the root of my slight disappointment with my first Glass experience, which comes from two sources.

Glass: How it feels video
One of Bruce (Sterling)’s current passions is Design Fiction, or speculative tales of future design, like this year’s Auggie-winning video, Sight, about a future AR-based contact lens technology.  Design Fiction is the stock in trade of sites like Kickstarter, which often show you a vision of the product rather than the actual prototype, which is what you’re more likely to get. The disconnect between design fiction and design reality is that the developers want us to be the connecting link between their vision and the coming reality.  This is not a bad thing, actually – consider the route to the iPad from the PADD device on Star Trek: The Next Generation, or the flip phone from Captain Kirk’s communicator.  Design Fiction is not a new phenomenon.  Where I have some ambivalence is in wanting the Kickstarter video to be reality, and not a call to arms for me to build the device from their platform.

Sight –
Fictional video about future AR


But consider the original Google Glass video, with its bright, glossy graphics, on-the-fly directions, and video chats, all in full field-of-view glory.  I also could not flip the display to favor my left (stronger) eye.  Don’t get me wrong, but seeing the postage-stamp HUD with my bad eye was a little bit of a downer. However, I’ll give Google the benefit of the doubt because it’s a pre-release device.  Another device with a little of this “beta disjuncture” is Steve Mann’s Meta glass video on the Meta Kickstarter page, but I find Mann et al. a little more forthcoming for posting real prototype tests of stereopsis, depth perception, and object manipulation, and except for the much bulkier form factor, I think Mann and crew deserve the mantle of “Fifth Generation” glass compared to Google’s “First Generation” (as defined in Mann’s amazing plenary talk).

Meta promo video

Now, I also want to state that my criticisms are akin to splitting hairs between lobster and Kobe beef.  It’s amazing we’re here at all, and the technologies now within our reach are nothing less than breathtaking game-changers.  My criticisms are more along the lines of asking what could be better in the prototypes, which is a valid part of the refinement process.  I’m really excited about the potential of Glass’s form factor getting even slimmer and going to full field of vision.  I feel like I did when I played with the video editing suite built on the real technology behind the Minority Report gestural interface.  This stuff is amazing, but Glass isn’t the only game in town, although some of the competitors are still in fictive stages.  By this, I mean models by Vuzix, Meta, and Telepathy.



One of Google’s key competitors in heads-up technology is Vuzix, one of the oldest companies in the head-mounted display business, surviving where units like the Virtual I/O and the Magnavox Scuba failed.  Their current monocle, the M100, is a full-fledged untethered Android device with an accompanying instrument cluster.  Whereas Glass uses its own apps, the M100 runs any Android app (200K+ to date), and the monocle flips for use in either eye.  I found the M100 a little more intrusive in its use of a spring-loaded headband.  It is also another device entirely, as it is a monocle-based Android unit and does not include a voice interface; comparing Glass to the M100 is somewhat apples and oranges.  I’m actually more excited about Vuzix’s B2500 sunglass-like binocular smart glasses (projected at a sub-$1000 price point), due sometime in 2014.

Vuzix M-100 Hands-on (from Cnet)


In between Glass and the M100 was a really sexy design prototype from Telepathy One.  I got to try on the Telepathy One at a developer party, and I was in love with the form factor, sleek and trim, although their evangelist reminded me that the actual product might have thicker earpods for the electronics.  Like the M100, it isn’t exactly a Glass-killer (actually, none of them are), as the Telepathy One is a Linux box with a slim retinal projector, camera, and connected CPU.  What I wore was a design mockup, not even a working prototype, although one apparently exists.  My only issue with the Telepathy One was that it clamped to my head with two earpods, which I hope have some feed-through from the external world for a hybrid audio experience.  I was excited by Telepathy’s Alpha Revisionist fiction, and was hungry for more.  But the unit I felt was the most exciting was the Meta glasses.

Telepathy One demo (from CNET)

Meta-View’s glasses were the buzz of the AWE2013 expo. With AR pioneer Steve Mann on board and a partnership with Epson, Meta seems to be an initial competitor for a Glass-like vision of the world, judging by their “Design Fiction” promotional videos. But at this point, Glass and Meta diverge sharply: Meta feels much more like a computer science research platform, versus Glass as a more basic consumer appliance. Although Meta does not currently have the voice capability and finesse of Glass, its initial experiments with occlusion and real-world object overlay are extremely exciting, leaving me almost ready to sell a guitar to buy a development kit for $750 (half the price of Glass). However, it has the form factor of a science lab research platform, consisting of a fairly big set of Epson glasses and a clipped-on camera/sensor array. The exciting thing about Meta for me is the stereoscopic ability and center-of-field display. These features could make the Meta a game-changer in itself with a slimmer form factor (which is planned for their Gen 3 glasses), although its intent is aimed much more at pervasive AR than Glass is.


Having experienced so many platforms in one weekend was a lot to process, but between headset AR and device-contextual AR (pads and phones), one can see the juggernaut on schedule for widespread consumer adoption in two to three years. The most interesting thing about experiencing the various headset devices is the distinct intention each embodies: Glass is clearly an extension of the company’s cloud technology, while all the others are more stand-alone, varying from computational headsets to full in-view augmentation of the surroundings. While I think Google has by far the most polished solution in the AR race, its competitive siblings suggest compelling alternate paradigms for the ‘medium,’ and it’s my belief that the time between AWE 2013 and AWE 2014 will be crucial for the genre.


And thanks, Amber, for letting me try your Glass. It was a brain-bender.


“One of the definitions of sanity is the ability to tell real from unreal. Soon we’ll need a new definition.”

Alvin Toffler

The reality of our technophile civilization is, I believe, presently beyond dispute; even the most ardent Luddite will find it hard to deny the almost invisible casualness with which she uses a smartphone.
But even this all-pervading ‘smartphonism’ is only a hint, or perhaps an insinuation, of where the cyborgization process is leading us as a species, a culture, and a civilization.

The two main concepts which seem to provide some kind of indication as to where we are headed are Situational Awareness (SA) (1) and the Adjacent Possible (AP) (2).
For those not yet fully familiar with situational awareness, it may be wise, and maybe necessary, to revisit the evolution of this prevalent field of inquiry into human behavior, especially as it pertains to decision making in rapidly evolving info flows.

Situational awareness, defined by Endsley as “the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future,” is probably the most salient concept at present, if for nothing else than that it represents the conceptualization of a person’s ‘feeling’ of one’s infocology, the absorption of said information, and the correlated response.

SA, as it is known, is however much more important than first appearances might suggest. The reason is simple enough: given that most of the information we receive from our surroundings enters our brains via our senses, the recent advances in sense extensions (soon to come to a retail store near you) may paradigmatically revolutionize what we deem ‘sense perception’ and, by extension, dramatically change what we call ‘comprehension.’

When the prime paradigm of the future is ‘everything is programmable,’ sooner rather than later a combination of augmented reality technologies, programmable genetics, and synthetic biology will permit our bodies to extend their senses into domains previously inaccessible.

“We see with our brains, not with our eyes. When a blind man uses a cane he sweeps it back and forth, and has only one point, the tip, feeding him information through the skin receptors in the hand. Yet this sweeping allows him to sort out where the doorjamb is, or the chair, or distinguish a foot when he hits it, because it will give a little. Then he uses this information to guide himself to the chair to sit down. Though his hand sensors are where he gets the information and where the cane “interfaces” with him, what he perceives is not the cane’s pressure on his hand but the layout of the room: chairs, walls, feet, the three-dimensional space. The receptor surface in the hand becomes merely a relay for information, a data port. “

Prof. Paul Bach-y-Rita (3)

(image of Electronic Sensors Printed Directly on the Skin- MIT)

Though the model of SA was originally developed in the aviation industry and its military applications, it is my view that a sizeable part of the model can and should be applied to the hyperconnected mind of the present-day human, since the same kinds of parameters apply even if the field of application is totally different, namely the infocology of the hyperconnected world we live in.
Moreover, with the advent of the Internet of Things (IoT) and the ensuing complexity, the kind of conceptualizing that until recently sufficed for ‘sense-making’ of the state of affairs of the world is no longer satisfactory.

“In our houses, cars, and factories, we’re surrounded by tiny, intelligent devices that capture data about how we live and what we do. Now they are beginning to talk to one another. Soon we’ll be able to choreograph them to respond to our needs, solve our problems, even save our lives.”


Robotics, Cyborgs, Augmented Reality, Synthetic Biology, Artificial Intelligence, and human enhancement technologies are only some of the fields changing the ‘situation’ for which we need to construct a new model of reality.

The world to which we were accustomed, and through which we ‘made sense’ and constructed awareness, is no more.

We may not realize the immediate implications of these technologies, but make no mistake: as soon as sense extensions become a widespread phenomenon (and we believe that is soon), our situational awareness will change accordingly, and the theory of mind we each construct and carry will be altered irrevocably.

(CNN) — We’re in the midst of a bionic revolution, yet most of us don’t know it.

Sense extension, sense enhancement

Seeing through your skin

Take, for example, the possibility of seeing through the skin. According to Prof. Leonid Yaroslavsky of Tel Aviv University:

“Some people have claimed that they possess the ability to see with their skin,” says Prof. Yaroslavsky. Though biologists usually dismiss the possibility, there is probably a reasonable scientific explanation for “skin vision.” Once understood, he believes, skin vision could lead to new therapies for helping the blind regain sight and even read.

Skin vision is not uncommon in nature. Plants orient themselves to light, and some animals — such as pit vipers, who use infrared vision, and reptiles, who possess skin sensors — can “see” without the use of eyes. Skin vision in humans is likely a natural atavistic ability involving light-sensitive cells in our skin connected to neuro-machinery in the body and in the brain, explains Prof. Yaroslavsky. (Reuters)

The mind’s computational economy is continuously bombarded by the impressions of the senses, from which it extracts the information necessary to compute action, behavior, and decision-making. However, with the advent of enriched sensorial technologies, the amount of information coming from our environments changes in both quantity and quality.

This particular aspect is maybe the most important when describing the evolution of the mind, its computational economy, and the resulting behavior.
The complexity of the system composed of human senses, sense-extension technologies, and the world increases exponentially, changing the ‘state of affairs’ of ‘sense-making’ and our situational awareness.
To continue the example above: if we start seeing with our skin, the amount and quality of information entering our internal informational economy will create a dramatic shift not only in perception but also in our basic worldviews.

Furthermore, a recent article in Science details advances in the fields of optogenetics and optoelectronics, whose combination in injectable form promises great advances in biomedical technologies:

In neuroscience generally, and in optogenetics in particular, the ability to insert light sources, detectors, sensors, and other components into precise locations of the deep brain yields versatile and important capabilities. Here, we introduce an injectable class of cellular-scale optoelectronics that offers such features, with examples of unmatched operational modes in optogenetics, including completely wireless and programmed complex behavioral control over freely moving animals. The ability of these ultrathin, mechanically compliant, biocompatible devices to afford minimally invasive operation in the soft tissues of the mammalian brain foreshadow applications in other organ systems, with potential for broad utility in biomedical science and engineering.

ScienceMag (5)

Technological telepathy is not science fiction but science fact. Recent advances in electronically connecting rat brains via brain-machine interfaces show that connectivity between brains could smooth the progress of treatment and transfigure computation. (6)

“It’s using the technology to provide something extra. It’s enhancing. It’s upgrading.”

Kevin Warwick- Engineering extra senses at Nova

There are by now an increasing number of projects promising enhancements to our bodies that until recently would have appeared completely beyond our reach, and yet here they are. Integrating circuitry into our flesh is a revolution of the senses; bionically enhancing our bodies is a disruptive and radical transformation of our fields of perception. These in turn will transform our situational awareness and create, ipso facto, a different kind of human: the cybernetically enhanced human.

The CyberHuman will be able to receive information from his surroundings to an extent and of a quality we will find hard to accommodate. Jarringly up-to-the-minute situational awareness will require a novel fashion of ‘conceptualization’ through which the assimilation of insights born of extended senses can become practical and useful.

The possible and potential applications of these technologies are immensely promising, whether on a personal level, the community level or indeed as a stepping stone towards a hyperconnected humanity, and a possible global brain, composed of humans and machines.

The point, however, of all this is that it is changing us in unpredictable ways.

It is my view that the cybernetically enhanced human is potentially a better kind of human: stronger, healthier, more intelligent, and more aware. The factuality of our lives is that we exist as intensely linked, hyperconnected interactions that fuse our bodies and situations into a composite mesh of mutual influence, giving rise to an extended situational awareness in hyperconnectivity.

Infocologies backed up by extended senses will provide a novel fashion of perception of the world and a new description to the old fashion of being in the world, with the world as the world.

Cyber Jouissance

Leave it to the French to have a word, no, strike that, a kind of word, that permits a description, so wide and so far reaching, that you wish to have it ready in your linguistic arsenal.

Especially when writing about the future. Particularly when the subject matter is human enhancement and brain-machine interfaces, and more intensely yet when the sub-subject of this article is pleasure and friendship in hyperconnectivity, creating advanced infocologies where thriving, both personal and societal, is assured. (Which is a paradigmatic subtext of the Polytopia Project.)

A few days ago I had the pleasure of spending most of the day with two exceptional humans, whose intelligence, acute insights, and widespread knowledge of the state of affairs of hyperconnectivity and of our technologically enhanced culture were astonishing.

Given that both of them are architects, a subject about which, to my embarrassment, I know very little, the structure of our interactions was very fluid and, dare I say it, like a de-pixelization and re-pixelization of the great future that awaits us all.

I could say that what we did was implicative; I could write that a cross-fertilization took place; I could also say that, in a mysterious fashion, hyperconnectivity brings together minds of the same inclination.
I am certain that Carl Jung would have had a field day were he to catalogue the many instances of shared synchronicity, of a co-evolving vision, that swiftly transformed into a seed, a potentiality for a future project.

As you obviously can read in these lines, I took great joy from this encounter, not least because of the confidence it re-instilled in my view of a positive and exciting future for humanity.

Our main subject was the future of humanity, the technological implications and most importantly perhaps, the realization that the adjacent possible is indeed within our grasp.

The most interesting aspect of the adjacent possible we dealt with was the future of education using embedded prosthetics of cognitive enhancement.

More importantly, perhaps, we decided to initiate a project of research and vision clarification concerning sense enhancements and their potential implications for human affairs.

We will soon be starting a new blog in which we will strive toward a more comprehensive, didactically inclined presentation, since education is key.

In the next installment of this new project I will aim to provide the roadmap that Andrea (a computational designer and Knowmad), Alessio (a biodigital architecture researcher), and I are in the process of designing.

Stay tuned.

“Education is our passport to the future, for tomorrow belongs to the people who prepare for it today.” Malcolm X

Notes and biblio:
1. Situational awareness (wiki)
1.1 For an extended read of multiple definitions of SA see: Beringer, D. B., and Hancock, P. A. (1989). Exploring situational awareness: A review and the effects of stress on rectilinear normalisation. In Proceedings of the Fifth International Symposium on Aviation Psychology(Volume 2, pp. 646-651). Columbus: Ohio State University. (Pdf)
2. The Adjacent Possible: Stuart Kauffman
3. Extracted from ‘The Brain That Changes Itself’ by Norman Doidge (Penguin), via the Telegraph, via Good
4. Image via: Controlling the brain, with lasers!
5. Injectable, Cellular-Scale Optoelectronics with Applications for Wireless Optogenetics
6. Pais-Vieira, M., et al. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, DOI: 10.1038/srep01319

Further reading:
Brain-Computer Interface Technologies in the Coming Decades (PDF)
Intel Capital Fund to Accelerate Human-Like Senses on Computing Devices

- (cross-posted at Space Collective: Cyber Jouissance, the future of cybernetically enhanced senses )


Trying Google Glass

by Amber Case on April 27, 2013


A month ago I met a friend of a friend who was testing Glass for Google. He let me try it on in the back room of a quiet pub. Aaron Parecki and I both experimented with its many features.

The features of Glass are not “consumptive”; that is, they don’t pull you away from reality. Rather, I’d call Glass’s features “active”. Think of every time you’d like to capture a moment, get driving directions, or check the time. Current technology forces one to take a phone out of a pocket to perform a task, whereas with Glass it’s right there. This is not a media device for sitting back and having information delivered to you. It’s a device that allows you to act quickly instead of pausing to grab your device from your pocket.

Glass is a piece of Calm Technology

Glass is, by default, off. Like Mark Weiser’s words on Calm Technology, the tech is “there when you need it” and “not when you don’t”. This makes Glass a perfect example of tech that gets out of the way and lets you live your life, but springs to life when you need to access it.

Audio and Touch Input

The interface has two input types, audio and touch. You nod your head to turn the display on, then you can say “OK Glass, search for x,” or simply tap the side of your Glass to scroll through the menu. The real world is noisy, so having two input types is important. I suspect that Glass may have a difficult time recognizing your speech if you have a heavy accent.
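As a purely illustrative model (not Glass’s actual software; every name, event, and menu item here is hypothetical), the wake-then-command flow described above might be sketched as a tiny state machine:

```python
# Hypothetical sketch of Glass-style dual input: the device sleeps until a
# head nod wakes it, then accepts either a voice command or a tap that
# scrolls the menu. Illustrative only, not Glass's real API.

class HeadsUpInput:
    MENU = ["take a picture", "record a video", "get directions", "search"]

    def __init__(self):
        self.awake = False
        self.cursor = 0

    def handle(self, event, payload=None):
        if event == "nod":                 # a head nod turns the display on
            self.awake = True
            return "display on"
        if not self.awake:
            return "asleep"                # everything else is ignored while off
        if event == "tap":                 # a tap scrolls the menu
            self.cursor = (self.cursor + 1) % len(self.MENU)
            return self.MENU[self.cursor]
        if event == "voice":               # e.g. "OK Glass, search for squirrels"
            self.awake = False             # act, then return to the calm state
            return f"running: {payload}"
        return "ignored"

hud = HeadsUpInput()
print(hud.handle("tap"))                            # asleep
print(hud.handle("nod"))                            # display on
print(hud.handle("tap"))                            # record a video
print(hud.handle("voice", "search for squirrels"))  # running: search for squirrels
```

The point of the sketch is the “calm” default: the device does nothing until explicitly woken, which is exactly the Weiser-style behavior described above.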

Driving and Walking Directions

This feature presents directions in a calm way that keeps you attentive to the road. Transit and biking directions were not implemented when I tried Glass, but one can imagine how helpful both could be. I used to sketch out a map and tape it to the handlebars of my bike. Being able to have an ambient understanding of where one is and where one needs to go next will be very helpful. I use the word “ambient” because it truly is ambient: it is not obscuring your vision or taking you away from reality; it is adding to it.


Video

Video still had some bugs when I tried Glass, but it was a very pleasant experience to be able to quickly record something. This is the feature I think people will use least with Glass. It is, ironically, the feature Glass critics are most antagonistic towards. Recording video all day from one’s Glass makes no sense. Recording special moments does. Recording significant events such as the Boston Marathon bombing makes even more sense, especially if it helps people gather evidence about who an attacker is. Recording all the time will quickly wear out a Glass and, worse, will require a lot of editing after the fact. The Memento Lifelogger is a much better bet for all-day recording, as it clusters photos taken at frequent intervals into “events”, making it easier to search through and find the information you’d like to gather.
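The “events” idea is easy to picture. Here is a sketch of generic time-gap clustering over photo timestamps — my own illustration of the general approach, not Memento’s actual algorithm, and the 30-minute threshold is an arbitrary assumption:

```python
# Group timestamped photos into "events": a new event starts whenever the
# gap since the previous photo exceeds a threshold. Illustrative only.

def cluster_events(timestamps, gap_seconds=1800):
    """timestamps: sorted epoch seconds -> list of events (lists of times)."""
    events = []
    for t in timestamps:
        if events and t - events[-1][-1] <= gap_seconds:
            events[-1].append(t)        # close in time: same event
        else:
            events.append([t])          # long gap: start a new event
    return events

shots = [0, 60, 120, 7200, 7260, 20000]   # seconds, e.g. from midnight
events = cluster_events(shots)
print(len(events))                         # 3
print(events[0])                           # [0, 60, 120]
```

Searching three events is far quicker than scrubbing through thousands of individual frames, which is the whole appeal of the Lifelogger approach.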


Photos

Being able to take a quick photo was wonderful. It’s not as seamless as critics might think. As with every feature of Glass, one must wake the display and either verbally ask Glass to take a picture or tap the side of Glass to capture the image. An external observer can easily see that a Glass is on, and just as one can tell someone is on a cellphone by the way the phone is held to the head, one can see that a Glass wearer is about to take a picture. Glass is not like a Bluetooth earpiece. There are significant signals present for one to see whether a Glass wearer is using the device. I think Glass critics fear that Glass users will persistently record and take photos and no one will be able to tell whether Glass is on or not. Rest assured, most Glass users will likely use their devices for mundane everyday tasks like wayfinding and reading text messages. Critics’ fear of Glass is akin to a person fearing that what they post on Blogger will be read by the entire Internet instead of by two of their friends and a random user coming in from search.


Search

Glass provides a very well-designed and easy way to search by voice. Google results come up in a minimal format that’s easy to read on the tiny display. There’s an auto-summary feature that automatically summarizes the information you’ve searched for. I tried the phrase “OK Glass, search for squirrels,” and Glass gave me a summary of what squirrels are, along with images. It reminded me of a smarter, quicker version of Qwiki, a knowledge-summary product that received quite a bit of attention in 2011 when it was first demoed at a startup conference in SF.


Google Glass is truly the culmination of work started by pioneers of the wearable computing movement. Thad Starner, a contemporary of Steve Mann in MIT’s wearable computing group, is working directly on the project with Google, meaning the Glass project has at least three decades of knowledge behind it. There have been a lot of HUDs that weren’t correctly built or prototyped for everyday use, and Thad’s everyday wear lent a lot of insight to this product.

I’ll have a longer report on Glass after my time at the Google I/O conference this year. What are your thoughts on the device? Have you tried it yet? Would you wear it daily?


Embodying computation and Borging the Interface

by Patrick Lichty on April 24, 2013

Perhaps my title is a little hyperbolic, but there seem to be significant developments happening in human-computer interaction. I see this all over Kickstarter, but after going to SxSW, it became evident. Gestural, embodied, and ‘borged’ computing all seem poised to emerge in force over the next couple of years. This makes perfect sense to me, as I remember going to the Stanford University archives on the history of computation and reading Douglas Engelbart’s papers (remember, he did the mouse and the ‘Mother of All Demos’?). In his notes, he chronicled visiting IBM’s R&D labs in 1958, where the mainframe division was trying to avoid the keyboard/display paradigm — remember, Engelbart would invent the mouse four years later, so there was no keyboard/mouse/display yet. Using mainframes with 32 K of RAM, they wanted speech recognition and synthesis to communicate with the computer.

The future of embodied computation?

55 years later, we’re just starting to crack that nut.

But other interface paradigms have been tried, with more or less success. These range from Ivan Sutherland’s Sketchpad light pen to Lanier’s goggles-and-glove VR interface (can we say that the goggles are reemerging with the oddly anachronistic Oculus Rift mask?). But as Terence McKenna said in the radio program Virtual Paradise in the early ’90s, we need ways to communicate via computers as the objective manipulators we are, rather than as keyboard cowboys. The “VR fantasy,” as he put it, would be to step into a space where we could communicate directly through symbolic exchange in virtual space, which reminds me of Lanier’s fascination with South Pacific cuttlefish, who communicate by manipulating their pigmentation.

Is the key Haptics and Gesture?
There are three technologies that, while operating through very different methodologies, point toward the necessity of gesture in computation. The first, of course, is the Kinect, which, once hacked, has been one of the most revolutionary technologies for DIY 3D scanning, motion capture, and gestural computing. Before, programming environments like MAX/MSP and Isadora dealt with camera blob tracking and the like. Now we have self-contained devices that offload tracking computation and give a turnkey paradigm for gesture tracking.

Its little sibling, the Leap Motion, which looks a lot like a small dongle with IR emitters in it, tracks fingertip position and finger orientation with greater accuracy than the Kinect. What intrigues me about the Leap is that, as opposed to the large gestures that technologies like the Wii or the Kinect demand, we are presented with commands that could be as subtle as a flutter of the fingers above the keyboard, a fist, or a flip of the hand. Intuitive, gestural computing finally makes sense here. I like what I’ve been doing with my developer’s kit a lot.
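The “subtle commands” idea can be sketched without any SDK at all. Given fingertip and palm positions of the kind a Leap-class sensor reports, a crude classifier might distinguish a fist from an open hand by how far the tips sit from the palm. The coordinates and the 60 mm threshold below are my own hypothetical numbers, not values from the Leap API:

```python
# Crude gesture classification from 3D fingertip positions relative to the
# palm: an open hand has extended tips, a fist keeps them close in.
import math

def classify_hand(palm, fingertips, extend_mm=60):
    """palm: (x, y, z) in mm; fingertips: list of (x, y, z).
    Returns 'open', 'fist', or 'partial'."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    # Count how many fingertips are farther from the palm than the threshold.
    extended = sum(1 for tip in fingertips if dist(palm, tip) > extend_mm)
    if extended >= 4:
        return "open"
    if extended <= 1:
        return "fist"
    return "partial"

palm = (0.0, 150.0, 0.0)
open_hand = [(i * 20 - 40, 150.0, -80.0) for i in range(5)]   # tips 80+ mm out
fist = [(i * 10 - 20, 150.0, -20.0) for i in range(5)]        # tips 20-30 mm in
print(classify_hand(palm, open_hand))   # open
print(classify_hand(palm, fist))        # fist
```

A real pipeline would smooth over several frames and track orientation as well, but even this toy shows how little motion is needed to carry a command, compared to the whole-arm waving the Kinect expects.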

The Mother of all Demos for Haptics/Embodied Computing?
But what seems closest to McKenna’s “VR fantasy” for gestural/haptic computing is the work being done at the MIT Tangible Media Lab and how that technology found its way into the movie Minority Report. In his 2009 TED talk, John Underkoffler describes how the lab’s work was translated to the screen through the Tamper system, which uses motion-capture cameras tracking markered gloves to control multi-station, sensate media across any number of surfaces in a true 3D infospace, not just a recreation of a room (a la Second Life). Using mere gestures of the glove, Underkoffler is able to alter search criteria, selectively edit media, and navigate 3D interfaces similar to those in the movie.

Is it any wonder that new laptops are beginning to ship with multitouch screens? Could we project that Leaps could take over for the trackpad? Most definitely. In my media theory classes, I state flatly that the reason we use language is directly related to the fact that we are gesturing object manipulators with opposable thumbs. It seems unshocking to me that computational culture has reached this point only now, for two reasons: first, we tend to let go of familiar interfaces slowly; and second, Moore’s Law hadn’t shrunk the necessary technology enough to keep the boxes from being the size of a refrigerator (and the price of an old Silicon Graphics computer). We are animals who build objectively, even in terms of language, and it makes sense that our information devices eventually reflect our cognitive ergonomics of objective construction and gesture. I love the idea of a computer reading body gesture.

“Borged” interfacing – the question of Glass-like infodevices.
In the beginning of this text, I mentioned Doug Engelbart going to IBM R&D in 1958 to talk about bypassing the keyboard and screen in favor of natural, transparent computation. Vuzix, Google, and a host of other manufacturers are hustling to get a transparent, augmented headset solution out as soon as possible. Steve Mann pioneered the genre long ago, and was even attacked in France in 2012 because of his appearance with his devices. In some ways, I feel the ‘borg-glass’ interface might be the solution to IBM’s conundrum, but until I see some sort of sensor on the body, I find devices like Glass still too far down the ‘brain in a vat’ road; they do make us mobile, though, which is intriguing. I’d like to hear more of Amber Case’s ideas on the appropriate use of these devices.

Ever since I visited Sundance New Frontier in 2009 and played with the Tamper system, I’ve felt that a sea change was coming, one that finally recognized the body, listened to it, or even sought to merge with it. Kinects, Leaps, Tamper, Glass: these are all part of the move of computation from the desktop to the body that I wrote about in 1999 (Towards a Culture of Ubiquity), before our entire environments become responsive. Where we are with haptic interfaces represents the coming of a fundamental shift in which computers are starting to become more like us, or maybe our interfaces will allow us to be more like computers.