Avatars and Theory

by Alan Sondheim on March 19, 2013


Guest post by Alan Sondheim.

I started working with avatars in text-based applications such as newsgroups, email lists, MOOs, LPMUDs, and IRC. Below is a list of what I perceived as their common characteristics – and the relationship of those characteristics to everyday, i.e. non-CMC, life. Current notes are in brackets.


1 system resonances – what entrances and exits available – sendmail for example (from telnet 25 to configuration files), doctor for another [at this point text-based avatars were attached to applications that were not avatar-based. on the other hand the Eliza program in emacs allowed for the development of personality and dialog, and might be one of the first online avatar environments.]

2 explorations of self and fragmentations – discomforts, tremblings as totality is problematized [this tends towards issues of abjection, wounding, frisson, arousal, and death within the virtual - which might cynically be seen as nothing more than a rearrangement of digital bits.]

3 psychotic emanations – selves generating worlds, inability to return or manipulate one in relation to another [the worlds such as IRC were widely disparate and appeared autonomous; to carry Jennifer from one to another required reassertions. the same is true today, but mixed-reality work tends to blur all of this - in other wor(l)ds the environments exist in potential wells that allow tunneling.]

4 perturbations within systems – IRC or alt.jen-coolest for examples [it was, and still is, possible to perturb systems, to work at the edges of the game space, to hack and infiltrate - annihilating a performance platform at the end of a performance in Second Life, or having human performers work at the edges of a mocap space are two examples.]

5 theoretical turns – Jennifer’s ‘panties on the ground’ – desire in relation to metaphysical system building [sexual-theoretical turns, as both male and female avatars operated within fetishization and abjection, two trends that have become commonplace in virtual sexuality. what happens when an avatar is 'in tatters,' falling apart, collapsing?]

6 problematics of author and authoring – ‘deaths of authors’ [like the uncanny between real and virtual worlds, there is an uncanny between avatar creator/controller/human performer and avatar; through an analysis of projection and introjection, avatar and (presumably) human become inextricably entangled.]

7 multiculturalisms (Nikuko), sexualities (Julu), Alan and the rhetoric of innocence [multiculturalisms extend to virtual cultures and their ethnographies, but what occurs in the virtual doesn't stay in the virtual.]

8 duals and dialogs, dialectic – talker or MOO explorations (wanderings and fabrications of spaces) [these explorations have moved of course into OpenSim and Second Life, but the concepts of historiography, broken projects, avatar absences, debris, etc. remain the same.]

9 stutterings, etc. – manipulated texts – the problematic of speaking, including breakdowns of first/second/third persons [textual stuttering can blur diegesis, tense, and person; it can play off inner speech, it can speak among- or for-, it can reveal psychoanalytical debris.]

10 ontic explorations – ghosts and other emanations (the videos) – elements of disappearances, sadomasochisms, bindings and controls – the nature of writing and inscription [again in this early outline, sexuality makes an appearance. not only are ontologies blurred, but the very nature of control becomes messy and obscured in terms of agency. social media obscures and hides: think of Facebook for example as a sado-masochistic theater, with non-existent keywords and with hidden, unknown, power dispersing and controlling your self-image, and your image of others' selves.]

11 sexualities – multiples, topologies, exchanges (Nikuko), dismemberings (Julu-function), affect (Alan) [selves split, bots are everywhere, avatar bots are wonders of control and one can imagine such control as looped and continuous, eventually becoming the real/virtual landscape itself. all of this relates to the _obscene,_ which has been shown to operate differently in the brain; 'primitive' processes are called up, and language becomes threat, arousal, and other. sexualities operate everywhere in social media and virtual worlds, and the cartoon-like visuals in the latter play deeply into fantasy introjection and projection. humans are just at the beginning of understanding this, ignoring their animal and primate present.]

12 dismemberments – part-objects, splays, ruptures, s/ms, emissions [this ties quickly into the world-theater of slaughter and corruption, plant and animal extinctions, neo-liberalism and corporate enclaving: hiding the parts in relation to a simulacrum of the w/hole. in a sense this is the heart of an analysis that works with the engine of subterfuge and death.]

13 language and performativities – javascript, julu-function, julu- or jennifer-pages [after this, a bit of Visual Basic. i've used Perl for text manipulation, Emacs Lisp for reworking Eliza, scripting in Second Life, etc., but always with help; I'm not a programmer, just a kludger. but I sometimes start talks with the phenomenology of performative language; for example
k2% date
Mon Mar 18 12:40:35 EDT 2013
where a command is qualitatively transformed into an unforeseen result (unforeseen in the sense that it occupies a different epistemological regime) - and how this has utterly transformed our notion of the fundamental depth and obdurate nature of analogic reality.]
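As a gloss on the Eliza-style dialog mentioned above, here is a minimal pattern-and-reflection sketch in Python rather than Emacs Lisp; the rules and responses are invented for illustration, not drawn from the original doctor program:

```python
import re

# A minimal Eliza-style responder: pattern rules turn a user's statement
# into a question, swapping first and second person along the way.
# (Illustrative only; these rules are invented, not Weizenbaum's script.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
]

def reflect(fragment: str) -> str:
    """Swap persons so 'my avatar' becomes 'your avatar'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I am falling apart"))  # Why do you say you are falling apart?
```

The point of the sketch is how little machinery is needed before a personality appears to emerge: a handful of regular expressions and a pronoun swap already produce something that reads as dialog.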

14 constitutive realities – nature of the digital, Jennifer “having all the time in the world,” holarchic spaces and levels, fully-determined worldings, towards – [this gets complicated - the digital as cleansed, corporate, codec/corporate-driven, capable of infinite raster, problematizing the truth, capable of controlling everything. in the real world, digital and analog are entangled, holarchic (tangled hierarchically). digital worlds are less and less fully-determined - too complex, too many glitches (which are seeds for other things), too leaky, too multiply-connected, too mixed with the real - they're part and parcel of everyday life, at least for humans.]

15 future seamless virtual realities, moving across cyberplains, fluid architectures and entities, etc. [this is still a goal, but instead of the analog embedded in the digital, at least for the foreseeable future, the digital is embedded in the analog - in other words, we are all avatars, and those who are relatively disconnected may well feel the wrath of our slaughter.]


New Aesthetics: Cyber-Aesthetics and Degrees of Autonomy.

by Patrick Lichty on February 23, 2013

In perusing Honor Harger’s recent missive on drone aesthetics and James Bridle’s ongoing posts of drone images at Dronestagram, taken in context with the Glitch un-conference in Chicago, some new questions have come to mind.  These questions have to do with conceptions of New Aesthetics in its various forms in terms of interaction with the program/device and its level of autonomy from the user.  In my mind, there seems to be a NA continuum from generative programs that operate under the strict criteria of the programmer to the often-autonomous actions of drones and planetary rovers.  As you can see, I am still chewing on the idea of The New Aesthetic as it seems to be defined: as encompassing all semi-autonomous aspects of ‘computer vision’.  This includes Glitch, Algorism, Drone imagery, satellite photography and face recognition, and it is sometimes a tough nugget to swallow, though it resonates with me on a number of levels.  First, image-creating technological agents are far from new: as Darko Fritz recently stated in a talk, algorithms have been creating images (in my opinion, within the criteria of NA) since the 60s, and pioneers like Frieder Nake, A. Michael Noll, and Roman Verostko have been exploring algorithmic agency for decades.  If we take these computer art pioneers into account, one can argue that NA has existed since the 60s if one lumps in genres like Verostko’s ‘style’ of Algorism or the use of algorithms as aesthetic choice.  A notch along the continuum toward ‘fire and forget’ imaging (e.g. drones) is the Glitch contingent, which is less deterministic in its methodologies of data corruption aesthetics: glitchers either run a program that corrupts the media or perform digital vivisection and watch what little monster they’ve created.  Glitchers exhibit less control over their processes, and are much more akin to John Cage, Dada or Fluxus artists in their allowance of whimsical or chance elements in their media.

However, as we slide along the spectrum of control/autonomy from the lockstep control of code to the less deterministic aesthetics of face recognition, drone imaging, robotic cameras, Google Street View cams, Mars Rovers and satellite imaging, things get murkier.  Autonomic aesthetics remind me of the ruby-hued Terminator T500 vision generated by intelligent agents running the ‘housekeeping’ on the machine platform.  I consider this continuum from Algorism to Glitch to autonomous robotic agents under an NA continuum of aesthetics to be important insofar as it defines a balance of agency between the operator and the ‘tool’.  For me this is the difference between the high degree of control of the Algorist, the ‘twiddle and tweak’ sensibility of the Glitcher, and the gleaning of pseudo-autonomous images from the database of Big Imaging, created by drones and automatic imaging.  Notice I use the term ‘pseudo’ in that there are operators flying the platforms or driving the car, while the on-board agents take care of issues like pattern/face recognition and target acquisition.  We also see this in Facebook, as technological changes in 2012 introduced face recognition in the tagging of images.  From this, a key issue for me in this discussion of what began as a nebulous set of terms (the criteria of NA as defined by the global conversation) is that of agency and autonomy, and how much control the New Aestheticist gets in the execution of their process.  Another important point is that I am not calling the ‘New Aestheticist’ an artist or curator, but something in between; I’ll get to that later, as this is also an issue of control of intent.

Back to this idea of autonomy between the subject, the ‘curator’ and the viewer: what interests me is the degree of control that the person creating, tweaking, or gleaning the image has over the creation or contextualization of that image.  In the case of the Algorist, this is the Control end of the spectrum, where the artist takes nearly full control of the process of creation of the image, unless there is a randomization function involved in the process, and even that is itself a form of control, very Cybernetic in nature.  Agency is at a maximum here, as the artist and machine are in partnership.  Roman Verostko is a prime example of this, as he explores intricate recursive images created by ink pen plotters using paints in the pens.  What he, and for that matter Harold Cohen’s AI-driven AARON, are doing is machine painting.

The next step down the autonomy spectrum involves the use of ‘glitch’ tools and processes that distort, disturb, and warp digital media.  The process involves executing a given intervention upon the medium, such as saving it improperly, hex editing its code to corrupt it, or, as Caleb Kelly writes, ‘cracking’ the media.  There are differing degrees of disturbance of the media that inject chance processes into it, from a more ‘algoristic’/programmatic application of programs upon the media to directly changing the internal data structure through hex code and text editors.  The resultant process is an iterative ‘tweak and test’ methodology that still involves the user to varying degrees.  Of course, the direct manipulation of the data with a hex editor is the most intimate of these processes, but there is still one factor to account for: the set of causes and effects that are set in motion when the artist/operator opens the media and the codec (Compressor/DECompressor) mis/interprets it, as the artist intended.
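As a concrete, if toy, illustration of the ‘tweak and test’ intervention described above (the corruption step is controlled, while the codec’s misinterpretation is left to chance), here is a sketch in Python; the byte-flipping scheme and its parameters are invented for the example, not drawn from any actual glitch tool:

```python
import random

def glitch(data: bytes, n_flips: int = 8, seed: int = 0, skip: int = 64) -> bytes:
    """Corrupt a media buffer by XOR-ing a few distinct bytes past the
    header, so a codec still attempts to decode the 'cracked' file.
    A toy stand-in for hex-editor vivisection, not a real glitch tool."""
    rng = random.Random(seed)           # reproducible 'chance' operations
    buf = bytearray(data)
    for i in rng.sample(range(skip, len(buf)), n_flips):
        buf[i] ^= 0xFF                  # guaranteed to change the byte
    return bytes(buf)

# A stand-in for a media file: the first 64 bytes play the role of a header.
original = bytes(range(256)) * 4
corrupted = glitch(original)
assert corrupted != original            # the media is 'cracked'...
assert corrupted[:64] == original[:64]  # ...but the 'header' survives
```

The division of labor in the sketch mirrors the essay’s point: the artist controls where and how the noise is inserted, while whatever reads the corrupted bytes afterwards behaves autonomously.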

If we are to look at the glitch process, we can say that there is a point of intervention/disturbance upon the media, which is entirely a function of control on the part of the user.  Afterwards, it is set loose into the system to allow the corruptions within the media to trigger chance/autonomous operations in its interpretation in the browser, etc.  This is where the glitcher straddles the line between control and autonomy, as they manually insert noise into their media (control), then the codecs struggle with the ‘cracked’ media (autonomy).  The glitcher then has the option to try a new iteration, thereby making the process cybernetic in nature.  In Glitch, there is a conversation between the operator, the media and the codec.  With the aesthetics created by drones, algorithmic recognition software, and satellite reconstructions, the process is far more autonomous/disjoint, and the New Aestheticist has to deal with this in the construction of their practice.

In the genre that I will call ‘mobEYEle’ imaging, the robot, satellite, or parabolic street eye abstracts from the ‘artist’, aptly turning them into an ‘aestheticist’, as their level of control is defined as that of a gleaner/pattern recognizer from the image bank of Big Data.  Rhetorically speaking, we could say that a connection between the aestheticist and the generator of the image would be less abstract if, say, a New Aestheticist were to be in the room with a drone pilot, conversing about points of interest. It is likely that a military remote pilot and a graphic designer would have sharply differing views as to what constitutes a ‘target of interest’.  Like that’s going to happen…

Therefore, let us just say that the collaboration of a New Aestheticist and a drone pilot is highly unlikely, and that the New Aestheticist is therefore abstracted from the decisions of command and control involved in acquiring the image that eventually gets in their hands.  This, however, presents us with two levels of autonomous agency, one human and one algorithmic.  But before I expand on this, I would like to discuss my decision to call the practitioner an ‘aestheticist’ as opposed to an artist or curator.

This decision rests on what I feel is the function of the aestheticist, that is, to glean value from an image and ‘ascribe’ an aesthetic to it.  This position puts them in a murky locus between artist and curator, as they have elements of both and neither.  For example, does the drone-image NA practitioner create the image; are they the artist, per se, of the image? No.  They are more closely aligned to curatorial practice, as they collect, filter (to paraphrase Anne-Marie Schleiner), and post on tumblrs and Pinterests.  From my perspective, the role of a curator is the suggestion of taste through an informed subjectivity and through ecologies of trust and legitimacy; the social image aggregator, although they might want to perform the same function, has no guarantee of accomplishing this unless they develop a following.  Therefore, under my definition, they are neither creators nor taste-makers in the traditional sense, so what makes sense is to call them ‘aggregators’ of aesthetic material, and thus my term ‘Aestheticist’.

Returning to our conversation: the drone aestheticist, then, is subject to one of two degrees of completely abstracted autonomy in the creation of the image, that of the operator or that of the algorithms operating the drone.  The abstraction surrounding the human operator is easiest to resolve, as the images of interest are either the preference of the drone operator or those created by the operator under the parameters of the mission, and not the results of a New Aestheticist’s joyride on a Global Hawk.  It is merely someone else’s volition selecting the image, and a confluence of personal interest deciding whether the image deserves to be on the New Aestheticist’s social imaging organ.  However, it is the drone’s algorithmic image acquisition system that creates the more alien perspective in regard to aesthetics and autonomy of the image.

Compared to the Algorist or the Glitcher, all loosely placed under the banner of New Aesthetics, the Drone/Big Data Aestheticist is the most problematic, as they are a fetishizer of sheer command and control operations that are potentially utterly abstracted from the pilot/driver’s volition.  This creates a double abstraction, through first the pilot and then the algorithmic recognition system.  There is no cybernetic loop here at all, as the gleaning of the item of interest from the beach of Big Data is twice removed from any feedback potential.  Secondly, as I have written before, the Drone Aestheticist is exactly that: a gleaner of interesting images for use on their social image site, which in itself is a bit of an abject exercise.

Or is it?  For example, if one is to say that the Aestheticist gleaning the images does so without intent or politics, and is merely operating on fetish/interest value, then this is perhaps one of the least interesting modes of New Aesthetic practice.  But on the other hand, if one looks at the work of practitioners like Jordan Crandall, Trevor Paglen, or Ricardo Dominguez, who examine the acquired image as an instrument of aggression, control, and oppression, this gives the Drone Aesthetic a new lease on life.  In a way, through inquiry, an indirect feedback loop is established in questioning the gaze of the device, its presence, and its function in its theater of operations.  The politics of the New Aesthetic emerges here, in asking what mechanisms of command and control guide the machine eye and determine its targets of interest.  This is of utmost importance, as the abstracted eye is guided without subjectivity or ethics and is determined solely by the parameters of its algorithms and the stated goals of its functions.

Is the aesthetic of the machine image merely a function of examining its processes, fetishizing its errors, or something else?  The criteria of the New Aesthetic attempt to describe a spectrum of digital imaging that stretches back far earlier than 2010, and the term has a problematically broad definition.  Once these problems are set aside as a given, one of the key criteria for the evaluation of NA practice and the function of its images depends upon the degree of control and autonomy inherent in the process of the creation of the image.  This forms a continuum of control and abstraction from Algorism and Generative Art to autonomous eyes like drones and satellites.  Algorism is one of the oldest NA practices, and exhibits the closest relationship between artist, machine and determinacy of digital process.  A greater degree of indeterminacy is evident in Glitch, where the iterative process of tweaking the media and then setting it forth to be interpreted by the codec foregrounds the issue of digital autonomy.

The eye of the unmanned platform abstracts creation from the human organism at least once if a human operates it remotely, and twice if it does not.  There is the Terminator-like fear of the autonomous robot, but at this time perhaps the more salient questions regarding what I have qualified as drone/autonomous aestheticism under NA are what the function of the image is, and whether it is really that interesting.  Are the practices of NA blurring artistic and curatorial practice into a conceptual aestheticism, creating a cool detachment from the image regardless of its source or method of creation?  Is the bottom line to the genres of NA the degree of control that the artist or aestheticist has over the image’s creation or its modality/intent?  It seems that NA is an ongoing reflection upon the continuum of control over the generation of the image, our beliefs regarding its aesthetics, and the intentions or politics behind the creation of the New Aesthetic image.  Or, as I have written before, are we just pinning images from Big Data and saying, “Isn’t that kinda cool?”

Maybe it’s somewhere in the middle of intention and cool.


Archeodatalogy – Entwined, Enmeshed, Entangled

by Tyger A.C. on February 18, 2013

Entwined, Enmeshed, Entangled – Three modes of ‘being’ pertinent to our cyborgization process


By redesigning the conceptual landscape of our networked inter-relationality we may finally disentangle ourselves from the all-pervading occlusion of the cyborgization process and allow a fresh recognition of the manifold human sensorium extended in hyperconnectivity.

In the re-conceptualizing of our cyber existence we may need to relinquish a few cherished objects of identity, such as the man-machine interface, virtuality, and man-machine co-existence, but more importantly the dualistic distinction between ‘real’ life and our virtual extensions as existence.

All of these descriptive objects of identity, I suggest, should become ‘naturalized’ in a new cyber-existential language.

This is the first part of a three-pronged approach to what I believe is the foundation of a future philosophy of and for the hyperconnected individual.

I will try to show that these three modes of beingness are the quintessential infrastructures necessary for a future of a technological civilization aiming for the firmament of freedom and equality, personal responsibility and open access culture.

A civilization whose roots we currently inhabit, but one that promises changes to the perception of ourselves, the understanding of the universe, and the manner by which we may develop in tandem.

The three lines of approach that will be used are: Entwinement, Enmeshment, and Entanglement.

Each of these terms represents a similar but different manner to realize the state of affairs of hyperconnectivity as the threshold infrastructure in the process of becoming a citizen of the future, a cyborg netizen and perhaps a posthuman.

Entwinement, Enmeshment and Entanglement each represent a different level of intimacy in the infocology (see lexical index) one exists in and partakes of. The three terms offered here are suggestions for an illustrative strategy that will allow a deeper and more accurate description of the state of affairs of our cyber existence. Each of these terms will be expanded upon later, for now suffice it to say that the terms are distinguished primarily by the amount, depth and extensiveness of the connectivity between minds in the hyperconnected infosphere. Entwinement stands for the lowest level, Enmeshment for the medium level and Entanglement for the highest or deepest level.


“Chance Favors the Connected Mind” (Steven Johnson)

Between our digital reputations taking hold of our old tribal systems of acknowledgement and trust, and the new cyborg existentialism, a tension is made manifest.

This tension that I will expound upon in a moment can be seen primarily in its hyper complex fragility and tendency to ambiguity.

The tendency toward ambiguity, itself a by-product of the de-potentialization of the known factors of trust moving into new realms of unknowability, increases exponentially as networked decisions are made manifest (e.g. ‘like’ clicks).

The cyborg existentialism is a new domain of relationality residing between tribal homophily and hyperconnected heterophily.

The cyborg existentialism (CE) is a fresh approach to ‘freedom’ as the ultimate ground of human beings’ capacity to relate to the world, extended and enhanced in the world via technology.

Cyborg existentialism implies that sensory attunement via the embedding of technology in baseline human bodies reveals a coherent understanding of the precedence of existence to essence. In short, the idea is that the existence of the individual as an extended techno-sensory awareness mechanism belongs to a category in and of itself and should be looked at as the atom of the future (hyperconnected) cyber-civilization (see: The Rise of the Cyber Unified Civilization).

*Notes: I will use existentialism as a general kind or manner of thought and not as a systemic philosophy. Existentialism in this regard is an approach or an attitude, putting the individual sense of being as primary.

But first we need to introduce a new term:



(A neologism constructed from the Greek arkhaios, “ancient”, + data, the plural of datum, neuter past participle of the Latin dare, “to give”, hence “something given”, + the Greek noun λόγος (logos, “speech”, “account”, “story”).)

Archeodatalogy – Noun.

Meaning: Archeodatalogy is the study and analysis of the Meta narrative emerging out of the accumulated information about an individual in a multiplicity of infocologies.

Short version: The study and analysis of emerging meta-narratives in hyperconnectivity

The premises of Archeodatalogy:

1. A hyperconnected individual ‘is’ and ‘has’ inherently a multiplicity of identities.
2. The multiplicity of identities a hyperconnected individual is made of is manifested primarily in the infocologies this individual partakes of.
3. A hyperconnected individual then is a multiplicity of identities embedded in a multiplicity of infocologies; the coupling between these infocologies can be strong or soft, discrete or continuous.
4. A hyperconnected individual exists as a spectrum of identities correlated but not necessarily closely coupled with the fields of interests manifested as, and in, the infocologies this individual partakes in.
5. The study and analysis of a hyperconnected individual in a given infocology is the subject matter of Archeodatalogy.
6. Archeodatalogy assumes that the inter relation between a hyperconnected individual and the infocology in which she exists is a thematic environment from which emerges a particular narrative. This particular narrative is one of many such narratives, each of which represents the interrelation of the particular hyperconnected individual to a particular thematic environment or infocology.
7. Each narrative has a particular environmental theme that can be described as the story of ‘this individual in this infocology’. Each such narrative has its own characteristics and attributes and though at times might correlate and or superimpose upon another narrative, the particular narrative carries its own peculiar and idiosyncratic coherence.
8. The purpose of Archeodatalogy is to create a Meta account of the multiplicity of narratives (of a hyperconnected individual in multiple infocologies) and to allow for the emergence of a Meta story descriptive narrative, from which arrays of predictions can be summarized.
9. Archeodatalogy assumes that no particular thematic narrative can capture the totality of the hyperconnected individual, therefore only a Meta descriptive chronicle of the multiplicity of interrelations can permit a full understanding of a hyperconnected individual.
10. The results of an Archeodatalogy analysis permit a mapping of a hyperconnected individual correlated to her fields of interest. This mapping may or may not parallel the individual’s immediate existence; nevertheless it is the assumption of the Archeodatalogy method that a high enough approximation can be realized.
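The ten premises above can be read as a data model: an individual held as a collection of (infocology, narrative) pairs, with the Meta account as an aggregate over them. A minimal sketch in Python; the class names, the example infocologies, and the theme-counting scheme are my own illustration, not part of the premises:

```python
from dataclasses import dataclass, field

@dataclass
class Narrative:
    """Premise 7: the story of 'this individual in this infocology'."""
    infocology: str                 # the thematic environment
    themes: set = field(default_factory=set)

@dataclass
class HyperconnectedIndividual:
    """Premises 1-4: a multiplicity of identities across infocologies."""
    name: str
    narratives: list = field(default_factory=list)

    def meta_narrative(self) -> dict:
        """Premises 8-9: a Meta account aggregating the narratives;
        no single narrative captures the totality of the individual."""
        counts: dict = {}
        for narrative in self.narratives:
            for theme in narrative.themes:
                counts[theme] = counts.get(theme, 0) + 1
        return counts

# 'Jon' and the infocology names below are invented for the sketch.
jon = HyperconnectedIndividual("Jon", [
    Narrative("Scientists on Twitter", {"science", "astronomy"}),
    Narrative("NYT comment section", {"science", "politics"}),
])
assert jon.meta_narrative() == {"science": 2, "astronomy": 1, "politics": 1}
```

The aggregate, not any single narrative, is what premise 9 treats as the basis for understanding the individual; here that aggregate is reduced, crudely, to a count of recurring themes across infocologies.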

Part 1: Entanglement is an event – Enmeshment is an episode – Entwinement is circumstantial


In a state of Entwinement the correlativity of interest and mutual cross-fertilization is low to very low.
Currently the state of Entwinement is the most widespread.

A circumstantial state in hyperconnectivity can be defined as an accident of (at least initially) secondary importance, in which two or more minds find themselves in the same infocology for reasons that are not necessarily pertinent or interesting to their personal agenda (membership in the given infocology excluded). Example: one may join Twitter, and because one tweets with the hashtag #Science, he or she will be grouped in a Science list and by extension be correlated to all other minds (and possibly bots) that use this hashtag. As a consequence one may find oneself being followed by a number of members of Twitter and be labeled in the same fashion, namely ‘Scientist on Twitter’ or alternatively ‘Tweeting about science’. This level of correlativity between the minds involved will be called here Entwinement. The level of ‘intimacy’ between these minds, however, is (again, at least initially) practically non-existent; so though ‘Jon’ and ‘Mary’ may both be part of the infocology titled ‘Scientists on Twitter’, the amount of information that can be gleaned from this fact is very small, if interesting at all.

In a state of Enmeshment the correlativity of interest and mutual cross-fertilization is medium and can be averaged.
Currently the state of Enmeshment is increasing exponentially.

To continue the same example from above: an episode in hyperconnectivity can be defined as an extended session of interest between two or more minds of medium correlation, such as might happen in a Google hangout or Skype chat, or alternatively an extended period of loosely coupled membership in the same infocology, such as the comment section of a particular site.
An episode in hyperconnectivity will be called here ‘Enmeshment in hyperconnectivity’ and can range from a single episode (as in ‘we had a few exchanges on the comment board of..’) to a multi-episode connection (as in ‘we are in continuous contact via the comment section of.. but it never extended beyond that’). The importance of understanding the Enmeshment state of affairs lies with the amount of information that can be pertinent to the individuals involved. In a very wide sense the scope of possible ambient intimacy is extended beyond that of the accidental or circumstantial (as in Entwinement) and thus allows for reciprocal influence, but does not reach the critical level of mutuality that might exist in the state of Entanglement.

In a state of Entanglement the correlativity of interest and mutual cross-fertilization is high to very high, resulting in a closely coupled relationship that may extend over a long duration. Entangled states in hyperconnectivity are currently quite rare (though in continuous increase) but offer us a glimpse into the future of inter-relationality and intersubjectivity as the web progresses and the Internet spreads globally.
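Since the three states are distinguished above by the amount and depth of correlativity, they can be caricatured as thresholds on a single score. The sketch below is only an illustration; the numeric scale and the cut-off values are invented, not given in the text:

```python
def connection_state(correlativity: float) -> str:
    """Map a correlativity score in [0, 1] onto the three modes.
    Thresholds are illustrative assumptions, not prescribed by the text."""
    if not 0.0 <= correlativity <= 1.0:
        raise ValueError("correlativity must be in [0, 1]")
    if correlativity < 0.3:
        return "Entwinement"   # circumstantial: a shared hashtag or list
    if correlativity < 0.7:
        return "Enmeshment"    # episodic: hangouts, recurring comment threads
    return "Entanglement"      # closely coupled, extended duration

# Jon and Mary merely share the 'Scientists on Twitter' infocology:
print(connection_state(0.1))   # Entwinement
print(connection_state(0.5))   # Enmeshment
print(connection_state(0.9))   # Entanglement
```

A single scalar obviously flattens what the text describes as differences in kind (event, episode, circumstance), but it makes the ordering of the three levels explicit.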

A short lexical index:

Infocology: Information ecology – Basically the sum total of a particular kind or set of information, related to a particular domain of interest. Infocologies are nested and carry a given set of characteristics defined by the design and function of the infocology in question.

Infocologies stand for the ambient ecology of minds in a hyperconnected situation.

Infocologies should be considered as complex adaptive cultural contexts of hyperconnectivity in which transformative and processual properties extend the being of a particular mind.

Infocologies can be seen as inter-relational spaces extending the biological autonomy of the individual mind into new forms of being manifested as the cyber-autonomy manifold.

Facebook for example is a medium size infocology nested within the larger infocology of the overall social networks infocologies of the net, themselves nested within the larger framework of the Web. (Of course also within FB there exists a continuum of nested infocologies, defined by friends or acquaintances and so on)

(It is my view that as the complexity of the web continues to increase both in size and in spread, the babushka effect reflected in nested infocologies will grow exponentially, and in consequence the importance of Archeodatalogy will develop in tandem.)
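The babushka effect of nested infocologies can be pictured as a simple tree, following the Facebook example above; the node names and the depth measure are illustrative assumptions, not part of the text:

```python
# Nested ("babushka") infocologies as a dict-of-dicts tree.
# Node names below are invented stand-ins for the Facebook example.
web = {
    "Web": {
        "social networks": {
            "Facebook": {
                "friends-of-Jon": {},
                "a-discussion-group": {},
            },
        },
        "Wikipedia users": {},
    }
}

def depth(tree: dict, name: str, level: int = 0) -> int:
    """Return how deeply an infocology is nested, or -1 if absent."""
    for key, sub in tree.items():
        if key == name:
            return level
        found = depth(sub, name, level + 1)
        if found != -1:
            return found
    return -1

print(depth(web, "Facebook"))  # 2
```

Each added layer of nesting is more data for the Archeodatalogist to traverse, which is the sense in which the babushka effect and the method are said to develop in tandem.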

Some other examples of infocologies:
The set comprising all commentators on, say, the CNN or NYT current news page.
The set of all Wikipedia users, as an ensemble, represents an infocology.

With the advent of “anticipatory computing” or “information gravitation” (though I am not certain I go along with these descriptive terms), search will become gravitational and come to us, no doubt about that; in that case the search itself might be reflected upon as an infocology.
Following the above, the next step in the sequence will of necessity be self-mapping in hyperconnectivity. (Self-mapping in infocologies is the main tool we should get acquainted with; it is via the agency of self-mapping that we will allow the myriad identities of our minds to carve out a mind habitat on the net that fits and accommodates our passions, our interests, and our complex lives. In the second episode of the rise of the cyber-unified civilization, asymmetry is explored as the initial attribute of self-mapping in complex infocologies.) See: The Natural Asymmetry of Infocologies.

Coming soon:

Part 2: Entanglement is a spectrum – Enmeshment is a gamut – Entwinement is particular

Part 3: Entanglement is multifaceted – Enmeshment is involved – Entwinement is exclusive

This is a work in progress and belongs to the Polytopia research projects. Please use with discretion and elegance. Archeodatalogy is a fresh neologism meant to help us distinguish the next step in the evolution of hyperconnectivity. Though I do not accept the copyright idea in principle, please refer to this first paper when referring to the term. As of February 2013, searches in all major search engines have returned zero results; I am therefore not aware that the term exists anywhere in any fashion remotely similar to the way I present it here, and it is my assumption that this is the first exposition of a term that I believe will be of great importance in the coming future.

Please use with elegance and grace.

Creative Commons License
Archeodatalogy – Entwined, Enmeshed, Entangled by Tyger AC is licensed under a Creative Commons Attribution 3.0 Unported License.


A Cyborg’s Story: The State of the Body in 2013

by Patrick Lichty on January 26, 2013

Since one of our topics at Reality Augmented is cyborganic experience, I thought I would reflect on my life as a cyborg in the context of another formative article on the subject. In the halcyon 90’s, Gareth Branwyn wrote about his hip replacement surgery and his entering the cyborg ranks. To paraphrase his original reflection in ArtByte: talking about being a cyborg is cool until you feel the bone saw. This reminds me of the agony of Captain Picard as he was assimilated into the Borg Collective in Star Trek: The Next Generation, and the improbably painless removal of his implants and of the nanites in his bloodstream.

We’re nowhere near the sophisticated Trek level of technology yet, but we are cyborgs, and the reason I am writing this is that yesterday I had a lens upgrade in my eye; it’s online and functioning at nearly 20/40 within 36 hours of placement in the Anterior Chamber. It was a replacement of a Posterior Chamber lens whose sutures had been knocked out of place while jetskiing (we cyborgs are still delicate creatures). They popped out the old one, went up front, and popped in the new one, while zapping a retinal angioma on the fly. By comparison, the surgery was not as easy 12 years ago when I had both of my lenses replaced (only with Bausch & Lomb, not those spiffy Tally Isham Carl Zeiss ones). The Cleveland Clinic (no, I didn’t go to Chiba) popped them right in, and considering I have had juvenile rheumatoid arthritis and just about every other eye malady known to science, the fact that the first ones went in without a hitch is near-miraculous. I feel that the movement toward cyborganism, quite apart from the insinuation of pervasive computing, is on a fast march, and I bet Gareth would have had a much better time of it now; considering that hips last 10-20 years, I hope that will be the case.

I learned that our stalwart Jon Lebkowsky had the same sort of thing done a year ago, but not quite as easily. It causes me to wonder, as I talk to my friends from the era of happy cyborgs and cyberdelia where I started getting involved in all of this marvelous madness, why so many of us have augments of one form or another. Is this medicine, or is it a mark of our clade? Does habitual computer use send one down the path to bodily augmentation a la Bruce Sterling’s Mechanists from his Schismatrix universe? It makes me wonder if his alternate dream of genetically-reengineered Shapers is a little behind… Could I have had a new lens grown and fused, allowing me to retain the ability to focus my eye? That seems to be possible, but it seems further away than my being able to print out a lens on my Makerbot and go down to the hackerspace to get it installed. I know that’s a furtive little rant, but it seems more likely than getting my hands on a Mr. Tissue bioreactor from my buddy Oron Catts at Symbiotica in Perth and popping out a wriggling little lens for reinsertion.

From my experience this week, I come away with two nuggets of information. First, I feel like the Mechanists/Cyborganists are a few steps ahead of the Shapers, and the former are getting really slick at replacing some of the parts. However, when I hear of stem cell research and the repair of dog spinal injuries using olfactory cells, I begin to wonder how long it’ll be until the Shapers catch up. The problem is that you still can’t climb into an Autodoc (a la Prometheus) and get this done at home; maybe this will be Bre Pettis’ next project after the Makerbots, but I can only hope. By that time, I’ll probably be a brain in a bathtub somewhere, sucking nutrients up my medulla oblongata and dealing with the frustrations of getting Slimeforge software to run right on my tissue replicator. Which brings me to the next point.

Secondly, a lot of the cyborganic work seems to be done on the mechanicals and the peripherals. Lens, retinal, and cochlear implants are giving us “better than nothing” results, but it seems that either mechanical or grown upgrades to failing organs, including skin and nerve tissue, are still a ways off. Moravec’s upload scenario is lost in the simple fact that brains aren’t binary von Neumann architectures, and if we were to upload, research suggests that our identity and sense of cognitive cohesion are bound to our embodiment – which means that if we did get an upload to work, we’d go stark raving mad because we couldn’t scratch our noses. All of these are flies in the ointment of our dreams from the 60’s to the 90’s.

But for now, it seems that – at least at the high end of the medical chain – maintenance is getting easier for us Mechanist-style Borgs.  And as I mentioned, I wonder whether we are in Doug Engelbart’s scenario of being in a co-evolution with technology, as use of pervasive media seems to lead to insinuation of technology into the body.  But even if that is the case, it seems to be getting easier, to the point where I wonder whether there will be a convergence of Mech-anism with my old age/brain-in-a-bathtub days. It gives me less dread about my cyborg status. Maybe Kurzweil’s right.  Maybe technology is going to turn us into shaped, Borged ur-creatures.

I just got a dose of that, and man, it’s fascinating. 


We are all familiar with the coming of Google Glass:

[Image: Google co-founder Sergey Brin tries out Google’s new internet-connected glasses at the I/O conference in San Francisco. Photograph: Paul Sakuma/AP]

At the same time, competition arrives in the form of TTP:

“Developed as a prototype by TTP (The Technology Partnership), a technology development company, the glasses incorporate a tiny projector in one arm of the spectacles. The picture is then reflected from the side into the centre of the lenses, which are etched with a reflective pattern that then beams the image into the eye.

That means the image is directly incorporated into what the wearer sees when looking directly ahead – unlike Google’s current incarnation of Google Glass, which puts a small video screen in the bottom right-hand corner of the right eye. That requires the wearer to look down to focus on it, taking their attention away from the view ahead.” (via The Guardian)

And then from Vuzix comes the M100 smart glasses:

Just as smartphones forever changed the telephone, the Vuzix smart glasses M100 redefines our interface to the ever-expanding digital world.

Vuzix smart glasses M100 is the world’s first enhanced “Hands Free” smartphone display and communications system for on-the-go data access from your Smartphone and the Internet. Running applications under the Android operating system; text, video, email, mapping, audio and all we have come to expect from smartphones is available through this wireless personal information display system. Vuzix smart glasses offer a wearable visual connection to the Cloud, through your smartphone or other compatible smart device, wherever you go.

The Vuzix smart glasses M100 includes an integrated head tracker and GPS for spatial and positional awareness, and an integrated camera enables video recording and still image capture. Combined with smartphone applications and linked to the Cloud, first-person Augmented Reality will now finally be available.


And then from Lumus:

Lumus enables eyewear that is natural looking, discreet, lightweight, and portable. It permits users to watch TV, read an e-mail, or glance at stock tickers without anyone else knowing they are doing this. And it provides users with information flow without obstructing their vision, so they can carry on their day uninterrupted. All these factors are the key differentiators in Lumus products, and represent the potential and opportunity to produce mainstream consumer products. Utilizing Lumus’ patented LOE (Light-guide Optical Element) technology, they eliminate all the complaints about existing personal display solutions – too heavy, too bulky, too geeky, too cheap-looking, too uncomfortable, too immersive.

LOE technology is disruptive because it shatters the perceived laws of conventional optics. It is a technological breakthrough which combines a large, high-quality image in an incomparably compact form factor, using a transparent lens.





Not to be left behind, Microsoft has its own Project Glass cooking in the R&D labs.
It’s an augmented reality glasses/heads-up display that should supply you with various bits of trivia while you are watching a live event, e.g. a baseball game. The device was made public via Microsoft’s patent application, published today.

(H/T to Unwiredview.com)

From Sensics comes the Smart Goggles:

SmartGoggles are a unique architecture for smarter, better virtual reality goggles. Delivered as a ‘system on a module’, SmartGoggles technology provides consumer electronics companies with an engine for building goggles that customers will love to use.

These Smart Goggles by Sensics can immerse a person in a virtual environment, which behaves naturally when the wearer moves their head. The goggles run the latest version of Google’s Android operating system.

They use that computing power to run games, and track hand motions and gestures using a camera, enabling a person to control a game or interact with a virtual world. (SmartGoggles.net and Technology Review)

Though not a competitor to AR glasses as such, the German prototyping company Trivisio offers the Digital Platform Helmet V2.4.

According to Trivisio:

“The Digital Platform is a complete portable computer, designed to be worn on the user’s head. In combination with a Head Mounted Display (HMD), this platform is absolutely independent from other devices. With the adjustable headband and the perfectly balanced components it is very comfortable to wear and gives the user hands-free operation. The low power consumption makes a fan unnecessary, and therefore the Digital Platform is very quiet.”


From Optinvent comes the Clear-Vu.

According to Optinvent:

“Clear-Vu is a patented technology based on moulded plastic components allowing low-cost see-through video eyewear (video glasses) applications. This technology allows ‘see-through’ mobile video or augmented reality applications and is 3D compatible. Clear-Vu can be adapted to the form factor of a pair of normal sunglasses or eyeglasses.”

And last, from the French company Laster, comes the Pro Mobile Display.

According to Laster, the Pro Mobile Display “is designed for professionals, and consists of a single EnhancedViewTm ocular mounted on a frame. Using it, the user can view all the displayed information relating to complex operations.

The Pro Mobile Display provides a display equivalent to a 34’’ screen viewed at a distance of 1 m and can be combined with appropriate hardware and software plug-ins for navigation, enhanced view display, or teamwork.”

(Updates: thanks for the suggestions go to Marc Beuret via LinkedIn.)


Some thoughts, then:

I’ll write a longer post on the consequences and implications of these augmenting technologies. For the time being, however, here are a few thoughts/questions I believe we should entertain.

Their ubiquitous manifestations notwithstanding, digital platforms are still in their infancy and yet these smart machines are already extending our minds into new realms.

These realms offer new modes of thought previously unavailable, new manners of perception not previously attainable and fresh perspectives of the world.

Wearable technologies in their different guises offer a very personal approach not only to data, information and new knowledge, but more importantly, perhaps a new kind of experience.

It is my view that these experiences, which we should pay attention to, will bring profound changes to the way our minds process ‘reality’.

In a very real sense augmented ‘reality’ machines are an extension of our minds into the world of matter via the agency of perception, infusing and at times immersing our senses with the ‘matter’ of the world.

One question pops up immediately: even if the nature of things, of the world of the universe and everything, is not, as some claim, panpsychist, aren’t we making it so via augmented reality technologies?

It might be that Joseph Campbell was more accurate than acknowledged when he stated: “There is no way you can use the word ‘reality’ without quotation marks around it.”




Curating the Quantitative Life
The 1%, The Ambitious Middle-Class, and The Curatorial Politics of the New Aesthetic

This week (that is, the week of October 26, 2012) I heard a fantastic conversation about the ecosystem of galleries, curators and artists in the Chicago area on the local NPR station. They laid out the sociocultural terrain so well that I had a moment of clarity, and it is that instant that I want to share with you. And yes, it also has to do with my thoughts about the New Aesthetic. For context, as an independent curator, I took the tactical position that I would curate the shows that the mainstream institutions either did not understand or did not want to support, thus gaining the foreground. The result of this was, in essence, a “pop-up” gallery, which is a prevalent form of curation in Chicago and in many cities with extra storefront space. What we have in Chicago now, therefore, is such a plethora of pop-ups, apartment galleries and the like that you can’t turn around without finding one new curatorial project or another.

The effect of this is to flatten the art world considerably – everyone’s a curator, gallerist or something. While Jerry Saltz once said on his Facebook page that this is one of the best things ever to happen to art, it has raised the bar on competition to new levels. As Jer Thorp (visualization artist for WIRED and the NYT) recently said, almost any interaction or consumption, from donuts to cell phone usage, can be placed on a “power curve”. This is a logarithmic curve that starts near infinity and stretches out into what Chris Anderson has called “The Long Tail”. Basically, there are people who talk and text 80 hours a week, and there are people who text only once or twice a week, and the distribution of usage translates into this curve.
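The power curve described here can be sketched numerically. The following is a toy Zipf-style model with illustrative numbers (the exponent and population size are assumptions, not real usage data); it shows how a small head of heavy users accounts for most of the activity:

```python
# Toy "power curve": rank users by activity and give each a Zipf-style
# weight proportional to 1/rank. The exponent controls how steep the
# curve is; 1.0 is the classic Zipf case.

def power_curve(ranks, exponent=1.0):
    """Unnormalized Zipf-style weight for each 1-indexed rank."""
    return [1.0 / (r ** exponent) for r in ranks]

ranks = list(range(1, 101))        # 100 users, ranked by activity
weights = power_curve(ranks)
total = sum(weights)

# Share of activity generated by the top 10% of users vs. the long tail:
head_share = sum(weights[:10]) / total
tail_share = sum(weights[10:]) / total

print(f"top 10% of users: {head_share:.0%} of all activity")
print(f"remaining 90% (the Long Tail): {tail_share:.0%} of all activity")
```

With these toy numbers the top ten users account for more than half of all activity – the shape of Anderson’s Long Tail: a steep head and a long, shallow tail.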

But considering the effect of the “new curation”, is the art world actually flatter? Yes and no: the flatness is only perceptual when you are out on the tail of the curve where the mass of “curatorial life” resides, with the tumblrs, YouTube lists, and so on. With things like tumblr, Pinterest, et al, we live a curated life. However, my theory of the flattening of the art world comes when you mirror the power curve/Long Tail into an asymptotic sort of spiked pyramid: a mass of quantum noise of everyday curation (the mass of pop-ups and residential spaces, web sites, etc.); up the chain, a winnowed-down group of “middle-class” influential curators/galleries; and the “spike” of hyper-elite curators, artists, and designers, like Jeffrey Deitch, Hans Ulrich Obrist, Zaha Hadid, Anish Kapoor, Rem Koolhaas – you get the idea. What this amounts to is a concentration of cultural capital in the upper 1%, while the pyramid of fame/success sags down to accommodate the Long Tail.

So, what I am saying is that in the cultural era of the Long Tail, a few are concentrated into the Straylight-like crèche of capital, while the other 98 percent of the cultural world are forced into entrepreneurism, or into cultural production “for its own sake”, often scrambling from month to month between practicing one’s work and the day job down at the health club. Or, taking it to a more extreme level, we could also say that Pinterest’s curated image boards are becoming curatorial quantum noise, as we swim in a sea of digital chaff. If I were to go to the very end of the Tail, we would arrive at 4chan.org, and since Rule 34 takes effect there – anything on the Net shall have porn made about it – we don’t need to go there.
So, to summarize: 2012’s style of curation seems to reflect the financial paradigm – a couple of percent with concentrated capital, a steep curve of established curators and producers, and then a widening saddle of aspiring producers, such as residential and pop-up galleries, widening out into a “culture of the everyday” of massive free production, which is gleaned by the social media companies as content and curated by the masses for the effect of their own personal friend-niches.

This is the cultural model of Big Data as expressed in the art world and the curated life.

So what does this have to do with the New Aesthetic? Big Data assumes that in many cases the power curve (the asymptotic curve generated by the Long Tail) is in effect with regard to relevance to a given question or correlation. Huh? That means that for a massive data set, only a small amount of it is really relevant to our purposes, a little more is close to what we’re looking for, and the rest falls off steeply into a sea of quantitative chaff. Or qualitative; take your pick.

OK. New Aesthetic.

NA is largely about gleaning interesting images from Big Data, as algorithms and robot eyes spew out images at rates as high as 30 frames per second in some cases, which makes images akin to grains of sand on the aesthetic beach. But the New Aestheticist strides upon that beach, picking out a sparkly grain of sand or even the occasional diamond, ready-cut, and places it in their bucket (Tumblr, Pinterest) to show to other people on the beach. Note that there are a lot of people on this beach, and it is a very large beach – that’s why they call it Big Data. Lots of data; lots of sand.
The thing that I see as problematic yet historically relevant to NA as a curatorial model is that there is not much agency involved beyond the human glean from visual Big Data. It is a cross between banal Pinterest/Flickr/Tumblr posting and the Duchampian readymade; a gesture of curating the Quantitative Life. If one thinks about it, the biggest difference between NA curating and screen scraping or pattern recognition is that of human agency – aesthetic picking rather than algorithmic selection. This becomes an issue, as it creates a parallel power curve of human versus machine judgments of the qualitative.

Before splitting curves, let us describe the stratum of curatorial space that I see NA occupying. Remember to consider the asymptotic curve of Anderson’s Long Tail, and consider it as one of investment vs. population (or amount of data). At the uppermost, narrow end of the spike, we have a small amount of data and a huge amount of influence and money, flaring into the middle class – running from the top curators and major museums to the top galleries, down into the regional art centers and mid-grade galleries. The next major break that appears evident is the ‘emerging’ scene, with the pop-up and young galleries and some independent curating, as well as genre shows and higher-end art blogs at the upper end. Where the power curve begins to truly flare out is in self- or social curation, beginning with the Pinterests, Tumblrs, and Flickr pages.

As a note, keep that last sentence in mind.

Then comes the flood. Curation (sic) in the age of social media must be made to include the posting of photos and videos to social media, with the gesture constituting the greatest number with the least investment (the function of the Long Tail’s power curve – number involved vs. degree of investiture). By that point, curation becomes Massive Data, not just Big, and we are awash, not in a sea of kitsch, but in a sea of everything, with only currents of trend to give any direction. This lower stratum, from the pin board to the Like, is the beach to which I alluded earlier, with New Aestheticists doing slightly more than Liking an image by taking the time to find it and put it on their Tumblr, hoping for a Like. And in a way, as the game Foldit allowed human beings to find a protein-folding solution in far less time than it takes an algorithm, so does the New Aestheticist find an ‘image of interest’ faster than a parametric equation. It makes us feel special to categorize galaxies in a crowdsourced application; is picking images of interest in the NA exercise much different? In some ways, I feel it is akin to 4chan-style image boards, just more intellectual. But with the rise of art-based Internet Surfing Clubs like NastyNets and Double Happiness in the 2000’s, the aggregation of images of interest has become a function of quantum-level curatorial practice at the base of the saddle of the Long Tail.

In addition, other effects come into play, such as similarities to arousal addiction in Internet pornography. The prime motivator for dopamine release in net.porn is novelty, based on things such as the “Coolidge Effect”, where time to climax increases with a single partner while it stays low with varied partners. So it is with the NA: near-infinite seas of novel images in numerous genres. Is it possible to say that New Aestheticists are becoming addicted to Robot Eye Porn? According to Gary Wilson, the end result is hypofrontal burnout based on Internet usage, with turning away being the ‘climax’ of net.scopophilia. Perhaps it is a bit overblown to compare the two, but in my opinion it is a matter of scale on the power curve of intensity vs. investiture.

The point of all this is to ask what degree of agency NA as curatorial practice exerts as a function of cultural production. Somehow, I don’t feel like I’m going to see the famous Google Earth RGB-artifacted airplane blown up to wall size in the MoMA. But, on the other hand, we are awash in generating images and posting them for a moment of approval, shooting the aesthetic blunderbuss and hoping a pellet/image sticks here and there. This creates tremendous ambivalence, as the ‘potential’ effortlessness of NA practices conveys a certain pointlessness, apart from a certain fascination with the found machine/algorithm-made object/image. However, we can see the emergence of image boards, and of aggregation from them as art practice, and it has led us here; perhaps NA is a form of curation for the masses, a folk curatorial practice for cyborg times.


Lowering the barriers to AR

by Amber Case on November 21, 2012

DIY VR Goggles

Augmented reality will become very interesting when the barriers to creating custom objects, animations, apps and experiences are drastically lowered. As with Flash or the App Store, AR becomes compelling when these experiences become very personal or are shared between friends.

What has to change?

  • Tools to create AR experiences such as notifications, real-time data feeds, location-context, alerts and images need to be created and easily accessible.
  • Current tools (Layar, raw code, existing data) require steep learning curves, knowledge of programming, and, at best, data parsing skills.

How do we know when we have it?

  • Once a platform or program exists that allows many people to create AR experiences, we’ll have a massive influx of horrible apps and bad designs. We’ll also have a number of unique experiences and programs that only work in AR, or that take advantage of the unique aspects and shape of the AR experience (heads-up display, machine vision, overlays, notifications, context, etc.). Some of them will be silly, others serious. But mostly silly. A lot of entertainment. Most will be short-term and kitschy, but will prove a point. There will be some art involved.


  • Think of the revolution in desktop publishing. All of a sudden, mere mortals could create whatever they wanted and put it on paper. Later on this resulted in tons of horrible WordArt signs, but it also created an entire industry around Adobe Systems.
  • Flash, while considered tacky, gave rise to tons of animations, experiments in physics, strange websites and art experiments, and a whole new way of experiencing the web. Now Flash is mainly used for ads, but in the early days entire websites were devoted to fantastic animations made by younger kids. JoeCartoon, Homestarrunner, Charlie the Unicorn and others have since made the leap to YouTube, where they live on as humorous relics of the era. HTML 5 is only minimally accessible in this sense.
  • The first apps for the iPhone were things like the iFart app, condensing reality into a new form and making it humorous. iFart took unique advantage of the iPhone’s ability to play sounds and be mobile. If AR as a platform ends up being reasonable to use, we’ll see tech like this show up first. From virtual “kick me” notes for your friends’/enemies’ backs, to adblocking and turning ads into art experiments, we’ll see a whole millenary of creations from the creative world. And there will be magazines/online sites/blogs devoted to showcasing these new apps and their creators. And conferences. And rockstars. And, as Bruce Sterling said in his “Dawn of the AR Industry” speech at Layar headquarters, there will be people who are defined as AR people by how they look.


  • When you can share with limited groups or with the public, and subscribe to others’ experiences, it will be like subscribing to realities. I can imagine filters for reality, or themes, or virtual catalogs where you can check out furniture mapped onto your room before you buy it.

[Image: DIY VR Goggles. Image: Andrew Lim/Recombu. From Popsci.]


Elder Augmented Reality

by jeff on November 6, 2012

There’s an older gentleman who works at the Lowes near my house. He’s a fixture of the place. If you saw him walking down the street you’d either say “There goes a mountain man,” or “That guy looks like he should work at a home improvement store.” He’s a floor customer service representative, and seems as comfortable in lumber as he does in plumbing or lawn and garden. He isn’t pushy, always has an interested, kind look in his eyes. You’ll often see him explaining a pipe fitting or how to install a ceiling fan to a young couple, their eyes narrowed, their brows furrowed, nodding, furiously taking mental notes. Unfortunately, they can’t put him in their cart and take him home.

While most tasks, like installing a ceiling fan or wiring a dimmer switch, aren’t fundamentally complex, until you understand the principles they can seem arcane and risky. Lots of subject areas are like this: Computers, carpentry, construction, decorating, training your pet, arranging flowers, tracking your business expenses, creating a household budget, replacing your car’s battery, gardening, clearing a drain, hemming a skirt, the list goes on and on.

Many of these skills are taught to us by our parents, aunts and uncles, or grandparents. Some of us are lucky enough to have had this introduction to a wide range of skills, or are able to call one of these experienced elders to come over when we stumble onto one we haven’t dealt with before. The less lucky of us may not have had as much time with our elders, may not have that sort of relationship with them, or may not be able to call on them due to distance or passing.

I think that we have a general human need for elder advice and guidance, and I think augmented reality is going to herald a paradigm shift in serving that need.

There are a lot of people reaching retirement age around the world. A lot of them are facing the end of their planned careers whether they like it or not. They often aren’t suited to the uncertainties of the new economy, and the businesses they work for want them to step aside so younger people can take their place. Many, or even most, of them can’t afford to stop working, though, so they often end up in low-paying menial jobs because they don’t have a modern skill set. Yet they have deep knowledge and experience in a field, and they have experience explaining their field, since they often trained the generation of workers after theirs.

On the other end, there are millions of us who haven’t tackled these problems before, but will scoop up the latest gadget, are living at a very high speed, and are in love with customized, personalized, authentic experiences. We make friends with the taco truck guy, we fret about the viability of his business, and shake our heads sadly if he closes down. We want the world to work how it feels it should. Experience plus careful workmanship should equal success.

Imagine if there was a marketplace of subject matter experts. Retired or semi-retired plumbers, gardeners, electricians, mechanics, decorators, seamstresses, florists, stylists, bakers, teachers, cooks. The list could be as long as your arm. Each one of them has an iPad or a big TV and a remote (maybe both). They list their expertise and a price for their time. Maybe they fill out a profile of their work experience, ala LinkedIn.

You’re sitting at home. Suddenly the sink drain clogs, or the air conditioner stops blowing cold air, or your wife starts dropping hints about a souffle for her birthday, or your kid wants to take homemade bread to school, or you need to install a ceiling fan.

You put on your Google Glasses (or iGlasses or whatever other brand of see-through AR may exist in a year and a half), and place a quick order. You might have gotten an hour or two as a gift, maybe when you spent $500 at the home improvement store. You use a super-streamlined job-posting interface, probably speaking to it to describe your problem (let’s call it a souffle), and in a few moments you have a handful of candidates who are online and available.

You hit the order button and a retired baker in some other state gets a bing-bong on their iPad. They sit down, review your profile, decide you’re a decent sort, and hit accept. You instantly see their face in the corner of your Google Glasses, and they can see what you’re seeing from the Glasses camera. They introduce themselves, you describe your problem, and you go to work together to solve it. They watch you as you work, use their iPad to draw diagrams over your vision, NFL commentator style, or shift the camera around and demonstrate with their hands.

Once the souffle’s in the oven, they recommend some pairings based on their experience, which you’ll get in a voice-transcribed recording of the interaction dropped in your email. You thank them, and say goodnight. You bookmark them for future reference, and leave some feedback.

It’s a win-win: you get access to subject matter experts and a real, authentic experience, and they get to pass on their knowledge and get paid for it. The technology only enhances an interaction that is already possible, but inconvenient.

Imagine if something like this were part of everyday life. You could gift your kids a dozen hours when they go to college. You could pick up the basics of a new skill every month, entirely project based, no Dummies books collecting dust, just human interaction.

This seems like one of those no-brainers. You mix Skype or Facetime, oDesk, retirees, and the growth of home-based businesses with the enabling technology of augmented reality, and this pops out. It’s not a matter of if, it’s a matter of when.
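The matchmaking piece of that mix can be sketched in a few lines. Everything here is hypothetical — the post doesn't describe a design, so the `Expert` fields, the `match_experts` function, and the example data are all illustrative assumptions, not a real API:

```python
# Minimal sketch of the expert-marketplace matching described above.
# All names and fields are hypothetical illustrations, not a real service.
from dataclasses import dataclass


@dataclass
class Expert:
    name: str
    skills: set          # what they list as their expertise
    hourly_rate: float   # the price they set for their time
    rating: float        # accumulated from feedback like the bookmark/review step
    online: bool         # only online, available experts can answer a bing-bong


def match_experts(experts, skill, max_rate=None):
    """Return online experts offering the requested skill, best-rated first."""
    candidates = [
        e for e in experts
        if e.online
        and skill in e.skills
        and (max_rate is None or e.hourly_rate <= max_rate)
    ]
    return sorted(candidates, key=lambda e: -e.rating)


experts = [
    Expert("retired baker", {"baking", "souffle"}, 25.0, 4.8, online=True),
    Expert("plumber", {"plumbing", "drains"}, 40.0, 4.9, online=True),
    Expert("florist", {"flowers"}, 20.0, 4.5, online=False),
]

for e in match_experts(experts, "souffle"):
    print(e.name, e.hourly_rate)
```

A real system would add the video session, the over-vision annotation, and the transcribed recording on top; the point of the sketch is just that the marketplace core — profiles, availability, price, and feedback — is ordinary, well-understood machinery.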

Note: This post is cross-posted from Jeff Kramer’s blog.


Esri acquires Geoloqi

by jonl on October 17, 2012

Amber Case and Aaron Parecki of Geoloqi

Congratulations to our colleague Amber Case, whose geocoding services company Geoloqi has just been acquired by Esri, “the world’s leader in mapping technology and geographic systems, to bring powerful next-gen location and mapping technology to the mobile and web app community.”


21st Century Acamedia, Come Join Us!

by joey on September 20, 2012

The academic of the 21st century is arriving and setting up camp. Their understanding of analytics and of cultural and social consumption, both in the meat and jacked in, has a fluidity that supersedes what the progression of the western European academic narrative once dismissed as crude…

Augmented Reality

A new narrative has begun. Academia has consumed itself to the point of becoming a commodity, no longer holding the traditional intrinsic capitalist value once used as a lure.

Academia has been forced to become more. More than a place to exchange knowledge. More than a place for research. More than a need for institutional affirmation.

Academics of the 21st century come from a 20th-century training, but with an outlook that sees beyond the book, beyond linearity, beyond the need for closure.

The capture of our narratives is intrinsically being replicated and distributed throughout the Network.

What is so exciting about the 21st-century Academic is a new journey to redefine education, learning, community, and participatory spaces.

Whether it is learning to hug again, love again, program embedded devices, work in the clouds, or embed yourself in a culture or society, augmented reality has commenced to the point of becoming reality.

These types of cybernetic work represent a transgression from the meat to the jacked-in, one that has served as a model for understanding the new Academic.

Are you a new Academic? Do you see a new Academia evolving in the 21st Century?