Post 319 – by Gautam Shah


Bienal de La Habana, Cuba 2012

Augmented Reality is the enhancement or supplementation of one’s current perception of reality. This is in contrast to virtual reality, where the real world is replaced with a simulated one. Today both are digitally mediated, but even without digital technology they have been part of our experience, in various measures, for ages. Augmented reality (mainly with digital media) has its origins as early as the 1950s and has progressed alongside virtual reality since then, but its most significant advances have come since the mid-1990s.


Realities have been augmented by altering perceptual capacities through the consumption of certain substances. The alteration served both ends: dulling or diffusing the perceptive faculties, and enhancing them. Pain diffusers, inducers, enhancers and bearers have long been known. These augmentations were neither rational nor consistently predictable. Virtual reality was used as part of magic ceremonies, in religion and in entertainment. Simulations were enforced through light and sound, as well as sleight of hand.

Up at the Mostra

In earlier days a play was mediated by an interpreter, or Sutradhar (conductor, in Sanskrit). The role could involve simple language translation, elaboration of complex philosophical content, or bridging of time elements. These interventions augmented the reality being enacted by compacting time-space. In the bioscope or silent-movie era, the story and music were performed live. Foreign-language movies, TV plays, programmes and presentations carry subtitles for translated dialogue, or audio, video and textual augmentative effects.

Multiple tickers

Nominally, augmentation occurs in real time and in one of two basic frames, where the context is rational or literal. Until now it has had a distinctive identity: the additional information about the environment and its objects is overlaid or under-laid with reference to the base frame. But this differentiation is likely to diminish in the near future.


Augmented reality goes a step further by including zoom-in and zoom-out effects to show, respectively, details and overall perspective views. This is further augmented by the use of wider-scope and panoramic views. The differential clarity between foreground and background, usual with glass-based lenses, can be eliminated with the use of charge-coupled devices.


A variety of devices, such as mobiles, iPads, computers, wrist watches, etc., use computer-generated sounds, graphics or video clips to provide additional information about products, spaces and places. Currently these are compilations offered by the device manufacturer or application providers. Many of these manifest as customized offers, but none recognizes the user’s changing needs or moods. Artificial intelligence will automatically figure out the behaviour of the subject (the user) and augment the experience of reality accordingly.


A person may not dwell in the real world all the time. One occasionally needs to visit a virtual or simulated domain, like an architectural 3D rendering, and see how it functions with augmented reality. Here the virtual reality is augmented with all the sensorial experiences. Typical of these are the echo and reverberation effects one hears while walking through a rendered space. This may not come first to architecture, but it has begun to enter games, sports and other learning simulators, where the subject gets vibrations, shocks and other touch-feel effects. In medical surgery, a surgeon can practise a procedure as if on a live being rather than on a cadaver (dead body).


The chief sensorial experience that constructs reality is visual perception. Smart eyeglasses or contact lenses can not only overlay textual and graphic information, but also ‘scope’ the view through selective zoom-in and zoom-out. They can also provide night vision or selective-spectrum vision. Artificial intelligence will be able to prejudge the nature of support the subject requires and tailor the augmented reality accordingly.


It is expected that augmented reality and virtual reality will converge. This will happen as soon as an interface begins to interact directly with our perceptual faculties.