The year is 2025 and there’s a raging snowstorm outside. The world is a pale shade of white and gray. You wake up and instinctively look around the bedroom to locate the amber dot glowing on your G-Glass#4 (fourth-generation) visor.
Your friends envy you. They may have more feature-packed visors; some even have Wizers (visors with AI built in). They talk excitedly about how the narrow AI in their current-gen Wizers is going to be upgraded to AGI, but you just smile to yourself, put on your G-Glass#4 and sink back into your pillow, looking out the window – the blank canvas of the world waits to be painted by your imagination.
The amber dot is glowing, and you slow-blink with both eyes. The inward-facing camera reads your eye gesture and opens the note.
Hey Dan – Join us at the beach? Co-ords are N 39° 1’12.2664, E 1° 28’55.7646.
You know the place like you know a familiar telephone number, even before the spinning Digital Globe zooms in and fills your G-Glass#4’s field of view. You have tele-traveled to a secluded, clothing-optional beach cove in Ibiza. You look around and spot five of your friends already there. Two are present in the same way you are, as Dirrogates – Digital Surrogates. The other three are there in person: the “naturals”.
You can’t feel the sun or the sand the way the “naturals” can, but that doesn’t matter. The metadata from the location has already streamed through and seamlessly adjusted the air-conditioning in your bedroom, and you have the option of turning off the sun-tan lamp and the ambient sound of the surf and waves lapping at the beach. Advanced frequency filtration reduces the volume of the high-pitched sea birds so you can hear your friends better.
This is why you are the envy of some of your friends who own other visors – they pay a premium to tele-travel in the Surrogate Reality world that Google painstakingly built over the previous decades.
Mesh-net is born:
The first version of this Surrogate Metaverse was simply called Google Earth, and its imagery and topography were updated only every few months or so… until the launch of Mesh-net changed everything. Now 3D meshes of the real world could be created at near-infinite resolution and in Euclideon-like detail, in real time, and be pin-registered to their real-world counterparts.
The high-resolution eyes-in-the-sky did much of the heavy lifting of capturing and draping live video over the real-time 3D geometry. The advanced G-Glass#4, thanks to its stereoscopic cameras and depth-from-stereo algorithms, could produce ground-level meshes of everything in a wearer’s field of view, including live meshes of people, draped with video of themselves – living organic paint.
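The depth-from-stereo step mentioned above rests on the classic pinhole relation Z = f·B/d: depth falls out of the focal length, the distance between the two cameras, and the pixel disparity of a feature between the left and right views. A minimal sketch, with illustrative numbers (the focal length and baseline are assumptions, not specs from the essay):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth Z = f * B / d.

    focal_px     -- camera focal length in pixels (assumed value)
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal pixel offset of a feature between the views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A hypothetical visor with a 700 px focal length and a 6 cm camera
# baseline, seeing a 14 px disparity, places the point at 3 metres.
z = depth_from_disparity(700.0, 0.06, 14.0)
```

Running this per matched feature over the whole image is what yields the ground-level mesh: every pixel with a known disparity becomes a 3D point.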
Initially, only the military had access to the technology, but with state governments and a growing number of allies passing unanimous resolutions to ward off terrorism, citizens had waived their privacy concerns and rights, which opened the door to widespread commercialization and the birth of Mesh-net.
Mixed Reality…to Surrogate Reality – The hard science:
Everyone is now aware of Augmented Reality – the next big media-technology marvel. While it previously resided only in university research labs, the emergence of cost-effective hardware – visors from Vuzix, Meta-Glass and Google Glass, and even iPhone and Android phones with digital compasses and accelerometers – has allowed Augmented Reality applications to reach the public. The ad industry and marketing agencies are one eager market, and have helped popularize the technology.
Augmented Reality has thus far come to mean the overlaying of computer-generated imagery (CGI) in perfect registration and orientation with a live view of the real world, usually via a device’s built-in camera. Most people are now familiar with turning on their smart-phone camera and running an AR application that superimposes useful, contextual information on the live camera view. Examples range from a list of nearby fast-food outlets to geo-tagged “tweets” from Twitter. Applications like Layar also allow the overlaying of 3D objects of interest.
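The “perfect registration” described above boils down to a projection: take a geo-tagged point, compute its compass bearing from the user, subtract the device heading reported by the digital compass, and map the angular offset into a screen column. A minimal sketch under stated assumptions – the function name, field of view and screen width are illustrative, not from any real AR SDK:

```python
import math

def screen_x_for_poi(user_lat, user_lon, poi_lat, poi_lon,
                     heading_deg, hfov_deg=60.0, screen_w=1280):
    """Map a geo-tagged point of interest to a horizontal screen position.

    Returns the pixel column where the overlay should be drawn, or None
    when the point lies outside the camera's horizontal field of view.
    """
    # Compass bearing from user to POI (degrees clockwise from north)
    lat1, lat2 = math.radians(user_lat), math.radians(poi_lat)
    dlon = math.radians(poi_lon - user_lon)
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    bearing = math.degrees(math.atan2(x, y)) % 360.0

    # Angular offset relative to where the camera is pointing,
    # wrapped into the range (-180, 180]
    offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > hfov_deg / 2:
        return None  # POI not in view; draw nothing
    # Linear mapping of angle to pixel column across the display
    return int(screen_w / 2 + (offset / (hfov_deg / 2)) * (screen_w / 2))
```

A point due north of a north-facing user lands at the screen centre; a point off to the east falls outside the 60° field of view and is skipped. Real AR toolkits refine this with accelerometer tilt and camera intrinsics, but the compass-and-bearing core is the same.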
If we could use the concept of Augmented Reality, mixed with Virtual Reality, at the scale of a town, city or even the world… we could in effect sit at home and tele-travel via a digital representation of ourselves – our ‘Dirrogates’ – in a superimposed world, while “interacting” with real people in the real world. This would be Mixed Reality, or Surrogate Reality.
Google Powered Surrogate Reality:
How could Google power such a Surrogate Reality experience? Google already has a virtual version of not only every street, town and city, but of the entire planet, in the form of Google Earth. With Google Earth you can zoom in and spin a “Digital Globe” on a PC to visit a geographically accurate digital representation of any city or country in the world, down to street level. A rich overlay drapes aerial satellite imagery of each location over the digital globe.
The Digital Globe from Google also has many ‘layers’ that can be turned on and off, such as traffic information and the locations of civic services. One of the most important is the 3D Buildings layer, which allows whole streets and city blocks to be represented in Google Earth as 3D models that can be navigated in real time. All these layers are geo-referenced at exact real-world co-ordinates. So, for instance, if we create a 3D building or a community park and place it at the same co-ordinates as its real-world counterpart, we have a location we can navigate to virtually – and be at the same place as someone who may be there in real life (IRL)!
As another example, if we create a replica 3D model of a supermarket or bookstore that exists in real life and place it at the exact same co-ordinates on the Digital Globe, we could “fly” to this virtual model and “enter” the building, as shown in the video above.
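What makes this co-presence work is that both the real visitor and the virtual one resolve their positions into the model’s geo-referenced frame. One common way to do that is a flat-earth approximation around the model’s anchor co-ordinates, which is accurate at street scale. A minimal sketch – the function and anchor values are illustrative assumptions, not from Google Earth’s actual internals:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, a common approximation

def to_local_metres(anchor_lat, anchor_lon, lat, lon):
    """Convert a geographic position to metres east/north of a
    geo-referenced anchor (e.g. the corner of a 3D building model),
    using a flat-earth approximation valid over short distances."""
    d_north = math.radians(lat - anchor_lat) * EARTH_RADIUS_M
    d_east = (math.radians(lon - anchor_lon)
              * EARTH_RADIUS_M * math.cos(math.radians(anchor_lat)))
    return d_east, d_north

# A visitor present in real life and a virtual visitor who navigate to
# the same co-ordinates resolve to the same spot in the model's frame.
real = to_local_metres(39.02, 1.48, 39.0201, 1.48)
virtual = to_local_metres(39.02, 1.48, 39.0201, 1.48)
```

Because both positions land at the same local offset (here, about 11 metres north of the anchor), the real person and the Dirrogate can be rendered standing in the same place.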
Case Study – Social Networking in a Surrogate Reality World:
How does this all tie in, and what are the benefits of mixing Virtual Reality and Augmented Reality? Take the case of the community park mentioned above. If we have a virtual model of such a park, and Google Earth adds an “avatar” layer, we could have a Second Life-style socializing and collaboration platform. Using Internet chat, voice and webcam, we could start a social interaction with another avatar at the same location.
In fact, Google experimented with such a virtual world in its project Lively, which has since been shut down. If you thought this would be an interesting way to meet people from different parts of the world via a very intuitive interface (far easier than navigating Second Life), imagine if you could interact with REAL PEOPLE present at that real-world location! Your Digital Surrogate would be socializing with a real human, driven by you from the comfort of your armchair.
Surrogate interaction with Humans:
So how would your Surrogate avatar interact with real people? This is where Augmented Reality comes in.
An excerpt from the hard science novel “Memories with Maya” explains…
The prof was paying rapt attention to everything Krish had to say. “I laser scanned the playground and the food-court. The entire campus is a low rez 3D model,” he said. “Dan can see us move around in the virtual world because my position updates. The front camera’s video stream is also mapped to my avatar’s face, so he can see my expressions.”
“Now all we do is not render the virtual buildings, but instead, keep Daniel’s avatar and replace it with the real-world view coming in through the phone’s camera,” explained Krish.
“Hmm… so you also do away with render overhead and possibly conserve battery life?” the prof asked.
“Correct. Using GPS, camera and marker-less tracking algorithms, we can update our position in the virtual world and sync Dan’s avatar with our world.”
“And we haven’t even talked about how AI can enhance this,” I said.
I walked a few steps away from them, counting as I went.
“We can either follow Dan or a few steps more and contact will be broken. This way in a social scenario, virtual people can interact with humans in the real world,” Krish said. I was nearing the personal space out of range warning.
“Wait up, Dan,” Krish called.
I stopped. He and the prof caught up.
“Here’s how we establish contact,” Krish said. He touched my avatar on the screen. I raised my hand in a high-five gesture.
“So only humans can initiate contact with these virtual people?” asked the prof.
“Humans are always in control,” I said.
Radius of ‘Contact’ – Human and Surrogate:
So, a human present at a real-world location can, via his/her smart-phone or visor, locate and interact with a “Surrogate”. Social interaction takes place just as it does today: the Surrogate hears the human, who speaks through the visor’s mic, and the human hears the Surrogate via the speaker or headphones. The human, of course, is at all times “seeing” the Surrogate via the augmented view of the see-through visor, so the human always knows where the Surrogate is. Communication can be initiated and maintained only while the Surrogate is within a certain radius-of-contact of the visor/smart-phone, and only after the user “clicks” or gestures to initiate contact with the Surrogate.
On establishing contact with the Surrogate to start social interaction, a 3D avatar of the human “spawns” in the superimposed world so that the Surrogate has visual confirmation – it can “see” the human. The accelerometer and compass in the smart-phone or visor track the human’s movement in the real world and update the position of the human’s avatar accordingly. It is then up to either the human or the Surrogate to stay in one place, or to move around and be followed by the other, in order to maintain the radius-of-contact for social interaction in their Digital Personal Space.
In the video above, a Surrogate is shown in a real-world location as he would appear when viewed through the display of a smart-phone or visor. By 2025, there is every reason to predict that the “Surrogate” will be of sufficiently high quality, and that humans – having grown up with video games and lifelike anime – will get over the “uncanny valley” effect that plagues today’s generation of CGI.
The future of Mesh-net and Dirrogate Reality:
Competitors tried back in 2015, and still try today, to build something as sophisticated as Mesh-net – an attempt to create parallel universes – and recently there have been attempts to hack into Mesh-net itself. Tele-traveling had a serious impact on transportation, and the resulting job losses had a knock-on effect on related businesses. But these incidents are declining as humans finally see the logic of living in a borderless world where resources are in abundance, thanks to advances in technology that allowed large-scale desalination of water, near-Earth-body and asteroid mining, and the dying out of warmongering mindsets.
This essay is meant to seed ideas, and is not to be interpreted in any way as endorsement or involvement from Google or the other organizations mentioned. All existing trademarks and copyrights are acknowledged and respected as belonging to their respective owners.
The concepts in this essay are from the novel “Memories with Maya” by Clyde DeSouza.