Junaio is one of the leading augmented reality (AR) browsers currently available. AR browsers use the camera on your mobile device to recognize locations, images, and natural features, and overlay additional contextual information onto the live camera feed. A few days ago, Junaio released a feature called SCAN. SCAN builds on the image and natural feature recognition capabilities already in the browser: it recognizes any augmented data, including natural features, images, QR codes, and bar codes, as long as it is in the Junaio database.
This innovation moves us one step closer to widespread adoption of AR technology in day-to-day activities, and here is why.
One thing we have seen is that a few seconds, or even milliseconds, can make all the difference in whether a technology gains broad adoption. For example, the tablet is always on: you can be up and surfing the web in a few seconds. Compare this to a laptop, where even waking from sleep mode can take tens of seconds, and potentially minutes (depending on how long it has been since your last clean install of Windows ;)
Given the choice between tablet and laptop, the tablet invariably wins out. Not because of its screen size, its easy-to-use keyboard, or its lack of support for Flash (I am writing this on an iPad), but because it is fast and convenient. It is not uncommon for my laptop to go unused for a week at a time.
Currently, image recognition with AR requires that you take a picture of something (think Google Goggles), which is then compared to similar pictures in a database. Imagine the same procedure in the not-too-distant future, but instead of requiring you to take a picture, your device is always on and in "search mode," processing the images and natural features around you, along with your geo-location and even the identity of those around you.
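At its core, that matching step is a nearest-neighbor lookup: the features extracted from a camera frame are compared against feature descriptors stored in a database. Here is a minimal sketch of the idea; the database entries, feature vectors, and threshold are all invented for illustration (real systems like Junaio's use far richer descriptors):

```python
import math

# Hypothetical "AR database": each known image is represented by a feature
# vector. A live camera frame is matched to the nearest stored entry.
DATABASE = {
    "movie_poster": [0.9, 0.1, 0.3],
    "qr_code":      [0.1, 0.8, 0.2],
    "storefront":   [0.4, 0.4, 0.9],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(frame_features, threshold=0.5):
    """Return the best database match for a frame, or None if nothing is close."""
    name, features = min(DATABASE.items(),
                         key=lambda item: distance(frame_features, item[1]))
    return name if distance(frame_features, features) < threshold else None

print(recognize([0.85, 0.15, 0.25]))  # close to the poster: "movie_poster"
print(recognize([0.0, 0.0, 0.0]))     # nothing nearby: None
```

An always-on device would simply run a loop like this over every frame, which is why the processor and tracking improvements discussed below matter so much.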
This alone would be information overload, so it would require a filter. This filter would need to be dynamic, taking into account your location, the day of week and time of day, your personal productivity habits, your historical preferences, and possibly even biometric information such as heart rate and body temperature, and serving up hyper-contextual information just for you.
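One simple way to picture such a filter is as a scoring function: each candidate piece of information is scored against the user's current context, and only the top few items are surfaced. The items, context fields, and weights below are entirely made up for illustration:

```python
# Hypothetical contextual filter: score each item against the user's current
# context (location, time of day, learned preferences) and keep the best.
def score(item, context):
    s = 0.0
    if item["location"] == context["location"]:
        s += 2.0                                          # nearby content matters most
    if context["hour"] in item["relevant_hours"]:
        s += 1.0                                          # right time of day
    s += context["preferences"].get(item["topic"], 0.0)   # learned taste
    return s

def filter_items(items, context, top_n=2):
    """Return the top_n highest-scoring items for this context."""
    return sorted(items, key=lambda i: score(i, context), reverse=True)[:top_n]

items = [
    {"name": "project dashboard", "location": "office", "relevant_hours": range(9, 18),  "topic": "work"},
    {"name": "lunch specials",    "location": "office", "relevant_hours": range(11, 14), "topic": "food"},
    {"name": "movie trailer",     "location": "cinema", "relevant_hours": range(18, 23), "topic": "fun"},
]
context = {"location": "office", "hour": 10, "preferences": {"work": 1.5}}
print([i["name"] for i in filter_items(items, context)])
# → ['project dashboard', 'lunch specials']
```

A real filter would learn these weights rather than hard-code them, but the shape of the problem is the same: rank everything the cameras and sensors see, and show only what the moment calls for.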
It would recognize when you were at work crunching on a project, facilitate access to relevant information and people, and de-emphasize non-relevant interruptions, creating a hive of focused activity. This AR of the future will recognize specific images and natural features unique to one's working environment, and provide always-on performance support calibrated to one's unique personal AR profile.
Consider such a system for training. The same personalized filter, calibrated instead for instruction, could guide a student through a series of exercises that build on one another and adjust dynamically based on the student's performance and biometrics. For example, you might practice giving a presentation in front of a virtual crowd, or develop your negotiation skills.
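The adjustment logic itself could be quite simple. As a rough sketch (the thresholds and the heart-rate signal are invented for illustration), difficulty steps up when the student performs well and stays calm, and eases off otherwise:

```python
# Hypothetical adaptive exercise sequencing: raise the difficulty when the
# student performs well and is calm; ease off when accuracy drops or a
# biometric signal suggests stress.
def next_difficulty(current, accuracy, heart_rate, resting_rate=70):
    stressed = heart_rate > resting_rate * 1.3   # crude stress proxy
    if accuracy > 0.8 and not stressed:
        return min(current + 1, 10)              # step up, capped at 10
    if accuracy < 0.5 or stressed:
        return max(current - 1, 1)               # ease off, floored at 1
    return current                               # hold steady

print(next_difficulty(5, accuracy=0.9, heart_rate=72))   # doing well: 6
print(next_difficulty(5, accuracy=0.9, heart_rate=120))  # stressed: 4
print(next_difficulty(5, accuracy=0.6, heart_rate=75))   # middling: 5
```

A virtual presentation coach could use exactly this loop: grow the crowd when you are composed, shrink it when your heart rate spikes.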
If this seems far-fetched, consider that the technology is already here and rapidly improving. On the software side, tracking technology (the ability of an AR application to recognize and differentiate natural features) is increasingly capable of processing thousands of points in real time. Tablet and smartphone manufacturers are incorporating faster processors and graphics capabilities. We are just scratching the surface of high-speed bandwidth here in the United States (Japan has 12x the average advertised bandwidth of the United States). Finally, different ways of visualizing information, beyond the confines of a mobile device, are coming to market. The "Light Touch" projector by Light Blue Optics turns any surface into a touch screen. Eyewear from companies like Vuzix, which projects an augmented reality image in front of you, is commercially available now. Augmented reality contact lenses are in development.
Always-on, multiple-image recognition is one piece of the AR puzzle that will fundamentally change the speed, access, and personalized nature of information in the future.