Google Glass, the highlight of Google’s annual developer conference on June 27, created waves of excitement with a colorful and elaborate presentation. Google Glass is designed to be a revolutionary mobile computing unit that could potentially be pitted against other mobile gadgets, including handheld cameras, smartphones and tablets. Google will ship the first batch of its high-tech specs in early 2013, exclusively to U.S. software developers who attended the conference. Those willing to shell out $1,500 per pair will also get access to Google’s research team behind the Glass. There is still no word on when, or if, Google Glass will be made available to general consumers.
The Google Glass prototype, called the “Explorer Edition,” was on full display in a nearly four-hour keynote showcasing its features in a live show involving skydiving and flipping motorcycles, among other stunts. All of the stunt performers were sporting Google Glass prototypes that recorded the action from a first-person view and televised it onto a large screen on the conference stage. Google aims to make consumer technology less intrusive by removing the hassle of pulling out a separate mobile device to record everyday happenings in photos and video. Although the gist of the Google Glass concept hints that the specs will provide users with information on demand, Google did not address that topic during the conference.
It is still a mystery what features the first version of Google’s computerized specs will pack. The latest updates to Google’s “Search by Image” service might be a legitimate hint. The new features allow users to upload images from their computers, laptops and mobile devices directly into Google’s search bar. After a visual analysis of the image, Google returns relevant images and information. A “Search by Image” query can be further refined by adding written input to the search. Furthermore, Google now provides short descriptive text about the uploaded image, which appears in a box to the right of the search results. Google is confident that the updated service offers more precision in recognizing objects in uploaded images. Admittedly, this is not an entirely original feature: “Knowledge Graph,” as Google named it when it was first introduced to the standard text search engine in May, has been adapted to work with images. Combined, these features might, in many cases, make clicking on search results unnecessary.
It would be a revolutionary step if Google tweaked this feature and made it part of the Google Glass platform. If Google could program its high-tech specs to upload real-time images and video to the “Search by Image” service and return instant information from Google’s search engine, Google Glass could provide users with live information on demand. Text boxes popping up next to real-life objects would make the writers of Mission: Impossible and RoboCop blush. The Google Glass prototype is already equipped with the necessary hardware, a tiny eye-facing screen and a video camera, which could potentially carry out such functions.
Unfortunately, since Google is not commenting, this is nothing but speculation. We will find out soon enough whether our hopes come true.