Google Lens is a prime example of Google's AI-first approach.
At Google's I/O developer conference, CEO Sundar Pichai said, "Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information." Lens is first being added to Google Photos and Google Assistant.
Here is how Lens works:
Point your phone at a flower, and Lens will identify its species with the help of Google Assistant.
Or point your camera at the sticker on a router, and Lens will read the network name and password and automatically connect you to the Wi-Fi network.
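Google hasn't published how Lens works internally, but the router-sticker demo presumably combines on-device text recognition with a simple parsing step that pulls the credentials out of the recognized text. A toy sketch of that parsing step, assuming a made-up OCR output string and hypothetical label formats:

```python
import re

# Hypothetical OCR output from a router's Wi-Fi sticker (made-up values,
# not from any real device)
sticker_text = "Network Name (SSID): HomeWiFi-2G\nWi-Fi Password: h7x!pass123"

# Extract the network name and password from the recognized text
ssid = re.search(r"SSID\):?\s*(\S+)", sticker_text).group(1)
password = re.search(r"Password:\s*(\S+)", sticker_text).group(1)

print(ssid)      # the network to join
print(password)  # the credential to use when connecting
```

In the real product the extracted credentials would then be handed to the phone's Wi-Fi settings; the sketch stops at the text-understanding step, which is the part Lens adds.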
Or point your camera at a restaurant, and Lens will show you information about the place, including its ratings, drawn from Google's Knowledge Graph.
Later on, Google showed how Lens would be integrated into Google Assistant. For instance, hold your camera up to a Japanese sign, tap the Lens icon, and ask "What does this say?", and Google Assistant will translate the text for you. Snap a picture of an event poster, and the Assistant can save the event when you tap "Add this to my calendar"; you can even book tickets for that event through the Assistant.
Pichai also showed how Google's algorithms can enhance photos. For instance, if you take a picture of a child playing baseball through a chain-link fence, Google will automatically do the hard work for you and remove the fence. Google can also enhance a photo taken in dim light by reducing its noise and blur.
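Google didn't detail its low-light pipeline on stage, but one standard technique behind this kind of enhancement is merging a burst of frames: the scene is the same in every frame while the sensor noise is random, so averaging the frames cancels much of the noise. A minimal NumPy sketch with a synthetic scene and simulated noisy burst (all values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dim ground-truth scene: a 64x64 grayscale gradient in [0, 1]
scene = np.tile(np.linspace(0.1, 0.3, 64), (64, 1))

# Simulate a burst of 8 noisy captures of the same scene
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(8)]

# Error of a single noisy frame vs. the true scene
single_error = np.abs(frames[0] - scene).mean()

# Averaging the burst cancels the zero-mean noise
merged = np.mean(frames, axis=0)
merged_error = np.abs(merged - scene).mean()

print(merged_error < single_error)  # the merged frame is closer to the scene
```

Averaging an 8-frame burst cuts the noise standard deviation by roughly a factor of √8; real pipelines add alignment and motion handling on top of this basic idea.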
Though there was no announcement of a launch date, Google Lens looks like one of the most compelling technologies Google has shown. It is a striking practical application of AI to vision-based computing, and it seems promising. We all hope Google Lens will live up to the expectations of Google's AI-first approach.