Google announces new visual features on Search and Lens

Google is rolling out new features for Lens and Search that use the camera, computer vision, and augmented reality to help users gather information in a more engaging way. Starting later this month, Google Search users will be able to view certain items in 3D and AR directly from the Knowledge Panel.

“We’re also working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair and more to surface their own content in Search,” said Aparna Chennapragada, VP, Google Lens and AR. “So, whether you’re studying human anatomy in school or shopping for a pair of sneakers, you’ll be able to interact with 3D models and put them into the real world, right from Search.”

Google Lens already uses machine learning and computer vision to answer questions and surface information about a picture. Users will now be able to point Lens at a restaurant menu, have it identify each dish, and see a preview of the finished item. Google Lens will also automatically translate text in more than 100 languages when the user holds the camera over the original wording.

“We’re also working on other ways to connect helpful digital information to things in the physical world. For example, at the de Young Museum in San Francisco, you can use Lens to see hidden stories about the paintings, directly from the museum’s curators beginning next month,” continued Chennapragada. “Or if you see a dish you’d like to cook in an upcoming issue of Bon Appétit magazine, you’ll be able to point your camera at a recipe and have the page come to life and show you exactly how to make it.”

Lens will soon be able to help people who struggle with literacy read signs and forms. When the user hovers their phone camera over any text, Google Lens can read the words out loud, highlighting each word as it is spoken to keep the user on track. This feature is launching first in Google Go and will come to more devices later.
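For readers curious how a read-aloud flow of this kind might be approximated, the rough sketch below chains optical character recognition with text-to-speech. It uses the open-source pytesseract and pyttsx3 packages and a saved photo standing in for the live camera frame; these are assumptions for illustration, not the technology Google has said Lens uses.

```python
# Minimal sketch of a "read text aloud" pipeline, assuming pytesseract (OCR)
# and pyttsx3 (offline text-to-speech) are installed, plus a local image file
# standing in for a live camera frame.

from PIL import Image
import pytesseract  # wrapper around the Tesseract OCR engine
import pyttsx3      # offline text-to-speech engine


def read_aloud(image_path: str) -> None:
    # 1. Extract the text from the captured frame.
    text = pytesseract.image_to_string(Image.open(image_path))

    # 2. Speak each recognized word, printing it as a stand-in for the
    #    on-screen highlighting the Lens feature provides.
    engine = pyttsx3.init()
    for word in text.split():
        print(f"[highlight] {word}")
        engine.say(word)
        engine.runAndWait()


if __name__ == "__main__":
    read_aloud("sign.jpg")  # hypothetical photo of a sign or form
```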
