In today’s Google I/O 2017 keynote, the company is touting its advancements in machine learning and artificial intelligence — one of which is a new visual search tool called Google Lens.
Google CEO Sundar Pichai described Lens as “a set of vision-based computing capabilities that can understand what you’re looking at and help you take action.” The tool, he said, will initially be available in Google Assistant and Google Photos (both of which received several other updates, too).
In one example, Pichai showed Lens being able to identify a flower from a smartphone’s camera and offer additional information on the flower like you’d find in a Knowledge Panel. In another, he took a photo of a restaurant and Lens was able to pull up business details like you’d find via a Google Maps search — phone number, star ratings and more. Later in the keynote, Google’s Scott Huffman showed Lens working in tandem with Google Assistant. After taking a photo of a theater/club marquee showing an upcoming performance, Lens and Assistant were able to identify the band listed on the marquee and offer an option to buy tickets to the show on Ticketmaster.
At first glance, Lens is reminiscent of Google Goggles, a 2009 technology that offered the ability to run searches based on a smartphone photo. But Goggles was mostly limited to identifying something; Lens can not only identify what's in the photo, but also supply added context, such as a restaurant's phone number, its rating score and so forth (as shown in the GIF Google tweeted during the keynote).
We’ll have much more coverage coming soon from Google I/O. You can also catch up via our Google I/O 2017 keynote live blog on Marketing Land.