In January, Stone Temple Consulting released a virtual assistant consumer survey showing the majority of respondents wanted the assistants to provide “answers” rather than conventional search results. Today, the firm published a follow-up study that measured the relative accuracy of the four major assistants.
It compared results for “5,000 different questions about everyday factual knowledge” on Google Home, Alexa, Siri and Cortana, using traditional Google search results as a baseline for accuracy. The study’s top-line results are summarized below.
As one might have anticipated, the Google Assistant answered more questions, and answered them correctly more often, than its rivals. Cortana came in second in questions attempted, followed by Siri and Alexa. Among the questions it did attempt, however, Amazon’s Alexa was the second most accurate assistant, while Siri had the highest percentage of wrong answers of the four. (Apple is reportedly “finalizing” its Amazon Echo competitor.)
Here’s Stone Temple Consulting’s summary of the outcome:
Google still has the clear lead in terms of overall smarts with both Google search and the Google Assistant on Google Home. Cortana is pressing quite hard to close the gap, and has made great strides in the last three years. Alexa and Siri both face the limitation of not being able to leverage a full crawl of the web to supplement their knowledge bases. It will be interesting to see how they both address that challenge.
One of the more interesting observations in the report concerns featured snippets. Cortana integrated more featured snippets than any of the other assistants, including Google Home, though Google search itself surfaced more. Siri and Alexa lagged far behind in this category, though both aim to rely on third parties to deliver “answers” and transactional capabilities.
There’s a good deal more discussion of both the results and the study’s methodology on the Stone Temple blog.