When I tell people I work on Google Search, I’m sometimes asked, “Is there any work left to be done?” The short answer is an emphatic “Yes!” There are countless challenges we’re trying to solve so Google Search works better for you. Today, we’re sharing how we’re addressing one many of us can identify with: having to type out many queries and perform many searches to get the answer you need.
Take this scenario: You’ve hiked Mt. Adams. Now you want to hike Mt. Fuji next fall, and you want to know what to do differently to prepare. Today, Google could help you with this, but it would take many thoughtfully considered searches — you’d have to search for the elevation of each mountain, the average temperature in the fall, difficulty of the hiking trails, the right gear to use, and more. After a number of searches, you’d eventually be able to get the answer you need.
But if you were talking to a hiking expert, you could ask one question — “what should I do differently to prepare?” — and you’d get a thoughtful answer that takes into account the nuances of your task at hand and guides you through the many things to consider.
This example is not unique — many of us tackle all sorts of tasks that require multiple steps with Google every day. In fact, we find that people issue eight queries on average for complex tasks like this one.
Today's search engines aren't quite sophisticated enough to answer the way an expert would. But with a new technology called Multitask Unified Model, or MUM, we're getting closer to helping you with these types of complex needs. So in the future, you’ll need fewer searches to get things done.
Helping you when there isn’t a simple answer
MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models. And MUM is multimodal, so it understands information across text and images and, in the future, can expand to more modalities like video and audio.
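Google hasn’t published MUM’s code, but the core idea (one shared model whose input sequence mixes tokens from several modalities, trained across many tasks at once) can be sketched at a very high level. The Python below is a purely illustrative, minimal sketch: every name in it (Token, encode_text, encode_image, unified_sequence) is a hypothetical stand-in rather than a Google API, and the “embeddings” are toy placeholders for real learned vectors.

```python
from dataclasses import dataclass

@dataclass
class Token:
    modality: str    # "text" or "image"
    embedding: list  # stand-in for a dense learned vector

def encode_text(query: str) -> list[Token]:
    # Stand-in for a subword tokenizer plus an embedding lookup.
    return [Token("text", [float(len(word))]) for word in query.split()]

def encode_image(pixels: list[float]) -> list[Token]:
    # Stand-in for patch embeddings, as in a vision Transformer.
    return [Token("image", [p]) for p in pixels]

def unified_sequence(query: str, pixels: list[float]) -> list[Token]:
    # Both modalities land in one sequence that a single shared
    # Transformer backbone would attend over; task-specific heads
    # (understanding, generation, ranking, ...) would share that backbone.
    return encode_text(query) + encode_image(pixels)

tokens = unified_sequence("what should I do differently to prepare", [0.1, 0.5])
print(len(tokens), [t.modality for t in tokens])
```

The only design point this sketch tries to convey is that, in a model like MUM, multimodality and multitasking happen inside one model rather than in separate systems stitched together, which is what would let a single question draw on text, images, and cross-lingual knowledge at once.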
Take the question about hiking Mt. Fuji: MUM could understand you’re comparing two mountains, so elevation and trail information may be relevant. It could also understand that, in the context of hiking, to “prepare” could include things like fitness training as well as finding the right gear.