On Tuesday at Google I/O, the search ad firm's annual developer conference, executives made the case for a world where multimodal machine learning models connect the dots and fill in the blanks. The feature – offered previously as a Search Labs experiment – is rolling out to US search users today, with more countries to follow.
"Under the hood, our custom Gemini model acts as your AI agent using what we call multi-step reasoning," explained Reid. "It breaks your bigger question down into all its parts, and it figures out which problems it needs to solve and in what order."

"Building on our Gemini model, we've developed agents that can process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this for efficient recall. We've also enhanced how they sound, with a wider range of intonations."
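The pipeline described above – encode video frames, merge them with speech input into a single timeline of events, cache it for later recall – can be sketched in miniature. Everything here (`Event`, `EventTimeline`, the keyword-based `recall`) is an illustrative assumption, not Google's actual API:

```python
from dataclasses import dataclass, field
from bisect import insort

@dataclass(order=True)
class Event:
    """One observation from either the video or speech stream."""
    timestamp: float
    modality: str = field(compare=False)     # "video" or "speech"
    description: str = field(compare=False)  # e.g. a frame caption or transcript line

class EventTimeline:
    """Merges per-frame captions and speech transcripts into one
    time-ordered cache that can be queried later."""
    def __init__(self):
        self.events: list[Event] = []

    def add(self, timestamp: float, modality: str, description: str) -> None:
        # insort keeps the cache sorted by timestamp as events stream in
        insort(self.events, Event(timestamp, modality, description))

    def recall(self, keyword: str) -> list[Event]:
        # Stand-in for semantic retrieval: a simple keyword match
        return [e for e in self.events
                if keyword.lower() in e.description.lower()]

timeline = EventTimeline()
timeline.add(3.2, "video", "glasses placed on the desk next to a red apple")
timeline.add(1.0, "speech", "user asks what the code on the whiteboard does")

hits = timeline.recall("glasses")
print(hits[0].timestamp, hits[0].description)
```

In a real system the `recall` step would presumably be an embedding lookup rather than string matching, but the shape is the same: interleave both modalities by timestamp, then query the cache instead of re-processing the video.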
Finally, in response to a query about its location, the agent determined that it was in the King's Cross area of London – apparently based on the view from the office window. It also remembered where the user had left their glasses – which many users might find worth the price of admission on its own.