AI Assistant in the Real World: Google combines a large language model with an everyday robot

Image: Google

In the PaLM-SayCan project, Google is combining current robotic technology with advances in large language models.

Advances in large-scale AI language models have so far mostly shown up in our digital lives, such as translating text, generating text and images, or behind the scenes, when technology platforms use AI language analysis to moderate content.

In the PaLM-SayCan project, various divisions of Google are now combining the company's most advanced large language model to date with an everyday robot that may someday help around the house: a real-world assistant. But that will take a while longer.

The large language model meets the everyday robot

Google unveiled the giant PaLM language model in early April, attributing "revolutionary capabilities" to the model in language understanding and, in particular, in reasoning. PaLM stands for "Pathways Language Model," making it a building block of Google's grand strategy for next-generation AI that can efficiently handle thousands or millions of tasks.

PaLM has an understanding of cause and effect, so it can solve simple text tasks and even explain simple jokes. The team achieved this new level of performance mainly through particularly extensive AI training: with 540 billion parameters, the model is one of the largest of its kind. The larger the model, the more flexibly it handles language, according to the researchers.

Image: Google

Since 2019, Google has been doing more in-depth research on robots in conjunction with AI. At the end of 2021, the company unveiled the everyday robot that is now used as part of the PaLM-SayCan project. It wanders around Google's offices and can, for example, sort trash, clean tables, move chairs, and carry items. It orients itself with the help of computer vision and a radar system.

PaLM can break down and prioritize activities

For the combination of language AI and everyday robots, Google's research team particularly relies on PaLM's "chain of thought" prompting. In this process, the model interprets an instruction, generates possible steps to execute it, and evaluates the likelihood that each action advances the overall task. The robot then performs the action rated highest by the language model.
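The selection loop described above can be sketched roughly as follows. Everything here is a hypothetical stand-in: the skill names, the `llm_usefulness` and `affordance` functions, and their toy scores are illustrative only; the real system scores candidate skills with PaLM and learned robotic value functions.

```python
# Rough sketch of a SayCan-style action-selection loop.
# All functions and scores are hypothetical stand-ins for
# PaLM and the robot's learned affordance estimates.

def llm_usefulness(instruction: str, skill: str) -> float:
    """Stand-in for the language model's estimate that `skill`
    is a useful next step toward completing `instruction`."""
    toy_scores = {
        "find an energy bar": 0.6,
        "find an apple": 0.3,
        "wipe the table": 0.05,
    }
    return toy_scores.get(skill, 0.01)

def affordance(skill: str) -> float:
    """Stand-in for the robot's estimate that it can actually
    complete `skill` from its current state."""
    toy_scores = {
        "find an energy bar": 0.9,
        "find an apple": 0.8,
        "wipe the table": 0.95,
    }
    return toy_scores.get(skill, 0.0)

def pick_next_skill(instruction: str, skills: list[str]) -> str:
    # The skill must be both useful ("say") and executable ("can"),
    # so the two scores are multiplied and the maximum is chosen.
    return max(skills, key=lambda s: llm_usefulness(instruction, s) * affordance(s))

skills = ["find an energy bar", "find an apple", "wipe the table"]
print(pick_next_skill("bring me an energizing snack", skills))
# "find an energy bar" wins: 0.6 * 0.9 = 0.54 beats the alternatives
```

The multiplication is the key design choice: a step the language model loves but the robot cannot perform scores low, and so does a feasible step that does not help the task.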

In everyday life, instructions to the robot could be phrased more casually, and conversations would become more natural: for example, if you ask for an energizing snack, the robot prefers to bring an energy bar, but alternatively has an apple, a sugar-free energy drink with taurine, or a lemonade on its menu.

Image: Google

A possible future is a language-controlled Google robot that one day helps us with daily activities in our homes. According to the research team, however, many problems in mechanics and intelligence remain to be solved before then.

The PaLM-SayCan robot will therefore remain a test project in Google's offices for the time being. However, the combination of large language models and robotics has "enormous potential" for future robots tailored to human needs, the project team writes.

Google shows more demo scenarios on the official website for PaLM-SayCan.
