• Google uses PaLM 2 to power 25 products, including its conversational AI assistant, Bard.
• PaLM 2, part of a family of large language models (LLMs), is trained to do next-word prediction, outputting the most likely text to follow a human-written prompt.
Google has unveiled PaLM 2, a new language model designed to improve language translation, “reasoning” and coding capabilities.
During the Google I/O 2023 event, the tech giant confirmed that it already uses PaLM 2 to power 25 products, including its conversational AI assistant, Bard.
It also powers Med-PaLM 2, a medically tuned model that can answer questions and summarise insights from dense medical texts.
Google said the model is heavily trained on multilingual text and demonstrates advanced proficiency in logic, common sense “reasoning” and mathematics.
“PaLM 2 was trained on publicly available source code datasets, making it more efficient and faster than previous models,” the company said.
PaLM 2, part of a family of large language models (LLMs), was also trained to do next-word prediction, outputting the most likely text to follow a prompt entered by a human.
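As a toy illustration of the idea (not PaLM 2's actual architecture, which is a large neural network), next-word prediction can be sketched with a simple bigram model that returns the word most frequently observed after the prompt's last word in a tiny corpus:

```python
# Toy next-word prediction: count which word follows which in a
# small corpus, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran on the grass".split()

# Map each word to a counter of the words seen immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" (seen twice, vs "mat"/"grass" once each)
```

A real LLM replaces the frequency table with a neural network that scores every word in its vocabulary given the entire preceding context, but the input-output contract is the same: prompt in, most likely continuation out.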
The model follows the original PaLM, which Google announced in April 2022.
PaLM refers to the “Pathways Language Model,” where “Pathways” is a machine learning technique created at Google.
Speaking during the event, Google CEO Sundar Pichai said PaLM 2 comes in four sizes, namely Gecko, Otter, Bison and Unicorn.
Gecko is the smallest and, according to Google, is fast enough to power interactive applications directly on mobile devices, even when offline.
“This versatility means PaLM 2 can be fine-tuned to support entire classes of products in more ways, to help more people,” the tech giant said.
Google DeepMind VP Zoubin Ghahramani said PaLM 2 is trained on multilingual text spanning more than 100 languages.
“This has significantly improved its ability to understand, generate, and translate nuanced text such as idioms, poems, and riddles. This is across a wide variety of languages which is a hard problem to solve,” Ghahramani said.
“PaLM 2 also passes advanced language proficiency exams at the ‘mastery’ level.”
He also said the model’s wide-ranging dataset includes scientific papers and web pages that contain mathematical expressions.
This allows it to demonstrate improved capabilities in logic, common sense “reasoning” and mathematics.
Since March, Google has been previewing the PaLM API with a small group of developers.
To use PaLM 2, developers can sign up for access to the model, while users in the US and 180 other countries can try it as part of Google Bard.
Google noted that customers can also use PaLM 2 in Vertex AI with enterprise-grade privacy, security and governance.
Pichai said Google is committed to releasing helpful and responsible AI tools as it works to build its best foundation models yet.
“Our Brain and DeepMind research teams have achieved many defining moments in AI over the last decade, and we’re bringing together these two world-class teams into a single unit, to continue to accelerate our progress,” he said.
He further mentioned that the company is working on Gemini, a multimodal model designed to be highly efficient at tool and API integrations.
It will also be built to enable future innovations like memory and planning.
“Gemini is still in training, but it’s already exhibiting multimodal capabilities never before seen in prior models,” Pichai said.
“Once fine-tuned and rigorously tested for safety, Gemini will be available at various sizes and capabilities, just like PaLM 2, to ensure it can be deployed across different products, applications, and devices for everyone’s benefit.”