AI

Google’s PaLM 2 vs OpenAI’s GPT-4

Here’s what you need to know

In Summary

•PaLM 2 is Google’s response to OpenAI’s recent GPT-4 model and Microsoft’s integration of it in Bing Chat, as the race to AI dominance reaches its peak.

•PaLM 2 is trained in over 100 languages and a variety of domains, such as mathematics, science, programming, and literature.

Google's new language model, PaLM 2. Image: COURTESY

The AI narrative is increasingly about who dominates the space. Earlier this month, Google unveiled Bard AI’s new engine, PaLM 2.

This is the tech giant’s new language model designed to improve language translation, “reasoning” and coding capabilities.

PaLM 2 is Google’s response to OpenAI’s recent GPT-4 model and Microsoft’s integration of it in Bing Chat, as the race to AI dominance reaches its peak.

Safe to say that Google is now picking up the pace after experiencing a number of issues with Bard AI.

The experimental, conversational chat service was unveiled in February but quickly attracted controversy.

Bard AI was released with a lightweight version of Google’s initial Language Model for Dialogue Applications (LaMDA).

Pathways Language Model

PaLM 2 has now replaced LaMDA and is powering 25 of Google’s products.

Notably, PaLM 2’s advanced AI capabilities allow specialised models to be created and trained for different fields, such as Sec-PaLM 2 and Med-PaLM 2, which are tailored to cybersecurity and medicine, respectively.

PaLM 2, which belongs to the family of large language models (LLMs), is trained to do next-word prediction, outputting the most likely text to follow a prompt entered by a human.
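
To make the idea of next-word prediction concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that counts which word tends to follow which in a tiny made-up corpus and then picks the most frequent follower. The corpus and function names are invented for this example; real LLMs like PaLM 2 and GPT-4 instead use huge neural networks trained on vastly larger datasets.

from collections import Counter, defaultdict

# Toy corpus; real LLMs are trained on enormous text datasets.
corpus = "the model predicts the next word and the model learns from text".split()

# Count how often each word follows another (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    # Return the most likely next word seen after `word` in the corpus.
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "model", the most frequent follower of "the"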

LLMs are trained on massive datasets of text and code and can be used for various tasks such as code generation, machine translation and other natural language processing (NLP) tasks.

PaLM 2 follows up on the initial PaLM, which Google announced in April 2022.

PaLM refers to the “Pathways Language Model,” where “Pathways” is a machine learning technique created at Google.

Similarly, in March, OpenAI officially launched GPT-4, which Microsoft had confirmed ahead of time.

As undoubtedly one of the most powerful LLMs, GPT-4 powers apps like ChatGPT.

The biggest question for most of us right now is: between PaLM 2 and GPT-4, which is superior?

Also, why PaLM 2? Why GPT-4?

How is PaLM 2 different from GPT-4 and is PaLM 2 the game changer for Google in the AI race?

Size of the LLM

Just as OpenAI is yet to reveal the size of GPT-4, Google is yet to reveal exactly how big PaLM 2 is.

A lot of assumptions are being made about the parameter count, but feel free to go through Google’s 91-page PaLM 2 technical report.

The size of a model is measured by the number of parameters: the numerical values that determine how the model processes input and generates output.

The more parameters a model has, the more complex and powerful it is, but also the more computationally expensive and difficult to train.

Simply put, parameters are what the model learns.
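
As a rough illustration of what a parameter is, the sketch below builds a tiny neural network in PyTorch (assuming PyTorch is installed; the layer sizes are invented for the example and have nothing to do with PaLM 2’s or GPT-4’s actual architecture) and counts its learned weights:

import torch.nn as nn

# A tiny toy network; production LLMs stack far larger layers of this kind.
model = nn.Sequential(
    nn.Embedding(num_embeddings=1000, embedding_dim=64),  # 1,000 x 64 weights
    nn.Linear(64, 64),                                     # 64 x 64 weights + 64 biases
    nn.Linear(64, 1000),                                   # 64 x 1,000 weights + 1,000 biases
)

# Each parameter is one learned numerical value; the original PaLM had ~540 billion.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # 133,160 for this toy model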

OpenAI’s GPT-3 arrived in 2020 with 175 billion parameters, whereas Google’s original PaLM has 540 billion parameters.

However, PaLM 2 has an advantage over GPT-4: it comes in smaller sizes tailored to particular applications and to devices that lack significant computing power.

PaLM 2’s submodels with different sizes include Unicorn (largest), Bison, Otter and Gecko (smallest).

Speaking during the I/O developers event, Google and Alphabet CEO Sundar Pichai noted that Gecko can work on mobile devices as it is fast enough for great interactive applications on-device even when offline.

“This versatility means PaLM 2 can be fine-tuned to support entire classes of products in more ways, to help more people,” he said.

In over 100 languages

The two LLMs are trained on different data.

Data is the source of knowledge and skills for the models, and it affects their performance and generalisation ability.

The more diverse and high-quality data a model is trained on, the more versatile and accurate it is.

While Google hasn't revealed the size of PaLM 2's training dataset, the company said in its report that the dataset is significantly larger than the one used to train the original PaLM.

Similarly, OpenAI made no assertion about the size of the training dataset when unveiling GPT-4.

PaLM 2 is trained in over 100 languages and a variety of domains, such as mathematics, science, programming, and literature.

It also uses a curated dataset that filters out low-quality or harmful text, such as spam, hate speech, or misinformation.

Capabilities

PaLM 2 also uses a technique called Pathways, which allows it to learn from multiple sources of information and combine them in a coherent way.

This helps it better understand and generate text, and improves its translation capabilities.

GPT-4 is trained on a wider variety of data than PaLM 2, covering almost all domains and languages available on the internet.

Additionally, Google has claimed that PaLM 2 has improved reasoning and logic capabilities over GPT-4.

It can solve advanced mathematical problems, explain its steps, and provide diagrams.

According to Dataconomy, it can also write and debug code in over 20 programming languages and provide documentation in multiple languages.

It can also generate natural language text for various tasks and domains, such as translation, summarisation, question answering, chatbot conversation, providing up-to-date information, and more.

Capabilities refer to what LLMs can do with the text they generate.

They depend on both the size and the data of the models, as well as the tasks they are fine-tuned for.

Fine-tuning is the process of adapting a general model to a specific task or domain by training it on a smaller dataset relevant to that task or domain.
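
A minimal sketch of that idea, again in PyTorch, is shown below. Everything here is hypothetical and scaled down to a toy: a stand-in “pretrained” network is frozen, a small task-specific head replaces its final layer, and only that head is trained on a tiny domain dataset. Real fine-tuning of models like PaLM 2 or GPT-4 involves far larger models, datasets and infrastructure.

import torch
from torch import nn, optim

# Stand-in for a pretrained general-purpose model (hypothetical toy network).
pretrained = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Freeze the general layers and swap in a task-specific head,
# so only the new head adapts to the smaller, domain-specific dataset.
for param in pretrained.parameters():
    param.requires_grad = False
pretrained[-1] = nn.Linear(32, 2)  # new head for a 2-class domain task

optimizer = optim.Adam(pretrained[-1].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny random "domain dataset", used only to make the sketch runnable.
inputs, labels = torch.randn(64, 16), torch.randint(0, 2, (64,))

for epoch in range(3):  # a few passes over the small task-specific dataset
    optimizer.zero_grad()
    loss = loss_fn(pretrained(inputs), labels)
    loss.backward()
    optimizer.step()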

PaLM 2 has improved capabilities in logic and reasoning thanks to its broad training in those areas.

However, GPT-4 has more versatile capabilities than Google’s PaLM 2, as it can generate natural language text for almost any task.

Some of them include translation, summarization, answering questions, text completion, generation, analysis, extraction and even text paraphrasing.

Google’s LLM can be accessed through its bot, Bard AI, whereas OpenAI’s can be accessed through ChatGPT.

It is worth noting that Bard AI is freely available across 180 countries, meaning PaLM 2 is not locked behind an API or a paywall, while GPT-4 is behind a paywall in ChatGPT Plus.

Free users only get access to GPT-3.5, but that doesn’t mean users can’t still access GPT-4 for free.

Through Microsoft Bing AI Chat, now available without a waitlist, anyone with a Microsoft account can freely access GPT-4, the model behind ChatGPT’s paid upgrade.

In mid-May, Microsoft also did away with the mandatory sign-in.

Highly efficient

Choosing which LLM to go with is completely up to you. If you need one that is strong on logic and reasoning and has a “Google it” option, then PaLM 2 it is.

If you need one that has proved itself in generating text and is fast while at it then look no further than GPT-4.

In the AI scene, Google has proven that it is not backing down from this battle: during its I/O 2023 developer event, the tech giant said it is working on Gemini, a multimodal model designed to be highly efficient at tool and API integrations.

Pichai mentioned that it will be built to enable future innovations like memory and planning.

“Gemini is still in training, but it’s already exhibiting multimodal capabilities never before seen in prior models,” Pichai said.

“Once fine-tuned and rigorously tested for safety, Gemini will be available at various sizes and capabilities, just like PaLM 2, to ensure it can be deployed across different products, applications, and devices for everyone’s benefit.”

The only way to find out which LLM works best for you is to try them.
