Google I/O was all about flexing the company’s AI muscles

In an open-air venue akin to a sports stadium, Google CEO Sundar Pichai addressed a gathering of developers and the media at yesterday evening’s Google I/O event.

“As you may have heard, AI has had a very busy year,” the executive began, a humorous understatement opening a keynote meant to show that when it comes to AI, Google means serious business. At last year’s I/O keynote, Pichai ran through improvements to company staples such as Google Translate, Maps and YouTube, along with new services like Google’s flood forecasting tool.

This year was almost exclusively about its AI products, underscoring the importance that Google, and by extension Big Tech, is placing on generative artificial intelligence and, most importantly, on the money it can make them.

Google joined the race for generative AI late. Microsoft had already invested billions in ChatGPT maker OpenAI by the time the Googleplex began furiously moulding its existing experimental large language model, LaMDA, into something resembling a chatbot.

Now Bing has a fully integrated chatbot to bolster its search functions, and Microsoft is looking to introduce ChatGPT-like responses into its other products, like Skype and those found in Microsoft 365.

Unlike its Google rival, Bing’s chatbot also enjoyed a wider rollout and was available in South Africa long before Bard.

At I/O, Google was playing catch-up, and it wanted to make a big impression on investors. It showed off not only available and upcoming products and services powered by the company’s generative AI models, but also future services, and new and improved models that won’t see the light of day for some time yet.

The entire first portion of the keynote had an air of “Not only are we back in this race, but we will win the war even if we lost the first battle.”

There was also a feeling that the company was moving away from the mistakes of the past. There was no mention of LaMDA, with which the company first built its Bard chatbot.

Instead, the new hotness is called PaLM 2 – already in its second iteration, even though the original PaLM never saw a wide public release – a “next-generation language model designed to improve language translation, reasoning, and coding capabilities,” the company explains in a press release.

This model has apparently been trained heavily on multilingual text and is claimed to already demonstrate advanced proficiency in “logic, common sense reasoning and mathematics.”

It will come in four “sizes” depending on what it is needed for, what applications it powers and what devices it runs on, and it will be Google’s hero generative language model going forward. Pichai said it will power 25 new Google products and features, including Bard and a medical competency model called Med-PaLM 2.

With PaLM 2 replacing LaMDA in Bard, the search engine-bound chatbot now supports more languages, such as Japanese and Korean, and has been released in 180 countries – now, finally, including South Africa.

“As the platform expands, Google will focus on maintaining high standards for quality, local nuances and adherence to AI principles,” Google says.

Bard will be implemented into the tech giant’s suite of apps, including Gmail, Docs, Drive, Maps and more, where its integrative properties will help users compose emails. Bard will also suggest responses in Google’s Messages app when you need help – for example, when asking someone out on a date.

“Magic Compose, a new Messages by Google feature powered by generative AI, can help you add an extra spark of personality to your conversations. The feature offers suggested responses based on the context of your messages and can even transform your writing into different styles.”

But what about Search? The implementation of a generative AI chatbot into search bars was a major turning point for Microsoft and Bing. Google’s own foray into this is still in the testing phase with Search Labs.

You can sign up now to see it for yourself, but the company seems keen to take it slow and cautious, probably to avoid past blunders.

It is an approach that was put into words several times during the keynote: “bold and responsible.”

In the meantime, Google is banking on diverse use cases for its AI, including the generation and editing of images. Magic Editor is a new feature that lets you use prompts to have Google’s AI edit your pictures, no knowledge of Photoshop or similar tools required.

“Users can selectively edit specific parts of an image, such as the subject, sky, or background, for more control over the final appearance of their photos.”

If it is as impressive in real life as in the mock-up below, it will surely prove very popular. It is slated for a wider launch later in 2023.

Google Magic Editor in action.

Additionally, Google is bringing its AI to Android 14 in an effort to increase customisation. Users will be able to customise their lock screens and clocks, choose new themes, and even pick AI-generated wallpapers.

Finally, as part of the “responsibility” side of the company’s greater AI plans, Google is launching initiatives that let users see which images were created by AI and which by humans.

“The ‘About This Image’ tool helps users evaluate the reliability of visual content found online by providing important contextual information, such as when an image was first indexed by Google, its original appearance, and other online occurrences,” it says.

This will be a big help for digital artists, who have struggled with AI-generated art of late, with some claiming they are losing commission business to generation tools such as Stable Diffusion and Midjourney.

It will also help users distinguish fake news from real news.

Many of these features, products and services are still in the offing, but it is clear that Google wants to take its AI far. During the keynote, Pichai announced that AI teams from Google Brain will be joining forces with teams from its DeepMind group.

“By creating Google DeepMind, I believe we can get to that future faster. Building ever more capable and general AI, safely and responsibly, demands that we solve some of the hardest scientific and engineering challenges of our time,” wrote DeepMind CEO Demis Hassabis in an open letter to employees.

“The research advances from the phenomenal Brain and DeepMind teams laid much of the foundations of the current AI industry, from Deep Reinforcement Learning to Transformers, and the work we are going to be doing now as part of this new combined unit will create the next wave of world-changing breakthroughs.”

Whether this will be enough to best Microsoft and OpenAI is yet to be seen, but it was more than enough to placate investors, with Google’s stock jumping 5 percent after the string of announcements at I/O.

