Here are all of Google's AI features announced during the Pixel 9 series launch

So many features were introduced during Made by Google 2024. How will you use AI? #Google #Geminiai #MadeByGoogle

Note: This article was first published on 14 August 2024.

The Pixel devices, supercharged by AI. Photo: Google

Google's hardware product announcement, Made by Google 2024, saw an hour-and-a-half keynote that introduced the Pixel 9 series alongside the Pixel 9 Pro Fold, the Pixel Watch 3, and the latest Pixel Buds Pro 2.

Google's proprietary AI, Gemini, is at the centre of all these devices. In case you missed it, here are all the latest AI features introduced during the keynote. We've summarised them as a guide to using AI, so you know what's where and whether you need these features.

What is Gemini?

Gemini, the artificial intelligence (AI) formerly known as Bard (not the D&D character class), is Google’s consumer-facing advanced AI model. Like the horoscope's unscientific stereotype, the name represents the AI model's propensity to evoke intellectual curiosity, versatility, and adaptability (at least, that's what we think Google was trying to do here).

Unlike most AI models, Gemini can simultaneously understand and generate various forms of data, including text, code, documents, audio, images, and even videos. In AI parlance, Gemini is considered multimodal.

Gemini was designed to be versatile. It can be used to mitigate tedious or troublesome tasks in various aspects of daily life. Unfortunately, being capable of every conceivable chore would mean having a massive model that most devices cannot handle.

This is why Gemini has various models, each designed to perform optimally for specific outcomes. According to its Google AI for Developers website, you have:

  • Gemini Flash, a cost-effective model for narrow, high-frequency tasks (chores)
  • Gemini Pro, a multimodal model for reasoning tasks that boost your performance and productivity (creation)
  • Gemini Ultra, the largest and most capable offering out of all Gemini options (catch-all option)
  • Gemini Nano, for on-device AI tasks

The Pixel 9 series uses Gemini Nano to process prompts and handle all requests on-device. This AI model shares its name with Google's voice assistant, which we cover below.

Gemini AI Assistant

Just another voice assistant? Photo: Google

Integrated into the Pixel 9 series is the Gemini AI Assistant.

Users can access Gemini by simply pressing the power button and using the prompt window to interact with the AI via text, voice, or even images. Gemini understands complex queries, follows conversational threads, and can respond to on-screen content on their Pixel devices.

In day-to-day use, this means users can effortlessly switch from scheduling appointments and recalling bits and pieces of information to asking questions about a YouTube video they're watching, all without needing to switch between apps.

Now you understand why Google made such a big deal of highlighting its AI efforts on its phones. Below, we look at some other Pixel features that rely on AI but are located elsewhere on the phones.

Enhanced Photo and Video Capabilities

No more holding the mirror to take a photo of the cameraman now. Image: Google

Google has significantly upgraded its photo and video features with AI, making the Pixel 9 series a powerful tool for photography enthusiasts.

The "Add Me" feature uses augmented reality to include the photographer in group shots, solving the problem of the picture-taker being left out.

Meanwhile, Magic Editor, while not new, has been improved to suggest optimal crops through Auto Frame and to expand images using generative AI.

These features use both generative AI and machine learning. They allow users to capture perfect group photos without a tripod, or enhance travel photos with AI-suggested edits, all without requiring advanced editing skills.

Pixel Studio

Pixel Studio, as demonstrated by Alex Schiffhaeur. Screenshot: Google

For those who enjoy creating visual content, the new Pixel Studio app brings AI-powered image generation and editing directly to Pixel 9 phones.

Like other image-generating tools, Pixel Studio creates images based on text prompts, allowing easy editing and sharing.

Unsurprisingly, not all text prompts will lead to image generation, likely due to safety or censorship.

Either way, Pixel Studio helps alleviate low-level design needs, such as customising greeting cards for personal use or creating wacky social media posts.

Can’t remember or can't find something? Gemini AI can

Perhaps you can also search for other types of "receipts". Screenshot: Google

Google Keep and the new Pixel Screenshots feature provide smarter note-taking and information management.

The AI helps with creating lists, and it can also organise and analyse past screenshots. This makes it easy to summon specific information, like a screenshot of your grocery list, or to recall the dates of important events.

The new Call Notes feature automates call summaries, generating a summary of each phone conversation and identifying key information. This allows users to keep track of details, like appointments or to-dos, without manual note-taking, and can be especially useful for those who spend plenty of time on numerous calls throughout the day.

Lastly, the upgraded Circle to Search makes information sharing and search more social. Users can circle on-screen content and instantly share or search for it, enabling them to quickly share details of a restaurant, for example, or look up unfamiliar places while watching a YouTube video.

A better weather app?

Even the Weather App is getting AI features. Screenshot: Google

You heard that right. Even the weather cannot be spared from AI.

Google's new weather app offers more personalised and accurate forecasts. It provides an AI-generated custom report and precise timing for weather events, going beyond your traditional weather app forecasts.

This enhanced functionality allows users to plan outdoor activities, potentially saving time and effort in the event of sudden inclement weather.

Through Gemini, you can even receive clothing recommendations based on the weather forecasts.

Pixel Watch 3 features

Undoubtedly a life-saving feature. Screenshot: Google

Google’s latest Pixel Watch 3 incorporates Gemini AI to enhance health and fitness tracking.

It offers real-time automatic sleep detection, advanced running metrics, and AI-powered run recommendations. Users can also get personalised sleep insights, improve their running form with AI analysis, and receive tailored workout suggestions based on their fitness goals and previous activities. 

Sounds normal so far… right? The next feature is by far Google's most ambitious yet: the Loss of Pulse detection feature.

According to Google, a sudden loss of pulse is a critical medical emergency that often occurs when a person is alone. The Pixel Watch 3 introduces a feature to address this issue and potentially save lives.

By combining data from the sensors onboard the watch (red and infrared sensors for blood oxygen (SpO2) monitoring, multipurpose electrical sensors used for ECG monitoring, electrical sensors that measure skin conductance for body response tracking, a skin temperature sensor, and more), the watch uses a pre-trained AI model to detect a loss of pulse and automatically call emergency services.

According to Google, this life-saving feature has been rigorously developed and tested with medical experts, and its reliability is bolstered by the combination of sensors required to make it possible. While it is not foolproof, it is a step forward in providing timely assistance to those in need.

Pixel Buds Pro 2 enhancements

Hands-free access to Gemini Assistant. Photo: Google

Embedding generative AI into audio products is relatively new. We first saw it with Nothing and its integration of ChatGPT, although that relies on the Nothing phone's proprietary app. Samsung also has Live Translate in its Galaxy Buds 3 series, but that relies on the phone's Galaxy AI, not the earbuds themselves.

The latest Pixel Buds Pro 2 leverages Gemini AI for enhanced audio experiences and capabilities. The earbuds feature enhanced noise cancellation using AI and provide hands-free access to the Gemini assistant.

From our Pixel Buds Pro 2 news coverage, there are two distinct modes: Gemini, which can give you song recommendations and walking directions while the phone is locked, and Gemini Live, which makes the conversational AI available when the phone is unlocked. You need a Pixel phone to use Gemini, just like the other competing integrations above.

In daily use, this translates to clearer audio in noisy environments and the ability to access Gemini’s capabilities without taking out your phone. When the Pixel phone is unlocked, you can set reminders like “remind me to buy coffee” or get directions through Gemini Live to help you stay organised throughout the day.

Furthermore, you could have a real-time conversation using Gemini Live where you can ask questions or use it to keep yourself entertained. 

To help keep things fresh, Gemini Live is getting 10 new unique voices.

With so much AI, is it safely stored and used?

Can you trust it? Photo: Google

Google said it has placed a strong emphasis on privacy and security in the implementation of Gemini AI across Android devices.

The company assures users that their personal information remains protected while using AI features. For tasks that require cloud AI processing, Gemini securely accesses and utilises personal data from Google services to provide tailored assistance only with user permission.

Behind it all is the integration of Gemini Nano, an on-device multimodal AI model incorporated into Android. This allows more sensitive tasks, such as Call Notes' summaries of phone calls, to be done entirely on-device, with no data needing to leave the phone.

Even so, whether the data is processed in the cloud or on the device, Google stresses that it remains within Google's secure end-to-end architecture. This walled-garden approach aims to keep user information in-house, unlike other implementations where data is handed off to third-party AI providers.

Do I need all these features?

These AI enhancements across Google's Pixel ecosystem aim to make daily tasks less of a chore and more user-friendly.

With AI being all the rage, plenty of competitors exist. Apple’s Apple Intelligence and Samsung's Galaxy AI are out there, too. However, these AI tools do not answer the question: Do you need them?

Maybe not at this point, much like how it took time for people to get used to tapping a phone to pay for public train and bus rides. At some point, using AI to optimise your day could become as natural as breathing air and drinking water, so it's better to practise and become literate now, while AI is still in its early days.

Do you have a favourite AI feature announced by Google? Let us know. In the meantime, be sure to stay tuned for our review of the Pixel 9 series, Pixel 9 Pro Fold, Pixel Watch 3, and, of course, the Pixel Buds Pro 2.

Our articles may contain affiliate links. If you buy through these links, we may earn a small commission.

Share this article