
Google has just announced the ability to chain actions in Gemini, and it could change the way we use AI for good

- Gemini can now chain actions together to complete complex tasks
- Gemini Live is gaining multimodal abilities on the newest phones
- Gemini will evolve into a fully-powered AI assistant with Project Astra

To coincide with the launch of the Samsung Galaxy S25 range at today’s Galaxy Unpacked event, Google has announced some impressive updates to its Gemini AI platform. Many of the improvements are specific to devices like the new Galaxy S25, but some also work on the older Galaxy S24 and Pixel 9 phones.

The standout feature is Gemini’s new ability to chain actions together. This means you can now ask Gemini to search Google Maps for nearby restaurants, then draft a text in Google Messages to the people you’d like to invite to lunch, all in a single series of commands.

The chaining ability is being added to all devices that run Gemini, “depending on extensions”: an app can only take part in a chain if a developer has written a Gemini extension that links it to the AI. Naturally, all the major Google apps already have Gemini extensions, and extensions are also available for the Samsung Reminder, Samsung Calendar, Samsung Notes, and Samsung Clock apps.

Gemini Live goes multimodal

Google’s Gemini Live, the part of Gemini that lets you hold a natural, human-like conversation with the AI, is also getting some major multimodal upgrades. You can now add images, files, and YouTube videos to the conversation you’re having, so, for example, you could upload a picture of your school project and ask Gemini Live, “Hey, take a look at this and tell me how I could make it better”, and get a response.

The Gemini multimodal improvements are not available across the board, however, and will require a Galaxy S24, S25, or Pixel 9 to work.

Project Astra

Finally, Google has announced that Project Astra capabilities will arrive in the next few months, coming first to Galaxy S25 and Pixel phones. Project Astra is Google’s prototype AI assistant that lets you interact with the world around you, using your phone’s camera to ask questions about what you’re looking at and where you are. You can simply point your phone at something and ask Gemini to tell you about it, or ask when the next stop on your bus route will be.

Project Astra works on mobile phones, but it takes the experience to the next level when combined with Google’s prototype hands-free AI glasses, letting you ask Gemini questions about what you’re looking at without having to interact with a screen at all.

While there’s still no news of a release date for this next generation of Google glasses, they will join Meta’s Ray-Ban smart glasses in the emerging market for AI wearables when they finally arrive.
