News India Live, Digital Desk: The first day of Google I/O 2025 concluded with a series of announcements that define how artificial intelligence is being integrated into everyday products. From enhancing core services such as Search and Chrome to building tools for creators and developers, Google made it clear that Gemini is at the center of its AI strategy.
For users in India and around the world, the event marked a shift in how AI is applied: not only as a backend capability, but as a user-facing assistant across apps, browsers, devices and more. Google’s latest model family, Gemini, powered almost every demo and feature update this year.
AI takes over Google Search
Search received Google’s most high-profile update at I/O 2025. AI Overviews, launched last year, now reach 1.5 billion users every month in more than 200 countries. Google claimed that AI Overviews have driven a more than 10 percent increase in engagement for the queries that show them, which it reads as a sign of rising user satisfaction.
A new feature called AI Mode was also introduced. Described as a complete reimagining of search, it supports more complex queries and deeper contextual follow-ups. The new interface is powered by a custom version of Gemini 2.5 and is now rolling out in the U.S.
Google previewed Deep Search, a new Labs feature that can run hundreds of searches simultaneously to deliver research-grade responses. Another upcoming Labs feature, Search Live, combines camera input with interactive responses, letting users get real-time insights about the environment around them.
Agentic capabilities are also being added. Search will be able to handle tasks such as booking tickets or making restaurant reservations with partners like StubHub, Resy and Ticketmaster.
For e-commerce, Google is introducing an AI shopping assistant that draws on the Shopping Graph’s 50 billion product listings. Users can virtually try on clothing using a single photo of themselves and complete purchases through Google Pay with an agentic checkout feature.
Gemini app emphasizes deeper personalization
The Gemini app received major updates, and Google described it as its most powerful, proactive assistant yet. The rollout has begun on Android, with iOS users expected to get it soon.
Gemini Live, an audio-first interface built on Project Astra technology, is now free. Google says users spend five times longer in voice conversations than in text chats. The new generative media models, Imagen 4 and Veo 3 for image and video respectively, are embedded in the app.
Users can upload PDFs and images to generate research summaries. Canvas, an integrated tool, can turn uploaded material into infographics, quizzes and podcasts. Integration with Google Drive and Gmail is also on the roadmap.
Gemini will be integrated into Chrome starting tomorrow, beginning with page summaries and contextual search. It will eventually gain the ability to act across tabs and websites.
A $250 paid subscription
Two new subscriptions were launched: Google AI Pro at $19.99/month and AI Ultra at $249.99/month. Pro users get tools such as NotebookLM and 2TB of storage, while Ultra subscribers get access to early-stage tools including Veo 3, Deep Think and the agentic features of Project Mariner.
Students in the U.S., Brazil, Indonesia, Japan and the U.K. can claim 15 months of free AI Pro membership by signing up before June 30. Students in India still have to pay the full price if they want to use Google’s AI.
Developer tools and model upgrades
The Gemini 2.5 Flash and Pro models now offer better reasoning and coding capabilities. Google AI Studio has been upgraded with support for Imagen and Veo and offers fast web app generation.
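For context, a minimal sketch of how a developer might call the upgraded Gemini 2.5 Flash model through the Gemini API, assuming the google-genai Python SDK; the prompt and the API key placeholder are illustrative assumptions, not part of Google’s announcement.

```python
# Minimal sketch (not from the article): calling Gemini 2.5 Flash via the
# Gemini API using the google-genai Python SDK.
from google import genai

# "YOUR_API_KEY" is a hypothetical placeholder for a real API key.
client = genai.Client(api_key="YOUR_API_KEY")

# Send a single text prompt to the model and print the generated reply.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # model name as announced at I/O 2025
    contents="Summarize the key announcements from Google I/O 2025.",
)
print(response.text)
```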
Gemma 3n is a lightweight multimodal model optimized for phones and tablets. Gemini Diffusion promises text generation around five times faster. Lyria brings real-time live music generation.
Firebase AI Logic added support for a Unity SDK and a client-side monitoring dashboard. Jules, a GitHub-integrated coding agent, is in public beta. Stitch can now generate UI components and code from natural-language or visual prompts.
New XR glasses and AI-first immersive video
Google unveiled its XR platform built for Gemini. Android XR will run on devices such as Samsung’s upcoming Project Moohan headset. Gemini will provide contextual assistance through voice-guided navigation, object identification and awareness of the user’s environment.
New Android XR glasses with cameras and in-lens displays were shown. The glasses pair with a phone and enable hands-free messaging, translation and access to visual information. Partners include Gentle Monster and Warby Parker.
Google Beam, the successor to Project Starline, turns 2D video into real-time 3D communication using AI and an array of cameras. Beam hardware will launch with HP later this year, aimed at enterprise users.
Generative video and SynthID
Imagen 4 and Veo 3 are now publicly available, with Veo adding native support for dialogue and background sounds. Flow, a generative filmmaking tool powered by Gemini, is launching for Pro and Ultra subscribers in the U.S.
To improve traceability, Google is expanding its SynthID watermarking and detection tools for AI-generated content. The detector is currently being tested by journalists and researchers and will be opened up for wider use.
NotebookLM gets video overviews
NotebookLM received updates to Audio Overviews, and Video Overviews are launching soon. Gmail will get smart reply upgrades that reflect the user’s tone and history, and Google Meet is adding real-time speech translation.
Google Vids, an AI-powered video creator, is also available to Pro and Ultra plan users. Across devices, AI is being used to reduce friction in productivity and learning workflows.
Google reported that token usage across its AI systems has grown about 50-fold in a year, from 9.7 trillion to 480 trillion tokens per month. The Gemini app now has more than 400 million monthly active users, and more than 7 million developers are building with the Gemini API.