As Google had signaled a week beforehand, when it announced its Android-related updates separately, the content at Google I/O 2025 unsurprisingly focused heavily on the company's Gemini artificial intelligence technology, bringing a number of updates.
Project Astra can handle more complex tasks
Among them is Project Astra, Google's prototype of a general-purpose AI assistant, which reads on-screen content and answers related questions through Gemini. It can now handle more complex tasks, such as helping users look up instructions for repairing a bicycle, finding assembly videos on YouTube, or determining which screws to use. It can even call a store to ask about the correct disassembly and assembly procedure.
Google will continue to improve Project Astra, aiming to make it an assistant that users can interact with naturally in daily life.
Gemini AI model update
At Google I/O 2025, Google announced the addition of a Deep Think feature to Gemini 2.5 Pro, enabling the model to reason further about a user's question before producing an answer. The feature is currently in preview; Google will gather additional expert feedback before releasing it publicly, and for now it is available only to trusted testers via the API.
The company also announced an update to Gemini 2.5 Flash, which mainly improves execution speed and responsiveness while reducing token consumption. Other improvements target reasoning, multimodal operation, code editing, and coherence over long texts. The update is expected to become generally available in early June and can currently be previewed through Google AI Studio.
In addition, the voices of Gemini's text-to-speech (TTS) feature have been made more natural through AI. TTS now supports 24 languages, including English, Italian, Korean, Japanese, Vietnamese, Spanish, Russian, French, and Turkish, and can switch quickly from one language to another mid-speech.
Gemini Live screen recognition function now available on more Android and iOS devices
Previously limited to Pixel phones, Gemini Live's ability to analyze both what the phone's camera captures and what is displayed on screen is now rolling out to all compatible Android and iOS devices, with the analysis performed by Gemini AI.
In addition, Gemini Live will be further integrated with services such as Google Maps, Google Calendar, and Keep in the coming weeks. When users ask Gemini Live a question, the system will also consult the user's personal information stored in those services to provide more relevant and practical answers.
Google emphasizes that privacy in these services remains under the user's control, and that all computation is performed on-device to avoid additional privacy concerns.
Google also stated that it will expand Gemini Live's visual-understanding capabilities, expecting it to eventually comprehend the wider "world" and to power future general AI or physical devices such as robots.
Google Search Live service based on Google Lens
After announcing that Google Lens has now been used more than 15 billion times worldwide, Google revealed that the Search Live feature built on the service will capture scenes through the phone's camera and use Gemini to help search for relevant answers.
To use the feature, users must tap the "Live" icon in Google Lens or switch Search to AI Mode.
Google Search's "AI Mode" feature update
Google Search's "AI Mode," previously rolled out to English-language users in the United States, will now run on the Gemini 2.5 Pro model. Compared with the earlier Gemini 2.0 model, this should significantly improve performance and enable more application features.
For example, users can search for a specific piece of clothing through Google Search and digitally "try it on" over a personal photo. AI can even track price changes: when a product's price drops, it can place the order and complete checkout through Google Pay.
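Google has not published how AI Mode's price tracking works internally; as a minimal illustrative sketch, the rule such an agent applies can be modeled as watching a product's price history and triggering a checkout action once the latest price falls to the user's target (all names here are hypothetical):

```python
def should_buy(price_history, target_price):
    """Return True once the latest observed price is at or below the target.

    price_history: list of prices observed over time, oldest first.
    """
    if not price_history:
        return False  # nothing observed yet, so no trigger
    return price_history[-1] <= target_price

# Hypothetical usage: prices observed over several days for one product.
observed = [79.99, 79.99, 74.50, 59.00]
print(should_buy(observed, 60.00))       # True: latest price 59.00 is under target
print(should_buy(observed[:3], 60.00))   # False: 74.50 is still above target
```

In the announced feature, the True branch is where the agent would hand off to Google Pay to complete the checkout.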
Gmail gets smarter
Once the new version of Gmail is integrated with the Gemini app, it will be able to aggregate the user's data stored across different Google services (accessed only with the user's prior consent). This makes Gmail's automatic replies smarter, even allowing Gemini to imitate the user's habitual phrasing when replying to emails.
At the same time, Google will also allow users to manage emails in Gmail through Gemini, for example, filtering out emails that have not been read for three years and deleting them.
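This is not Gmail's API, but the cleanup rule described above can be sketched in plain Python: select messages that are unread and older than three years so they can be deleted (the message fields here are assumptions for illustration):

```python
from datetime import datetime, timedelta

def stale_unread(messages, now, years=3):
    """Return messages that are unread and were received more than `years` years ago."""
    cutoff = now - timedelta(days=365 * years)
    return [m for m in messages if not m["read"] and m["received"] < cutoff]

# Hypothetical inbox snapshot.
now = datetime(2025, 5, 20)
inbox = [
    {"subject": "Newsletter", "read": False, "received": datetime(2021, 1, 5)},
    {"subject": "Invoice",    "read": True,  "received": datetime(2020, 3, 2)},
    {"subject": "Team sync",  "read": False, "received": datetime(2024, 11, 9)},
]
print([m["subject"] for m in stale_unread(inbox, now)])  # ['Newsletter']
```

For comparison, the equivalent manual Gmail search today would be `is:unread older_than:3y`; the announced feature lets Gemini apply such a filter (and the deletion) from a natural-language request.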
The new Gmail features will be available to Google Workspace users and are expected to be available this summer.
Gemini now available in Chrome browser
In addition to bringing Gemini to platforms such as watches and car systems, Google has also brought Gemini to the Chrome browser, where it can answer questions about the content of the web page being viewed.
Currently, Gemini can only answer questions about a single page, but Google revealed that it will be able to analyze content across multiple pages in the future.
Other updates
This update also adds real-time speech interpretation to Google Meet video calls: when participants speak different languages, AI translates the spoken content into the listener's language and renders it in a voice resembling the speaker's. For now it supports only English and Spanish, with more languages to be added later.
"Project Mariner," a Chrome extension connected to Gemini AI, lets users issue spoken commands to analyze the content of the page being browsed and carry out agentic operations on their behalf. It is now open to more users, and the number of tasks it can run simultaneously has been increased to 10, helping users complete more operations.
Meanwhile, "Jules," an AI coding assistant that can integrate with GitHub workflows, was also upgraded at Google I/O 2025, making it easier for users to build a variety of coding projects.