Google has enhanced its Gemini AI with new features for research and personalized assistance, making it more accessible to users. Notably, many of these features can now be tried at no cost, part of an effort to bring AI into everyday use for more people. A significant update is the rollout of the 2.0 Flash Thinking Experimental model, which now accepts file uploads and gives Gemini Advanced users a one-million-token context window.
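To give a sense of what a one-million-token window with file uploads means in practice, here is a minimal sketch of how a developer might send a long document to a Flash Thinking model via the google-generativeai Python SDK. The model ID, file name, and token estimate are illustrative assumptions; the consumer Gemini app exposes the same capability through its upload button rather than code.

```python
# Minimal sketch: passing a large file to a Gemini "Thinking" model.
# Assumes the google-generativeai SDK; the exact experimental model ID
# and context limits are assumptions and may differ from the live API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

# Upload a long document (e.g., a book-length PDF). A one-million-token
# window can hold roughly 700,000 words of input in a single request.
doc = genai.upload_file("annual_report.pdf")  # hypothetical file

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed ID

response = model.generate_content(
    [doc, "Summarize the key findings and list any open questions."]
)
print(response.text)
```

The point of the sketch is simply that the entire document fits in one prompt, so the model can reason over all of it at once instead of relying on chunking or retrieval.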
These enhancements let the AI handle more complex requests by breaking them down into steps, improving its reasoning and the accuracy of its responses. The same model now underpins Deep Research, a feature that quickly gathers and synthesizes information from the web. Deep Research is available to all users rather than only subscribers, though free users get a limited number of uses each month. It has also been improved to deliver detailed, multi-page reports and lets users watch Gemini's approach to the research in real time, with the aim of improving the quality of the information it returns.
Google is also rolling out an experimental personalization feature that connects Gemini with Google apps and services, starting with Search. This lets Gemini offer tailored recommendations based on a user's past activity, such as restaurant or travel suggestions, and users can manage their Search history at any time. In addition, Gemini's integration with Google services is broadening: it can now work with Calendar, Notes, Tasks, and Photos, enabling multi-step tasks such as building a shopping list from a recipe or planning a trip from images.
Full integration with Google Photos is coming soon, allowing users to ask questions about the details in their photos. Lastly, the new Gems feature lets users create personalized AI assistants at no cost, with the option to upload files for added functionality. Together, these updates could significantly simplify everyday interactions with AI and underscore Google's push to make the technology part of daily life.