Artificial Intelligence (AI) has mostly been reactive up to this point: a user initiates a conversation or asks the AI to do something, and the AI responds to that input.
AI agents will change that, but they too have limited knowledge of the user. Google's newly announced Personal Intelligence goes a step further; it sounds like Microsoft's Recall feature, but on steroids.
It marks a shift towards a deeply integrated AI that reaches into Android and ChromeOS. Unlike the Gemini app, which mostly provides information or creates something for the user, Personal Intelligence is designed to know the user fully in order to provide a deeply personal experience.
- When does it start? It is already rolling out to Google AI subscribers in the United States.
- Opt-in or Opt-out? Personal Intelligence is opt-in.
- Where can you use it? Across Web, Android, and iOS.
- How does it work? The AI builds a local database of the user's life based on emails, photos, calendar entries, messages, and application usage, pulling data from as many sources as possible to learn as much as it can about that particular user.
Google reveals that Personal Intelligence runs on Gemini Nano v3, which is heavily optimized for the neural processing units of Pixel 9 and Pixel 10 series devices. This allows it to process sensitive data on the device, without that data leaving it at any stage.
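To make the idea of such a local database a little more concrete, here is a minimal, purely illustrative sketch in Python. None of the names (PersonalIndex, ContextItem, add_item, search) come from Google, and nothing here reflects how Gemini Nano actually works; the only point it demonstrates is that personal signals can live in a single local file that never leaves the device, with a lookup on top.

```python
# Conceptual sketch only: a toy "on-device" personal context store.
# All class and method names are hypothetical, not part of any Google API.
import sqlite3
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # e.g. "email", "calendar", "messages", "app_usage"
    text: str     # extracted content or metadata

class PersonalIndex:
    def __init__(self, path: str = "personal_context.db"):
        # Local database file; nothing is sent over the network.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS items (source TEXT, text TEXT)"
        )

    def add_item(self, item: ContextItem) -> None:
        self.conn.execute(
            "INSERT INTO items VALUES (?, ?)", (item.source, item.text)
        )
        self.conn.commit()

    def search(self, query: str) -> list[tuple[str, str]]:
        # Naive keyword match; a real system would rank entries with an
        # on-device model instead of a LIKE query.
        return self.conn.execute(
            "SELECT source, text FROM items WHERE text LIKE ?",
            (f"%{query}%",),
        ).fetchall()

# Usage: ingest a few local "signals", then ask a vague question.
index = PersonalIndex()
index.add_item(ContextItem("calendar", "Dentist appointment Friday 14:00"))
index.add_item(ContextItem("email", "Your hotel booking in Lisbon is confirmed"))
print(index.search("Lisbon"))
```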
Unlike the Gemini app, which is for the most part just a chat window, Personal Intelligence sees what the user sees on the screen. Furthermore, the AI may act on the user's behalf in other apps.
Potential benefits for users
- The AI you interact with knows as much about your life as you do, at least when it comes to online activity. You can ask it vague questions using natural language.
- Personal Intelligence can act on information: it does not just retrieve it, it can also use it to carry out actions for you.
- A proactive assistant that anticipates needs. It may recommend leaving early if it notices that traffic is heavier than usual, since it knows where you need to be based on your calendar entries (see the sketch below this list).
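The following Python snippet only illustrates the kind of check such a proactive feature might run. The function name and the traffic numbers are invented for this example; a real assistant would pull live traffic estimates and calendar data on the device instead.

```python
# Illustrative only: how a proactive "leave early" suggestion could be derived.
from datetime import datetime, timedelta

def departure_advice(event_start: datetime, usual_minutes: int,
                     current_minutes: int, buffer_minutes: int = 10) -> str:
    """Suggest when to leave, given usual vs. current travel time."""
    leave_at = event_start - timedelta(minutes=current_minutes + buffer_minutes)
    if current_minutes > usual_minutes:
        extra = current_minutes - usual_minutes
        return (f"Traffic is {extra} min heavier than usual. "
                f"Leave by {leave_at:%H:%M} to arrive on time.")
    return f"Traffic is normal. Leaving by {leave_at:%H:%M} is fine."

# Example: a 9:00 meeting, 25 minutes away on a normal day, 40 minutes today.
meeting = datetime(2025, 11, 20, 9, 0)
print(departure_advice(meeting, usual_minutes=25, current_minutes=40))
```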
Potential points of criticism
- While Google emphasizes that sensitive data remains on the device, giving an AI full access to every pixel on the screen and your entire digital life is problematic, especially considering that Google is an advertising company first and foremost.
- It remains to be seen how an all-seeing AI impacts a device’s battery life.
- Even with access to personal data, AI may hallucinate, which means that it may return information that does not exist or perform the wrong actions on the user's behalf.
- Personal Intelligence is also locked into Google’s ecosystem of apps and services for the most part, as functionality with third-party apps is limited at this point.
Closing Words
With Personal Intelligence, Google is offering a proposition that is as interesting as it is frightening: a life that is increasingly managed and controlled by AI, in exchange for near-total access to the user's life.
For now, users hold the key. They do not have to enable Personal Intelligence, and if they do not, nothing changes. If they do enable it, however, they effectively allow Google's AI to access their life, connect the dots, and potentially know more about them than their closest friends or family members do.
I want to hear from you: Is on-device processing enough to earn your trust, or does the idea of Google ‘reading’ your screen still feel like a step too far? Let’s discuss in the comments.

Trust? In Google? HAHAHAHA!!!! I have no good use for AI, regardless of use case. It may be fun to use it for meme creation or some such, but I’ll leave that to others. There is enough AI slop out there.
One more thing to disable. *sigh*
Keep it opt-in. All those benefits look to me as red flags.