Since OpenAI launched ChatGPT in late 2022, the tech world has been racing to improve artificial intelligence, with companies pouring effort into AI in hopes of matching OpenAI’s success. Yet a standout use for the technology hasn’t clearly emerged. In 2024, we saw companies experiment by adding AI to almost everything, hoping to discover what truly works.
In 2025, things are set to change. AI will become a seamless part of everyday tech, shifting how we use it. Right now, using AI often requires extra steps, like opening the ChatGPT website or using tools like the Gemini assistant. But in 2025, AI will be built directly into operating systems and apps, ready to perform tasks automatically without you having to ask.
We’ve already seen hints of this future. For example, Google is testing Project Astra with select users, Android XR is launching as the first OS powered by Gemini, and Apple Intelligence now includes ChatGPT integration.
Tech leaders have shared their plans for AI in 2025, so we don’t need to guess what’s coming. However, important questions remain: Which companies will keep their promises? How will AI change how we use phones, tablets, and wearables? And what does this mean for privacy?
Get ready for 2025: the year AI becomes smarter, more integrated, and a bigger part of our lives.
AI Will Become the Core of Mobile Operating Systems
Until now, artificial intelligence has mainly powered individual apps and features. Some have been big hits — like ChatGPT, which had 200 million weekly users by August 2024. Others, like Gemini, have seen more modest success, with 780,000 downloads in September 2024 compared to ChatGPT’s 4.2 million downloads that same month.
Attempts to build hardware devices centered on AI, however, have largely flopped; this year’s examples include the Rabbit R1 and the Humane AI Pin. The Rabbit R1 was a small AI device with a “Large Action Model” designed to perform tasks for users: it could order food through DoorDash or play music via Spotify by remotely operating those apps on your behalf.
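To make that concrete, here is a minimal sketch of the pattern a “Large Action Model” embodies: free-form speech is parsed into a structured intent, which is then dispatched to an app integration that acts on the user’s behalf. The `Action` type and the stub parser and dispatcher are invented for illustration and say nothing about Rabbit’s actual implementation.

```python
# Hypothetical sketch of the "action model" pattern: a natural-language
# request is parsed into a structured intent, then dispatched to an app
# integration that performs the task on the user's behalf.
# None of this reflects Rabbit's real implementation.

from dataclasses import dataclass

@dataclass
class Action:
    app: str        # which service to drive, e.g. "doordash"
    operation: str  # what to do there, e.g. "order"
    args: dict      # operation parameters

def parse_request(text: str) -> Action:
    """Stand-in for the model: map free-form text to a structured action."""
    if "order" in text.lower():
        return Action("doordash", "order", {"item": "pad thai"})
    return Action("spotify", "play", {"query": text})

def dispatch(action: Action) -> str:
    """Stand-in for the remote app-automation layer."""
    return f"Executed {action.operation} on {action.app} with {action.args}"

print(dispatch(parse_request("Order me some pad thai")))
```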
These devices failed because people already own powerful smartphones capable of handling AI tasks more effectively. Specialized AI gadgets couldn’t compete with mainstream devices like phones and tablets, which are familiar, versatile, and widely adopted.
Looking ahead to 2025, the capabilities of AI-driven hardware like the Rabbit R1 will merge into everyday devices. Smartphones, tablets, and wearables will integrate AI more deeply, taking over tasks to make them as effortless as possible. Your current devices will soon deliver what niche AI gadgets couldn’t: a smarter, more seamless experience right in your pocket.
Agentic AI Models Aim to Succeed Where Rabbit Failed
Rabbit’s “Large Action Model” (LAM), designed to take actions on behalf of users, never caught on. But the underlying concept of agentic AI, smarter and more proactive models that act for you, is exactly what companies like Google aim to perfect.
According to Google CEO Sundar Pichai, this new wave of AI is about creating systems that understand the world better, plan ahead, and take actions under user supervision. “Over the last year, we have been investing in developing more agentic models,” Pichai said in a blog post introducing Gemini 2.0. These models leverage advances in multimodality, such as handling images, audio, and text simultaneously, as well as using tools natively. The goal? A universal assistant that combines understanding, context, and action seamlessly.
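Stripped to its essentials, the agentic loop Pichai describes looks something like the sketch below: the model proposes a tool call, the action runs only after user approval, and the observation feeds back into the next step. The function names and the trivial stand-in “model” are placeholders, not Gemini’s real interface.

```python
# Minimal sketch of an agentic tool-use loop: the model plans a step,
# a supervised executor runs it, and the observation is fed back until
# the model declares it is done. All names here are placeholders; this
# shows the general pattern, not Gemini's actual API.

def model_step(goal: str, history: list) -> dict:
    """Placeholder for a model call that returns the next proposed action."""
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "done", "args": {"answer": history[-1]}}

TOOLS = {
    "search": lambda args: f"top result for {args['query']!r}",
}

def run_agent(goal: str, approve=input) -> str:
    history = []
    while True:
        step = model_step(goal, history)
        if step["tool"] == "done":
            return step["args"]["answer"]
        # "User supervision": every action is confirmed before it runs.
        if approve(f"Run {step['tool']} with {step['args']}? [y/n] ") != "y":
            return "stopped by user"
        history.append(TOOLS[step["tool"]](step["args"]))

print(run_agent("best ramen near me", approve=lambda _: "y"))
```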
What does this vision look like in practice? It varies by company. Google’s Project Astra, for example, is a multimodal AI assistant I experienced at Google I/O 2024. It can use your environment and external tools to process complex queries. Imagine asking Astra a question verbally while it uses a camera feed or search engine to gather relevant context. Its response could include written text, a generated image, spoken words, or all three combined, offering a richer, more dynamic interaction.
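In code terms, a multimodal exchange is simply a request and a response whose parts span several types: an audio transcript plus a camera frame in, and text, imagery, or speech out. The sketch below shows only that shape; the `Query` and `Reply` types and the stub responder are invented for illustration, not Astra’s actual API.

```python
# Sketch of the shape of a multimodal exchange: one query can carry a
# voice transcript plus a camera frame, and the reply can mix modalities.
# The types and the stub responder are illustrative, not Astra's API.

from dataclasses import dataclass

@dataclass
class Query:
    spoken_text: str                   # transcribed voice input
    camera_frame: bytes | None = None  # raw image from the live feed

@dataclass
class Reply:
    text: str
    image: bytes | None = None         # e.g. a generated diagram
    speech: bytes | None = None        # synthesized audio of the answer

def assistant_respond(q: Query) -> Reply:
    """Stub: a real assistant would ground the answer in the frame."""
    seen = "your surroundings" if q.camera_frame else "no visual context"
    return Reply(text=f"Based on {seen}: here is what I found about {q.spoken_text!r}.")

print(assistant_respond(Query("what building is this?", camera_frame=b"...")).text)
```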
This new generation of AI promises to redefine how we engage with technology, moving beyond simple assistants to intelligent, action-oriented agents capable of truly supporting users in their daily lives.
AI Expands Across Devices and Platforms in 2025
Project Astra, currently in testing through Google’s “trusted tester program,” is already making waves. Google has demonstrated its capabilities on phones, glasses, and headsets, hinting at a broader rollout soon. The company also unveiled Android XR, a new operating system designed for headsets and wearables, with Gemini AI at its core. It’s likely we’ll see Project Astra on hardware in 2025, potentially including Pixel phones and an unannounced Samsung headset.
Google isn’t stopping there. The company is also working on Project Mariner, a research prototype that browses the web in Chrome on your behalf, showcasing its efforts to make AI more hands-on.
Other tech giants are also pushing the boundaries of AI integration. Meta has introduced multimodal AI support on its Ray-Ban Meta glasses, while Apple’s Visual Intelligence is now available in iOS 18.2, enhancing how users interact with their devices. OpenAI is in the race too with GPT-4o, the multimodal model behind ChatGPT’s voice and vision features, similar in spirit to Google’s Project Astra.
As these AI systems evolve, they promise to transform how we interact with technology, offering smarter, more immersive experiences across phones, wearables, and even augmented reality devices.
2025: The Start of the Agent-Based AI Era
With iOS 18.2, Apple has made ChatGPT accessible system-wide, allowing users to invoke it anywhere through Siri, and has built Apple Intelligence’s Writing Tools directly into the keyboard. Android users, for their part, can set Gemini as their default assistant. Samsung plans to catch up with One UI 7, embedding AI features throughout its ecosystem.
In 2025, AI will evolve in two key ways:
- Agent-Based AI: Advanced services like Project Astra, GPT-4o, and Visual Intelligence will take over tasks on your devices using multimodal processing. These systems will interact with the world around you, combining text, images, and actions to deliver seamless assistance.
- Integrated AI Features: Smaller, everyday AI tools will become part of operating systems, removing the need to open separate apps. Whether it’s composing text, setting reminders, or answering questions, services like Gemini and ChatGPT will be available everywhere in your device’s interface.
As Demis Hassabis, CEO of Google DeepMind, put it in an interview with The Verge, “We really see 2025 as the true start of the agent-based era.” This next phase of AI will make devices smarter, more proactive, and deeply integrated into our lives.
AI Privacy and Safeguards Will Be a Key Focus in 2025
As AI becomes more deeply integrated into our devices, privacy and security will be under the spotlight in 2025. However, navigating what’s public and private in the AI era may become increasingly complicated.
Apple Intelligence is often touted as the gold standard for AI privacy. The company uses custom Private Cloud Compute servers running a hardened operating system and even offers a million-dollar reward for anyone who can breach its defenses. Despite these efforts, Apple’s privacy model isn’t entirely transparent.
Apple Intelligence handles requests in three ways:
- On-device Processing: Tasks handled locally by the Neural Engine in Apple silicon, so data never leaves the device.
- Private Cloud Compute: Tasks outsourced to Apple’s secure servers.
- Hybrid Tasks: Requests that could go either way. You won’t always know whether a task is processed on-device or in the cloud; Apple’s software decides, and users must trust that choice (a sketch of this routing pattern follows below).
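To illustrate that third point, here is a hypothetical sketch of hybrid routing: the system, not the user, decides per request whether to stay on the local Neural Engine or escalate to Private Cloud Compute. The heuristic and function name are invented; Apple does not publish its actual routing logic.

```python
# Hypothetical illustration of hybrid routing: the system, not the user,
# decides whether a request runs on the local Neural Engine or is sent
# to Private Cloud Compute. The heuristic below is invented; Apple does
# not document its real decision logic.

def route_request(prompt: str, on_device_budget_tokens: int = 512) -> str:
    estimated_cost = len(prompt.split()) * 4  # crude token estimate
    if estimated_cost <= on_device_budget_tokens:
        return "on-device Neural Engine"  # never leaves the phone
    return "Private Cloud Compute"        # encrypted, but off-device

# The user just sees an answer; the routing decision stays invisible.
print(route_request("summarize this short note"))
print(route_request("word " * 1000))
```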
Additionally, Apple treats ChatGPT integration as separate from Apple Intelligence. While Apple provides some privacy protections, user requests to ChatGPT are still shared with OpenAI. If you link your ChatGPT account to iOS 18 for access to ChatGPT Plus or Pro, you’re agreeing to OpenAI’s privacy policies, not Apple’s.
In 2025, AI features will be so embedded in our devices that understanding all their privacy implications will feel overwhelming. Much like blindly accepting “terms and conditions” when setting up a new device, users may have to trust companies to handle their data responsibly. Unfortunately, the only way to know if companies fall short will likely be through failures or data breaches, forcing the industry to reactively address gaps.
As we enter this new era, it’s crucial to stay informed about how AI tools handle your data and to advocate for stronger transparency and accountability from tech companies.