The tech world just witnessed something remarkable at Google’s “Made by Google 2025” event in New York. While most people focused on the shiny new Pixel 10 lineup, the real story emerged from an exclusive Bloomberg interview with Rick Osterloh, Google’s Senior Vice President of Platforms and Devices. His candid revelations about Google’s future plans paint a picture of a company that’s thinking far beyond smartphones—into a world where AI-powered glasses and ambient computing devices will fundamentally change how we interact with technology.
The Pixel 10: More Than Meets the Eye
At first glance, you’d be forgiven for thinking Google phoned it in with the Pixel 10 series. Aside from new colors, the Pixel 10 Pro could easily be mistaken for last year’s Pixel 9. But according to Osterloh, that’s exactly the point: Google’s argument is that it’s what’s inside that counts.
The new Pixel 10 lineup, starting at $800, represents Google’s most aggressive push into AI-first computing. While the external design remains largely unchanged, the internal transformation is revolutionary. The devices are powered by Google’s latest Tensor chips, which have been specifically engineered not just for smartphones, but as stepping stones toward a much more ambitious vision.
During the Bloomberg interview, Osterloh made it clear that Google isn’t just building phones anymore—they’re building the foundation for an AI-powered ecosystem that will eventually extend far beyond the rectangular screens we carry in our pockets.
The AI Revolution Hiding in Plain Sight
What makes the Pixel 10 series genuinely groundbreaking isn’t the camera bump or the color options. It’s the sophisticated AI capabilities running underneath the surface. As Osterloh framed it in the interview, Google is already planning for an AI-powered world beyond the smartphone. Every interaction, every photo, every voice command is processed through Google’s Gemini AI, which learns and adapts to individual usage patterns in ways that previous generations of devices simply couldn’t match.
The real magic happens in the background. The new Tensor chips aren’t just faster—they’re smarter. They’re designed to handle complex AI workloads locally, reducing dependence on cloud processing and making the devices more responsive and privacy-focused. This local AI processing capability is crucial for Google’s long-term vision, particularly when it comes to augmented reality and wearable devices that need to process information instantly.
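To make that concrete, here is a minimal sketch of what local inference looks like on Android, using the TensorFlow Lite Interpreter with the NNAPI delegate so the system can route the work to whatever accelerator the device exposes (on a Pixel, that includes the Tensor chip’s TPU). The model file and tensor shapes are placeholder assumptions for illustration; this shows the general on-device pattern, not Google’s actual Gemini pipeline.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Run a model entirely on-device: no network call, no cloud round trip.
// The NNAPI delegate lets Android hand the workload to an available
// accelerator (GPU, DSP, or an NPU/TPU) instead of the CPU.
fun runLocalInference(modelFile: File, input: FloatArray): FloatArray {
    val delegate = NnApiDelegate()
    val interpreter = Interpreter(modelFile, Interpreter.Options().addDelegate(delegate))
    try {
        // Pack the input into a direct buffer in native byte order,
        // the layout TFLite expects for raw tensor data.
        val inputBuffer = ByteBuffer
            .allocateDirect(input.size * Float.SIZE_BYTES)
            .order(ByteOrder.nativeOrder())
        input.forEach { inputBuffer.putFloat(it) }
        inputBuffer.rewind()

        // Size the output array from the model's own metadata
        // (assumes a [1, N] output shape).
        val outputShape = interpreter.getOutputTensor(0).shape()
        val output = Array(1) { FloatArray(outputShape[1]) }

        interpreter.run(inputBuffer, output)
        return output[0]
    } finally {
        interpreter.close()
        delegate.close()
    }
}
```

The point of the pattern is latency and privacy at once: the data never leaves the device, and there is no server round trip to wait on.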
Google has also introduced the Pixel 10 Pro XL and a new foldable variant, the Pixel 10 Pro Fold, which represents its first truly dustproof foldable device. But these form factors are just experiments in preparation for something much bigger.
Beyond the Smartphone: Google’s Glass Revolution 2.0
Here’s where things get really interesting. While Osterloh was careful not to make any concrete announcements, his comments about smart glasses and augmented reality reveal Google’s true intentions. The company that famously stumbled with Google Glass nearly a decade ago hasn’t given up on the vision—they’ve been quietly building the technology stack needed to make it work properly this time.
Asked about Google Tensor in the context of AR, Osterloh shared some rare thoughts on smart glasses, suggesting that the computational power now available in Google’s custom chips makes AR applications far more viable than they were during the original Google Glass era.
The integration of Gemini AI across all Google devices isn’t just about making phones smarter—it’s about creating a seamless experience across multiple device categories. Imagine smart glasses that can instantly translate conversations, provide contextual information about your surroundings, or seamlessly hand off tasks to your phone or laptop. The Tensor chips being developed today are the foundation for that future.
The Ecosystem Play: It’s All Connected
The event amounted to an AI victory lap, with Google showcasing Gemini on the Pixel Watch 4, the new foldable, and its earbuds. The company’s strategy becomes clearer when you look at the entire product lineup announced there: the Pixel Watch 4, new Pixel Buds Pro 2, and even the magnetic Pixelsnap accessories aren’t standalone products; they’re components of a larger ecosystem designed to work together seamlessly.
The Pixel Watch 4, in particular, demonstrates Google’s commitment to ambient computing. With Gemini AI built directly into the watch, users can access intelligent assistance without pulling out their phones. This is crucial preparation for a future where our primary computing interface might not be a smartphone at all, but rather a combination of wearable devices working in harmony.
The new Pixel Buds Pro 2, available in a new Moonstone color to complement the Pixel 10 phones, represent another piece of this puzzle. These aren’t just wireless earbuds—they’re AI-powered audio interfaces that can provide information, translations, and assistance directly to your ears without requiring visual attention to a screen.
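As a concrete illustration of how live translation can run without the cloud, here is a short sketch using Google’s ML Kit on-device Translation API in Kotlin. To be clear, this is a hypothetical stand-in, not what the Pixel Buds actually run; it simply shows the pattern of downloading a language model once and then translating locally, with no network round trip per phrase.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Translate a phrase entirely on-device. After a one-time model
// download, the text never leaves the phone.
fun translateLocally(text: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.SPANISH)
        .build()
    val translator = Translation.getClient(options)

    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated ->
                    onResult(translated)
                    translator.close() // release the local model
                }
                .addOnFailureListener { it.printStackTrace() }
        }
        .addOnFailureListener { it.printStackTrace() }
}
```

Pipe recognized speech from the earbuds’ microphones through a function like this, and the translated text (or synthesized audio) can come back without ever touching a server, which is exactly the kind of screen-free assistance described above.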
What Google Isn’t Saying (But Should Be)
Reading between the lines of Osterloh’s interview reveals some fascinating insights into Google’s strategic thinking, including some surprising decisions about what the company won’t be pursuing in the near term.
According to recent reports, Google has made the strategic decision to halt development of new Pixel tablets and has decided against developing a smart ring, despite the success of competitors like Samsung in these categories. This isn’t about giving up on those form factors—it’s about focus. Google is concentrating its resources on the technologies that will matter most in the next phase of computing.
The company’s decision to skip certain product categories while doubling down on AI and AR capabilities suggests they believe the traditional categories of “phone,” “tablet,” and “laptop” are about to become much less relevant. Why build a tablet when you could build smart glasses that project a virtual display anywhere you look?
The Privacy Paradox
One of the most intriguing aspects of Google’s current strategy is how they’re handling privacy concerns. The new Pixel devices emphasize on-device AI processing, which means many of the smart features work without sending data to Google’s servers. This represents a significant shift for a company that has historically relied on cloud-based processing for most AI features.
This approach isn’t just about privacy—it’s about preparing for a future where always-on, always-listening devices need to process information instantly without network delays. Smart glasses, in particular, will need to provide information in real-time, making local processing capabilities essential.
The Competition Landscape
Google’s timing couldn’t be more strategic. While Apple focuses on incremental improvements to the iPhone and Samsung pursues foldables and wearables, Google is positioning itself for the next major platform shift, betting that AI can transform computing for everyone.
The company’s willingness to keep the Pixel phones visually similar year-over-year while completely revolutionizing the internal capabilities shows a level of confidence that’s remarkable. They’re not trying to win the smartphone design wars—they’re building the foundation for whatever comes after smartphones.
What This Means for Consumers
For regular consumers, the immediate impact is a phone that gets smarter over time rather than just faster. The AI capabilities in the Pixel 10 series will continue to improve through software updates, learning from user behavior and adding new features that weren’t even possible when the device was first purchased.
But the longer-term implications are much more significant. Google is essentially asking consumers to buy into a vision of the future where their phone is just one part of a larger connected ecosystem. The investment you make in a Pixel 10 today is really an investment in Google’s vision for the next decade of computing.
The Road Ahead
Looking at Google’s current trajectory, it’s clear that the company is preparing for a major platform transition. The Pixel 10 launch represents the culmination of years of AI development, but it’s also the foundation for whatever comes next.
In the interview, Osterloh explained where AI is headed on the company’s latest devices and how Gemini is changing everything for the Pixel line. The integration of Gemini across all Google hardware isn’t just about making current devices better; it’s about creating the infrastructure for entirely new categories of devices.
The question isn’t whether Google will eventually launch new smart glasses or other AR devices—it’s when. The Pixel 10 series gives us a preview of the AI capabilities that will power those future devices, and if the current performance is any indication, the wait will be worth it.
The Beginning of the End
The Pixel 10 launch might look like just another smartphone release, but it represents something much more significant. Google is quietly building the technological foundation for a post-smartphone world, and the AI capabilities demonstrated in their latest devices are just the beginning.
While other companies are still fighting the last war—trying to build better rectangles with screens—Google is preparing for the next one. The combination of powerful on-device AI, seamless ecosystem integration, and a clear vision for augmented reality suggests that the company is closer than ever to delivering on the promise of truly ambient computing.
For consumers, the choice is becoming clear: you can buy a phone that’s a little bit better than last year’s model, or you can invest in a platform that’s being built for the future of computing. The Pixel 10 series represents Google’s bet on that future, and based on what we’ve seen so far, it’s a bet worth taking seriously.
The smartphone era as we know it is coming to an end. The question is: are you ready for what comes next?
