Can OpenAI and Jony Ive Redefine Personal Tech?
- BY MUFARO MHARIWA
- Jun 18
- 3 min read

For years, tech giants have tried to create the ultimate AI companion. A tool that doesn’t just sit in your pocket or on your desk, but actively helps you wherever you are. Something that can translate in real time, nudge you with reminders, crunch quick calculations, or answer questions on the fly.
We’ve seen glimpses of this in Siri, Alexa, and Google Assistant. But they’re still tied to clunky form factors like smartphones and smart speakers. Wearables like smartwatches, AR glasses, and even AI pins have tried to make the experience more seamless, but most have fallen short.
That hasn’t stopped the ambition. And now, two heavyweights, OpenAI and Jony Ive, are stepping into the ring, with billions in backing and a bold idea: to build an intelligent assistant that blends into your life as effortlessly as your own thoughts.
Who’s Involved?
It’s not just any pairing. This project brings together two of the most influential names in tech: one from the future of AI, the other from the golden age of industrial design.
Jony Ive, the former Chief Design Officer at Apple, shaped the modern era of personal tech. From the iMac to the iPhone, his minimalist philosophy and obsession with elegance turned hardware into cultural icons.
Then there’s Sam Altman, CEO of OpenAI. Under his leadership, ChatGPT went from an experimental chatbot to a household name, becoming one of the fastest-growing consumer products in history. His company has already changed how we write, code, and communicate. Now, it wants to change how we carry AI with us.
The hardware design is being led by LoveFrom, Ive’s design firm. And the project reportedly has up to $6.5 billion in funding, with SoftBank said to be among the potential backers.
It’s an all-star lineup, one that’s betting big on a category that has so far disappointed more often than it’s delivered.
Why Other Attempts Have Failed
Plenty of companies have flirted with the idea of an AI companion. Most got the theory right, but the execution? Not so much.
Take the Humane AI Pin. It promised a sleek, screenless future: something you could clip onto your jacket and speak to like a Star Trek badge. But in reality, it was fiddly, unreliable, and far too limited to replace a smartphone. Most people didn’t want to pay a premium for a half-baked assistant that struggled with basics.

Then there’s the Rabbit R1. Billed as a personal AI that could manage tasks across different apps, it aimed to ditch screens and menus entirely. What users got instead was a buggy orange gadget with limited real-world utility. It was more proof-of-concept than product.

These devices stumbled because they tried to reinvent the wheel without the right parts. The lesson is clear: it’s not enough to shrink tech down and slap on AI. You need the right mix of intelligence, interface, and human-centred design, and no one has quite nailed that combination yet.
What We Know About the Device
While the product still doesn’t have a name, a reveal date, or even an official image, the concept is clear: a wearable or portable AI assistant that lives with you, not in your pocket, and not behind glass.
It’s designed to function through voice, gesture, and sensor input, creating a more seamless experience than current devices. The idea is to achieve what’s known as ambient computing: technology that quietly adapts to you, rather than you having to adapt to it.
This new device, at least in theory, aims to get both halves right: pairing OpenAI’s deep learning intelligence with Jony Ive’s unparalleled grasp of human-centred design.
Where Tech Might Be Headed
This collaboration is about more than just launching a new gadget; it’s about rethinking how we live with technology. If successful, this AI wearable could mark the beginning of a post-smartphone era, where intelligent tools blend into daily life rather than dominating it.
Instead of pulling out a device and tapping through apps, you might speak a thought aloud, gesture subtly, or simply move, and your assistant responds. It’s a shift towards invisible computing: no screens, no apps, just presence and utility.
If done right, this could:
Reframe how we use AI — making it ambient, not interruptive.
Change how we engage with tech in public spaces — quietly and without drawing attention.
Set a new design language — where simplicity isn’t a feature, it’s the foundation.
It’s a bold vision, but one that feels timely. With growing fatigue around screen time, privacy concerns, and feature-bloated devices, the world might be ready for something quieter, smarter, and more intentional.