
Speaking at Emerson Collective’s Demo Day in San Francisco on Saturday (Nov. 22), OpenAI CEO Sam Altman and famed former Apple designer Jony Ive said OpenAI’s first device will be ready in “less than” two years.
Ive said the team has already built a working prototype and plans to bring the device to market in "even less than" two years. He said the design centers on creating something people can use without hesitation, suggesting a device meant for everyday use rather than a specialized tool. While at Apple, Ive led the design team for iconic products including the iMac, iPhone and iPad.
“I also love incredibly intelligent, sophisticated products that you want to touch, and you feel no intimidation and that you want to use almost carelessly,” Ive said. “That you use them almost without thought, that they are just tools.”
Altman said the intelligence behind the device should carry enough of the work that the hardware can recede into the background. He said the product aims to reduce the number of steps required to interact with AI and allow users to rely on natural inputs. His comments suggest the company wants the device to act as an ambient assistant rather than another screen.
Both described a system that could understand a user’s activity across reading, communication and daily context, suggesting a deeper level of personal assistance than current consumer devices offer.
In May, PYMNTS reported that OpenAI acquired Jony Ive’s device startup io for about $6.4 billion, creating a dedicated hardware division inside the company. The acquisition unified Ive’s design direction with OpenAI’s expanding push into physical computing.
PYMNTS later reported comments from OpenAI CFO Sarah Friar, who described the device as “multimodal” and “provocative.” She said the product will support text, sound and sight and will not require users to look at a screen. Her remarks outlined how the device could function as a front door to OpenAI’s multimodal models and serve users through more natural interaction.
While vague, the statements from Ive and Altman offer clues about the new device. They show OpenAI moving toward a device that avoids the bulk and sensory load of AR and VR headsets and instead leans into a lighter, more seamless form. The approach contrasts with Meta's glasses, which add visual overlays and require users to stay within a defined display area. OpenAI appears to be taking the opposite path by removing visual hardware altogether and centering the experience on simplicity, ambient awareness and a nearly invisible interface.
Source: https://www.pymnts.com/
