OpenAI is reportedly accelerating work on advanced audio artificial intelligence as it prepares to launch a personal device that relies primarily on voice rather than screens, a shift toward audio-first computing. The company has reorganized teams to improve its audio AI, which currently lags behind its text models, and is developing a new architecture that produces more natural, emotionally expressive speech and handles conversational interruptions more gracefully. The improved model is expected to arrive in early 2026. Sources say the first device will be “largely audio-based,” and OpenAI is considering a family of gadgets, potentially including smart glasses or screenless smart speakers, with design input from former Apple design chief Jony Ive following OpenAI’s 2025 acquisition of his hardware company io. The goal is to move beyond traditional screens toward more ambient, voice-driven AI interaction.