OpenAI announced on Monday a new generative AI model called GPT-4o with the ability to handle text, speech, and video.
OpenAI CTO Mira Murati said during a streamed presentation on Monday that GPT-4o provides "GPT-4-level" intelligence but improves on GPT-4's capabilities across multiple modalities and media.
The "o" stands for "omni," referring to the model's ability to handle text, speech, and video. It is set to roll out "iteratively" across the company's products over the next few weeks, OpenAI said.
"GPT-4o reasons across voice, text, and vision," Murati said. "And this is incredibly important because we're looking at the future of interaction between ourselves and machines."
GPT-4 Turbo, OpenAI's previous leading model, was trained on a combination of images and text; it could analyze images and text and even describe the contents of an image. But GPT-4o adds speech to the mix, greatly improving the experience in OpenAI's AI-powered chatbot, ChatGPT, which can now respond with voices in "a range of different emotive styles."
GPT-4o also upgrades ChatGPT's vision capabilities: given a photo, ChatGPT can quickly answer related questions, according to the presentation.
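For developers, that photo question-and-answer flow maps onto OpenAI's existing Chat Completions API. Below is a minimal sketch, assuming GPT-4o is exposed under the model name "gpt-4o" and accepts images as image_url content parts, the same pattern used for GPT-4 Turbo with vision; the image URL and prompt here are illustrative.

```python
# Minimal sketch: asking GPT-4o a question about a photo.
# Assumes the model is available via OpenAI's Chat Completions API
# under the name "gpt-4o", using the image_url content-part format
# already used for GPT-4 Turbo with vision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this photo?"},
                {
                    "type": "image_url",
                    # Illustrative URL; any publicly reachable image works.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```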
"We know that these models are getting more and more complex, but we want the experience of interaction to actually become more natural, and easy, and for you not to focus on the UI at all, but just focus on the collaboration with ChatGPT," Murati said. "For the past couple of years, we've been very focused on improving the intelligence of these models ... But this is the first time that we are really making a huge step forward when it comes to the ease of use."
GPT-4o is also more multilingual, with enhanced performance in around 50 languages, the company said.
The company plans to first make GPT-4o's new audio capabilities available to "a small group of trusted partners" in the coming weeks.