# Intro to GPT-4o

*Author: William M. Peaster*
*Published: May 14, 2024*
*Source: https://www.bankless.com/intro-to-gpt-4o*

---

This week, OpenAI introduced its new state-of-the-art AI model: [GPT-4o](https://openai.com/index/hello-gpt-4o/). It’s a beast—preliminary tests show it blowing other major models out of the water on both [basic](https://twitter.com/LiamFedus/status/1790064963966370209) and [complex](https://twitter.com/LiamFedus/status/1790064966000848911) prompt requests.

[![](https://lh7-us.googleusercontent.com/3yXUoKmtU5u09WwasKDk9MZ-egn2ckixmUMUIbzKPctyf3MEQasH2iH145s5XWluZmXIxgPQAJdJNbyhivREQIc09ipUWPbRCZ2TRnJqotZ5CLRT8Iw5phE76mtW_0eJDQPWLK7D4NIr6j1DCtgbX0A)](https://twitter.com/LiamFedus/status/1790064966000848911)*via [William Fedus](https://twitter.com/LiamFedus/status/1790064966000848911)*

The “o” in GPT-4o stands for *omni*: it was designed from the ground up to support real-time inputs and outputs via audio, text, and visuals, or any combination thereof.

This marks an upgrade over regular GPT-4, which is still powerful but mainly centered on text-based interactions. In contrast, GPT-4o is built to understand and generate content across text, images, and sound, making it a versatile tool for all sorts of applications.

Additionally, GPT-4o delivers faster voice responses, is more accurate in non-English languages, and is 50% cheaper to use via the API than its predecessor. These improvements make GPT-4o more practical for everyday use cases, with practically endless personal and professional possibilities.

[![](https://bankless.ghost.io/content/images/2024/05/image---2024-05-14T134933.190.png)](https://openai.com/index/hello-gpt-4o/)*via [OpenAI](https://openai.com/index/hello-gpt-4o/)*

For example, imagine generating a script, visual storyboard, and audio cues for a VR experience all within the *same* platform.
Or creating a metaversal art installation that can respond to visual and audio inputs from viewers through the [GPT-4o API](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700).

Of course, one of the most exciting things about GPT-4o is that, for the first time, a breakthrough model of this caliber is available for free for anyone to try, with no premium ChatGPT subscription required. As OpenAI noted in their announcement yesterday:

> “*GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks*.”

That said, if you’re interested in trying the new GPT-4o model, head over to [chatgpt.com](https://chatgpt.com/), sign in or create an account, and check whether it’s already available to you in the “Models” dropdown menu, like so:

[![](https://lh7-us.googleusercontent.com/-hn4FedugvBQ64aA5ZuOjPnMMOk5UdHcYFVTk8eXV3GM85X59KcCjQSev_VTRE2wrUXlctekg_VXlHKgeCvMVil_YK1BiuaTJb6JrIv2a4t1QcMtUvNebiVnJy9nH5ZTsHcipkJ40DCuQk6Fw75wI6E)](https://chat.openai.com/)*via [ChatGPT](https://chat.openai.com/)*

Remember that the full voice and video capabilities of GPT-4o [aren’t rolled out to everyone yet](https://twitter.com/OpenAI/status/1790130708612088054), but these will be coming in the near future.

In the meantime, get to prompting and experimenting as a free or premium user, as you’ll want to see this kind of power for yourself.
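If you’re a builder, multimodal prompts like the art-installation idea above go through the same Chat Completions endpoint as plain text. Here’s a minimal sketch in Python using the official `openai` package — the prompt text and image URL are placeholder assumptions, not from OpenAI’s docs or this article:

```python
# Sketch: sending a combined text + image prompt to GPT-4o via the
# OpenAI Chat Completions API. Requires the `openai` package and an
# OPENAI_API_KEY environment variable; the image URL is a placeholder.
import os

# A multimodal user message mixes text and image parts in one turn.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this artwork in one sentence."},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/installation.png"},
            },
        ],
    }
]

# Only attempt the network call if a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(response.choices[0].message.content)
```

The same `messages` shape works for any mix of text and images; audio input and output weren’t generally available in the API at launch, so the real-time voice demos can’t yet be reproduced this way.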
Also, be sure to check out some of the new demo videos to get a better feel for what this impressive model has to offer going forward:

- 💬 [Realtime translations](https://twitter.com/OpenAI/status/1790089500246323309)
- 😎 [Narrate your surroundings](https://twitter.com/OpenAI/status/1790089505984151940)
- 📝 [Summarize live meetings](https://twitter.com/OpenAI/status/1790089509746376893)
- 🧑‍💻️ [Coding assistance](https://twitter.com/OpenAI/status/1790130701339160887)
- 🧮 [Assistance with math problems](https://twitter.com/OpenAI/status/1790089513387143469)
- 🤳 [Point and learn foreign words](https://twitter.com/OpenAI/status/1790089515375214798)

---

*This article is brought to you by [MegaETH](https://www.bankless.com/sponsor/rabbithole-1773696872?ref=intro-to-gpt-4o)*