OpenAI Introduces GPT-4o

The company behind the popular ChatGPT, OpenAI, recently announced a new artificial intelligence model called GPT-4o. The “o” stands for “omni,” a nod to the model’s ability to handle text, audio, and video.

GPT-4o represents a step up from its predecessor, GPT-4 Turbo, offering improved capabilities, faster processing, and lower costs. The model is available to both free and paid users, with some features rolling out over the coming weeks.

Compared with GPT-4 Turbo, the new model processes requests significantly faster while cutting API costs by about 50 percent, raising rate limits fivefold, and supporting more than 50 languages.

OpenAI plans to introduce the new model gradually for ChatGPT Plus and Team users, with access for businesses “coming soon.” On Monday, the company also began rolling it out to free ChatGPT users, albeit with usage limits.

GPT-4o significantly improves the experience in ChatGPT, OpenAI’s chatbot. Users can interact with it in a more natural way, interrupting it mid-response and receiving replies in real time. The model can also pick up on a user’s emotional state and respond accordingly.

In addition, GPT-4o introduces new vision capabilities, allowing ChatGPT to quickly answer questions about images or the contents of a desktop screen. These features could evolve further, eventually letting ChatGPT “watch” a live sporting event and explain the rules.
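As an illustration of the image capability, here is a minimal sketch of how a question about a picture might be sent to GPT-4o through the OpenAI Python SDK; the image URL is a placeholder, and an OPENAI_API_KEY environment variable is assumed to be set.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask GPT-4o about an image by mixing text and image parts
# in a single user message (the URL below is a placeholder).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/match.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```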

GPT-4o is also more multilingual, offering improved performance in about 50 languages. In the OpenAI API and on Microsoft’s Azure OpenAI Service, GPT-4o is twice as fast and half the price of GPT-4 Turbo, with higher rate limits.
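For developers, switching to the new model is largely a matter of changing the model identifier in an existing API call. A minimal sketch, again assuming the official OpenAI Python SDK and the gpt-4o model name, with a non-English prompt to exercise the multilingual support:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Swapping "gpt-4-turbo" for "gpt-4o" is enough to get the faster,
# cheaper model through the same chat interface. The Italian prompt
# below simply exercises the multilingual support.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Che cos'è GPT-4o? Rispondi in una frase."}],
)
print(response.choices[0].message.content)
```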

During the live demonstration, GPT-4o showed that it could gauge a user’s emotional state by listening to their breathing: when it noticed a presenter was stressed, it offered advice to help them relax. The model also held conversations across different languages, translating and answering questions on the fly.

OpenAI’s announcement shows how rapidly the field of artificial intelligence is evolving. Faster, more capable models, combined with multimedia capabilities in a single omni-modal interface, are poised to change the way people interact with these tools.
