
ZDNET's key takeaways
- OpenAI released its long-awaited GPT-5 on Thursday.
- Some users complained that GPT-5 was inferior to its predecessor, 4o.
- In response, the company announced a flurry of changes.
OpenAI released GPT-5, the long-awaited upgrade to the model powering ChatGPT, on Thursday. In typical OpenAI fashion, the release came with plenty of twists, turns, and drama.
It was almost inevitable that the new model would disappoint a significant number of people, given how intense the anticipation for its release had become. On August 2, Xikun Zhang, a research scientist at OpenAI, wrote in an X post that "GPT-5 seems to be the most anticipated product launch in history." (The post was later deleted.)
(Disclosure: ZDNET's parent company, Ziff Davis, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Also: You can still access GPT-4o, o3, and other older models in ChatGPT
So far, GPT-5 has debuted with more of a whimper than a bang.
GPT-5's main selling point, which OpenAI has gone to great lengths to emphasize, is its ability to switch between a collection of models based on the nature of a particular prompt. While that's a good idea in theory, it has proven unwieldy for users in practice, many of whom are now demanding in frustration that the company bring back GPT-5's predecessor, 4o. GPT-5 also performed poorly in coding tests run by ZDNET.
It did not help that several charts displayed incorrect information during Thursday's GPT-5 livestream presentation, a slip-up the internet has taken to calling a "chart crime." One graph comparing the new model's performance on SWE-Bench, a test that gauges a model's coding skills, for example, displayed a comparatively low GPT-5 score inside a bar drawn taller than one representing a higher score.
Also: I tested GPT-5's coding skills, and it was so bad that I'm sticking with GPT-4o (for now)
All of this has likely kept OpenAI's engineering and media teams busy for the past handful of days as they deal with the fallout from this latest snafu. A collection of X posts from the company and its CEO, Sam Altman, starting Thursday, provides a window into how the company has responded, and the various patches that have been applied to keep the leaky GPT-5 ship afloat.
What has been updated?
A Friday X post from Altman laid out a string of updates to GPT-5 in response to the initial user feedback. They include:
- Doubling the GPT-5 rate limit for ChatGPT Plus customers. According to OpenAI's website, Plus users can now send 160 messages every three hours through GPT-5. Once that limit is hit, the model will automatically switch to a more limited mini model until the usage window resets. OpenAI also said this was "a temporary increase and will return to the previous limit in the near future."
- Allowing some paid customers to keep using 4o. In an X post on Friday, OpenAI wrote that ChatGPT Plus and Team users can access older models through an option reading "Show legacy models" in Settings. The same post said that GPT-5 had been successfully rolled out to all ChatGPT Plus, Pro, Team, and free users.
- Making it easier for users to see which model was used to respond to a given query. In the same Friday X post, OpenAI said that users can now find this information via the circular arrow icon in the menu that appears below the chatbot's response.
- Adding measures to ensure the correct model is used for each response. (Altman noted in his Friday post that the model's autoswitching mechanism was broken for part of the previous day, making GPT-5 seem "way dumber.")
Next steps
In another X post on Friday, Altman said that the initial, less-than-stellar reception for GPT-5 provided important lessons around user interface and experience, which would be taken into consideration as ChatGPT continues to develop.
Also: OpenAI's GPT-5 is now free for everyone: how to access it and everything else we know
"It has reinforced that we really need good ways to customize things for different users," he wrote. "For a silly example, some users really, really like emojis, and some never want to see one. Some users really want cold logic and some want warmth and a different kind of emotional intelligence."
Altman said the company was working to create an optimal user experience amid the constantly shifting demands of a major product rollout, all while operating within constraints imposed by the supply of GPUs. Not everyone will like the results: "Not everyone will like whatever tradeoffs we end up with, but at least you'll know how we are deciding."

