OpenAI on Tuesday announced the launch of two open-weight AI reasoning models with capabilities similar to its o-series. Both are freely available to download from the developer platform Hugging Face, the company said, describing the models as “state of the art” as measured on several benchmarks for comparing open models.
The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory.
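A quick back-of-the-envelope calculation shows why a roughly 20-billion-parameter model can plausibly fit in 16GB of RAM. The ~4-bit weight precision used here is an assumption for illustration; the article only states the 16GB figure, not how the weights are quantized.

```python
# Sketch: memory footprint of model weights alone, ignoring activations and
# runtime overhead. Bits-per-parameter is an assumed quantization level.

def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Gigabytes needed to store n_params weights at the given precision."""
    return n_params * bits_per_param / 8 / 1e9

# ~20B parameters at an assumed 4 bits each:
print(round(weight_memory_gb(20e9, 4), 1))  # 10.0 -> leaves headroom in 16GB
# The same model at full 16-bit precision would not fit:
print(round(weight_memory_gb(20e9, 16), 1))  # 40.0
```

The comparison illustrates why aggressive weight quantization is what makes consumer-laptop inference feasible for models of this size.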
These are OpenAI’s first open-weight language models since GPT-2, which was released more than five years ago.
In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as TechCrunch previously reported. That means if OpenAI’s open model is not capable of a certain task, such as processing an image, developers can connect the open model to one of the company’s more capable closed models.
While OpenAI open sourced AI models in its early days, the company has generally favored a proprietary, closed-source development approach. The latter strategy has helped OpenAI build a large business selling access to its AI models via an API to enterprises and developers.
However, in January, CEO Sam Altman said he believes OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies. The company today faces growing pressure from Chinese AI labs, including DeepSeek, Alibaba’s Qwen, and Moonshot AI, which have developed several of the world’s most capable and popular open models. (While Meta previously dominated the open AI space, the company’s Llama AI models have fallen behind in the last year.)
In July, the Trump administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.
With the release of gpt-oss, OpenAI hopes to curry favor with developers and the Trump administration alike, both of which have watched Chinese AI labs rise to prominence in the open source space.
“When we launched in 2015, OpenAI’s mission was to ensure AGI that benefits all of humanity,” Altman said in a statement shared with TechCrunch. “To that end, we are excited for the world to build on an open AI stack created in the United States, available for free and for broad benefit, based on democratic values.”

How the models performed
OpenAI aimed to make its open models leaders among other open-weight AI models, and the company claims it has done just that.
On Codeforces (with tools), a competitive coding test, gpt-oss-120b and gpt-oss-20b score 2622 and 2516, respectively, underperforming o3 and o4-mini but outperforming DeepSeek’s R1.

On Humanity’s Last Exam (HLE), a challenging test of crowdsourced questions across a wide variety of subjects (with tools), gpt-oss-120b and gpt-oss-20b score 19% and 17.3%, respectively. Here again, the open models underperform o3 but outperform leading open models from DeepSeek and Qwen.

Notably, OpenAI’s open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini.
Hallucinations have been getting more severe in OpenAI’s latest AI reasoning models, and the company previously said it does not quite understand why. In a white paper, OpenAI says this is “expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more.”
OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated in response to 49% and 53% of questions, respectively, on the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That is higher than the hallucination rate of OpenAI’s o1 model, which scored 16%, and its o4-mini model, which scored 36%.
Training the new models
OpenAI says its open models were trained with processes similar to those used for its proprietary models. The company says each open model leverages a mixture-of-experts (MoE) architecture to tap fewer parameters for any given query, making it run more efficiently. For gpt-oss-120b, which has 117 billion parameters in total, OpenAI says the model activates only 5.1 billion parameters per token.
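The mechanism behind that active-parameter figure can be sketched in a few lines: a learned router scores all experts for each token, but only the top-scoring few actually run. This is a minimal illustrative sketch of MoE routing in general; the expert count, top-k value, and sizes below are toy values, not the real gpt-oss configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # toy number of expert feed-forward blocks
TOP_K = 2       # experts actually activated per token
D_MODEL = 16    # toy hidden dimension

# Router: a linear layer that scores every expert for a given token.
router_w = rng.normal(size=(D_MODEL, N_EXPERTS))

# Each "expert" is a tiny feed-forward layer here.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w                # (N_EXPERTS,) routing scores
    top = np.argsort(logits)[-TOP_K:]    # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen experts only
    # Only TOP_K of N_EXPERTS experts execute; the rest stay inactive,
    # which is what keeps the active parameter count far below the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

With 2 of 8 toy experts active, only a quarter of the expert parameters run per token; the same principle, scaled up, is how a 117B-parameter model can activate just 5.1B parameters per token.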
The company also says its open models were trained using high-compute reinforcement learning (RL), a post-training process to teach AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. This was also used to train OpenAI’s o-series models, and the open models have a similar chain-of-thought process in which they take additional time and computational resources to work through their answers.
As a result of the post-training process, OpenAI says its open AI models excel at powering AI agents and are capable of calling tools such as web search or Python code execution as part of their chain-of-thought process. However, OpenAI says its open models are text-only, meaning they will not be able to process or generate images and audio like the company’s other models.
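The agent behavior described above boils down to a loop: the model either requests a tool or produces a final answer, and tool results are fed back into its context. The sketch below is purely illustrative; the stub model, the `CALL`/`FINAL` protocol, and names like `run_python` and `web_search` are invented for this example and are not the actual gpt-oss tool API.

```python
from typing import Callable

def run_python(code: str) -> str:
    # Illustrative stand-in for a sandboxed code-execution tool:
    # evaluates a simple arithmetic expression with builtins disabled.
    return str(eval(code, {"__builtins__": {}}))

def web_search(query: str) -> str:
    # Stub standing in for a real search backend.
    return f"[top result for: {query}]"

TOOLS: dict[str, Callable[[str], str]] = {"python": run_python, "search": web_search}

def fake_model(transcript: list[str]) -> str:
    """Stub model: first requests a tool, then answers using its result."""
    if not any(line.startswith("TOOL_RESULT") for line in transcript):
        return "CALL python 2 + 2"
    return "FINAL the answer is " + transcript[-1].split(" ", 1)[1]

def agent_loop(question: str, max_steps: int = 5) -> str:
    transcript = [question]
    for _ in range(max_steps):
        reply = fake_model(transcript)
        if reply.startswith("FINAL"):
            return reply.removeprefix("FINAL ").strip()
        _, tool, arg = reply.split(" ", 2)   # e.g. "CALL python 2 + 2"
        transcript.append(f"TOOL_RESULT {TOOLS[tool](arg)}")
    return "gave up"

print(agent_loop("what is 2 + 2?"))  # the answer is 4
```

Swapping the stub for a real model call is the only conceptual change needed: the loop structure, tool dispatch, and result feedback are the same regardless of which reasoning model drives it.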
OpenAI is releasing gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license, which is generally considered one of the most permissive. The license will allow enterprises to monetize OpenAI’s open models without paying, or obtaining permission from, the company.
However, unlike fully open source offerings from AI labs like AI2, OpenAI says it will not release the training data used to create its open models. That is not surprising, given that several active lawsuits against AI model providers, including OpenAI, allege these companies improperly trained their AI models on copyrighted works.
OpenAI delayed the release of its open models several times in recent months, partially to address safety concerns. Beyond the company’s typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its gpt-oss models to be more helpful in cyberattacks or in the creation of biological or chemical weapons.
After testing by OpenAI and third-party evaluators, the company says gpt-oss may marginally increase biological capabilities. However, it did not find evidence that these open models could reach its “high capability” threshold for danger in those domains, even after fine-tuning.
While OpenAI’s models appear to be state-of-the-art among open models, developers are eagerly awaiting DeepSeek R2, the lab’s next AI reasoning model, as well as a new open model from Meta’s Superintelligence Lab.

