An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to squeeze massive performance gains out of reasoning AI models for much longer. According to the report's findings, progress from reasoning models could slow down as soon as within a year.
Reasoning models such as OpenAI's o3 have led to substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. The models can apply more computing to problems, which can improve their performance, with the downside that they take longer than conventional models to complete tasks.
Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.
So far, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training.
That's changing. OpenAI has said that it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning, using far more computing power for it, even more than for initial model training.
But there's still an upper bound to how much computing can be applied to reinforcement learning, per Epoch.

Josh You, an analyst at Epoch and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. Reasoning training progress will "probably converge with the overall frontier by 2026," he continues.
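The convergence argument can be illustrated with a rough back-of-the-envelope calculation: reinforcement learning compute cannot grow faster than overall training compute once it makes up most of a training run, so the faster trend must eventually flatten to the slower one. The starting fraction and growth rates below are illustrative assumptions, not figures from the Epoch report:

```python
# Sketch of why fast reinforcement-learning (RL) compute scaling must
# converge with the slower overall training-compute trend.
# All starting values are hypothetical assumptions for illustration.

rl_growth_per_year = 10 ** (12 / 4)   # assume 10x every 4 months ~ 1000x/year
frontier_growth_per_year = 4          # standard training compute ~4x/year

rl = 0.001        # assumed: RL is 0.1% of a frontier run's compute at t=0
frontier = 1.0    # total frontier training compute, normalized

years = 0.0
while rl < frontier:
    # advance both trends in quarter-year steps
    rl *= rl_growth_per_year ** 0.25
    frontier *= frontier_growth_per_year ** 0.25
    years += 0.25

print(f"RL compute catches up to the frontier after ~{years:.2f} years")
# Once RL compute is most of the total, it can only grow as fast as
# total compute, so gains from scaling RL specifically must slow.
```

Under these assumed numbers the catch-up happens in well under two years, which is the shape of the timeline the analysis describes.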
Epoch's analysis makes a number of assumptions, and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove to be challenging for reasons besides computing, including high overhead costs for research.
"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," You writes. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."
Any indication that reasoning models may reach some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, such as a tendency to hallucinate more than certain conventional models.