
ZDNET's key takeaways
- Frontier AI models passed a mock CFA Level III exam.
- Less than half of human candidates passed the exam in February.
- AI is rapidly getting better at some tasks.
Some tasks that humans regard as Herculean cognitive efforts are trivial for AI systems, which are designed to detect and reproduce complex patterns gleaned from huge troves of data. The technology has already earned top marks in world-class mathematics and coding competitions; soon, according to some developers, it could help human researchers make new scientific discoveries.
Also: OpenAI tested GPT-5, Claude, and Gemini on real-world tasks - the results were surprising
Now, AI's capabilities are also rapidly catching up with those of the most proficient human financial analysts.
A new study conducted by New York University's Stern School of Business and the AI-powered wealth management platform Goodfin found that some frontier AI models were able to pass a mock version of the Chartered Financial Analyst (CFA) Level III exam, which is widely considered one of the world's most difficult and prestigious professional benchmarks.
Passing the test
The study analyzed 23 industry-leading models, both proprietary and open source, from developers including Google, OpenAI, Anthropic, Meta, xAI, and DeepSeek.
Also: AI helps strong dev teams and hurts weak ones, according to Google's 2025 DORA report
Previous studies had shown that AI could pass the CFA Level I and II exams but struggled with the third and final (and most difficult) stage. The Level III exam, which is designed to test candidates' ability to apply their knowledge of portfolio management and wealth planning to fictional real-world scenarios, includes a set of multiple-choice questions as well as essay questions.
"This dual format comprehensively tests higher-order cognitive skills, including analysis, synthesis, and professional judgment over rote memorization," the researchers behind the new study wrote in a paper originally posted to the preprint server arXiv in June. "The exam's rigorous standards … make it an excellent benchmark for assessing advanced financial reasoning capabilities."
Just under half (49%) of human candidates passed the Level III exam in February, according to the CFA Institute.
The results
A handful of reasoning models, which specialize in breaking problems down into a series of sub-problems that are then tackled step by step, were able to pass the mock exam.
Also: This app will pay you $30/day to record your phone calls for AI - but is it worth it?
OpenAI's o4-mini took the top position with an overall score of 79.1%, followed by Google's Gemini 2.5 Flash, which scored 77.3%. The passing threshold for the exam is 63%.
The researchers noted in their paper that while most of the models included in the study scored in the same range on the multiple-choice segment of the exam (about 71%-75%), their scores diverged more widely on the more rigorous and challenging essay section.
"This suggests that simpler tasks are becoming commoditized across models, while complex, nuanced reasoning still separates the frontier and reasoning-focused models from their peers," they wrote.
Also: How AI can help you manage your finances (and what to watch out for)
A recent report from Microsoft listed personal financial advisors as one of forty job categories most likely to be impacted by AI. Nevertheless, Anna Joo Fee, founder and CEO of Goodfin, told CNBC that she is not worried about imminent replacement.
"There are things like context and intent that are difficult for the machine to assess right now," she said. "That's where a human shines: in understanding your body language and cues."

