AMD unveiled its sweeping vision for an end-to-end integrated AI platform and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its annual Advancing AI event.
The Santa Clara, California-based chip maker announced its new AMD Instinct MI350 Series accelerators, which deliver four times the AI compute of the prior generation and a 35-fold leap in inferencing performance.
AMD and its partners showcased the continued momentum of AMD Instinct-based products and the AMD ROCm ecosystem. The company also presented its powerful new open rack-scale designs and a roadmap that extends leadership rack-scale AI performance beyond 2027.
“We can now say that we are at an inflection point, and AI will be the driver,” AMD CEO Lisa Su said in a keynote at the Advancing AI event.
In closing, in a jab at Nvidia, she said, “The future of AI will not be built by any one company or within a closed system. It will be shaped by open collaboration across the industry, with everyone bringing their best ideas.”

AMD unveiled the Instinct MI350 Series GPUs, which set a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a four-fold generation-on-generation increase in AI compute and a 35-fold generational leap in inferencing, paving the way for transformative AI solutions across industries.
On stage with Lisa Su, OpenAI CEO Sam Altman said, “We are extremely excited about the work we are doing with AMD.”
He said that when he first heard the specs for the MI350 from AMD, he could not believe it, and he was grateful that AMD took his company’s feedback.

AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure, already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors and AMD Pensando Pollara network interface cards (NICs) in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI). It also previewed its next-generation AI rack, called Helios.
Helios will be built on the next-generation AMD Instinct MI400 Series GPUs, Zen 6-based AMD EPYC “Venice” CPUs and AMD Pensando “Vulcano” NICs.
“I think they are targeting a different type of customer compared to Nvidia,” said Ben Bajarin, analyst at Creative Strategies, in a message to GamesBeat. “Specifically, I think they see an entire host of opportunities in neoclouds, tier-two and tier-three clouds, and on-premises enterprise deployments.”
Bajarin added, “We remain bullish on the shift to full-rack deployed systems, and this is where Helios fits in, aligning in timing with Nvidia’s Rubin. But as the market changes, and we are at the beginning of that change, AMD is well positioned to compete for share, and for what may be a very different customer profile.”
AMD’s open-source AI software stack, ROCm 7, the latest version, is engineered to meet the growing demands of generative AI and high-performance computing workloads while dramatically improving the developer experience across the board. (ROCm, short for Radeon Open Compute, is an open-source software platform that enables GPU-accelerated computing on AMD GPUs, particularly for high-performance computing and AI workloads.) ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility, and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.
In her keynote, Su said, “Openness should be more than just a buzzword.”
The Instinct MI350 Series exceeded AMD’s five-year goal of a 30-fold improvement in the energy efficiency of AI training and high-performance computing nodes, ultimately delivering a 38-fold gain. AMD also unveiled a new 2030 goal: a 20-fold increase in rack-scale energy efficiency from a 2024 base year, which would enable a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity.
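The arithmetic behind those two headline figures can be sketched in a few lines. This is an illustrative back-of-the-envelope model using only the numbers stated in the article; the assumption that electricity use scales inversely with the efficiency multiple is ours, not AMD's published methodology.

```python
# Back-of-the-envelope check of AMD's stated 2030 goal: a 20x gain in
# rack-scale energy efficiency (2024 base year). The "energy scales as
# work / efficiency" model below is a simplifying assumption.

racks_today = 275      # racks a typical AI model needs for training today
efficiency_gain = 20   # targeted rack-scale energy-efficiency multiple

# At 20x energy efficiency, the same training run consumes roughly
# 1/20th of the electricity, i.e. a 95% reduction.
relative_energy = 1 / efficiency_gain
savings = 1 - relative_energy

print(f"relative energy use: {relative_energy:.0%}")  # 5%
print(f"electricity savings: {savings:.0%}")          # 95%
```

Note that the 20x efficiency figure maps directly onto the 95% electricity claim: using one twentieth of the energy is exactly a 95% saving.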
AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started on AI projects and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-generation compute. Strategic collaborations with leaders such as Hugging Face, OpenAI and Grok are proving the power of open solutions. The announcement drew some cheers from the audience when the company said it would give developer credits to attendees.
Broad partner ecosystem shows AI progress powered by AMD

AMD customers discussed how they are using AMD AI solutions to train today’s leading AI models, power inference at scale, and accelerate AI exploration and development.
Meta detailed how it has leveraged multiple generations of AMD Instinct and EPYC solutions across its data center infrastructure, with the Instinct MI300X broadly deployed for Llama 3 and Llama 4 inference. Meta continues to work closely with AMD on its AI roadmap, including plans to adopt the MI350 and MI400 Series GPUs and platforms.
Oracle Cloud Infrastructure is among the first industry leaders to adopt AMD’s open rack-scale AI infrastructure with Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors so customers can build, train, and run inference on AI models at scale.

Microsoft announced that the Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure, leveraging the full spectrum of computing platforms only AMD can provide.
In the keynote, Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
“They can get the most from the hardware they are using,” a Red Hat representative said on stage.
Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value for customers, and shared plans to offer a broad portfolio of UALink products to support next-generation AI infrastructure. Marvell joined AMD to share the UALink switch roadmap, part of the first truly open interconnect, bringing ultimate flexibility to AI infrastructure.