
Presented by Solidigm
As AI adoption grows, data centers are facing a significant bottleneck in storage – and traditional HDDs are at the center of it. Data that once lay idle as cold archives is now being put to constant use to build more accurate models and produce better predictive results. The shift from cold data to hot data requires low-latency, high-throughput storage that can handle parallel computations. HDDs will remain the workhorse for low-cost cold storage, but without rethinking their role, the high-capacity storage layer risks becoming the weakest link in the AI factory.
"Modern AI workloads, combined with data center constraints, have created new challenges for HDDs," says Jeff Janukovich, IDC’s vice president of research. "While HDD suppliers are addressing data storage growth by offering larger drives, this often comes at the cost of slower performance. As a result, the concept of ‘Nearline SSD’ is becoming a relevant topic of discussion within the industry."
Today, AI operators need to maximize GPU utilization, efficiently manage network-attached storage, and scale compute, all while cutting costs on increasingly scarce power and space. In an environment where every watt and every square inch counts, success requires more than a technology refresh, says Roger Correll, senior director of AI and leadership marketing at Solidigm. It demands a deeper realignment.
"This speaks to a tectonic shift in the value of data for AI," Correll says. "This is where high-capacity SSDs come into play. Along with capacity, they bring performance and efficiency, enabling exabyte-scale storage pipelines to keep pace with the constant growth of data set sizes. All of this consumes power and space, so we need to do it as efficiently as possible to enable greater GPU scale in this constrained environment."
Higher-capacity SSDs aren't just displacing HDDs; they're removing one of the biggest bottlenecks on the AI factory floor. By providing massive gains in performance, efficiency, and density, SSDs free up the power and space needed to expand GPU scale. This is less a storage upgrade than a structural change in how data infrastructure is designed for the AI age.
HDD vs. SSD: More than just a hardware refresh
HDDs are impressive feats of mechanical engineering, but their many moving parts consume far more energy, occupy more space, and fail at higher rates than solid-state drives. Reliance on spinning platters and mechanical read/write heads inherently limits input/output operations per second (IOPS), creating bottlenecks for AI workloads that demand low latency, high concurrency, and sustained throughput.
HDDs also struggle with latency-sensitive tasks, because the physical act of seeking data introduces mechanical delays unsuitable for real-time AI inference and training. Their power and cooling requirements also climb sharply under frequent, intensive data access, eroding efficiency as data volumes grow and data gets hotter.
To quantify the difference, Solidigm and VAST Data completed a study examining the economics of data storage at exabyte scale (a quintillion bytes, or a billion gigabytes), comparing storage power consumption for SSD-based versus HDD-based systems over a 10-year period. The SSD-based VAST solution cuts energy costs by roughly $1 million per year, and in AI environments where every watt counts, that is a decisive advantage.
As a baseline, it takes four 30TB HDDs to match the capacity of a single 122TB Solidigm SSD. Incorporating VAST's data reduction technologies, made practical by the improved performance of SSDs, the exabyte-scale solution requires 3,738 Solidigm SSDs versus more than 40,000 high-capacity HDDs. The study found that the SSD-based VAST solution consumes 77% less storage energy.
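For readers who want to sanity-check the drive counts, here is a minimal back-of-the-envelope sketch in Python. The 2.2:1 data-reduction ratio is our assumption, inferred from the published 3,738-drive figure rather than quoted from the study; likewise, redundancy, spares, and overprovisioning (not modeled here) would plausibly explain why the study's HDD count exceeds the raw total.

```python
# Back-of-the-envelope sketch of the exabyte-scale drive math described above.
# NOTE: the 2.2:1 data-reduction ratio is an assumption inferred from the
# published 3,738-SSD figure; it is not a number quoted in the study itself.

EXABYTE_TB = 1_000_000  # 1 EB = 10^18 bytes = 1,000,000 TB (decimal)

SSD_CAPACITY_TB = 122   # Solidigm 122TB QLC SSD
HDD_CAPACITY_TB = 30    # high-capacity nearline HDD

# Raw drive counts, with no data reduction or redundancy overhead.
raw_ssds = EXABYTE_TB / SSD_CAPACITY_TB   # ~8,197 drives
raw_hdds = EXABYTE_TB / HDD_CAPACITY_TB   # ~33,334 drives

# Data reduction is practical on SSDs because the drives are fast enough to
# absorb the extra read/compare work; assume ~2.2:1 effective reduction.
ASSUMED_REDUCTION = 2.2
reduced_ssds = raw_ssds / ASSUMED_REDUCTION  # ~3,726, near the 3,738 cited

print(f"Raw SSDs for 1 EB:  {raw_ssds:,.0f}")
print(f"Raw HDDs for 1 EB:  {raw_hdds:,.0f}")
print(f"SSDs with ~{ASSUMED_REDUCTION}:1 reduction: {reduced_ssds:,.0f}")
```

Running the sketch gives roughly 8,197 raw SSDs, 33,334 raw HDDs, and about 3,726 SSDs after the assumed reduction, in line with the figures the study reports.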
Minimizing data center footprint
"We are shipping 122-terabyte drives to some of the world’s top OEMs and leading AI cloud service providers." Coral says. "When you compare all-122TB SSDs to hybrid HDD + TLC SSD configurations, they’re getting a nine-to-one savings in data center footprint. And yes, it’s important in these giant data centers that are building their own nuclear reactors and signing huge power purchase agreements with renewable energy providers, but it’s increasingly important as you get into regional data centers, local data centers, and your edge deployments where space can come at a premium."
That nine-to-one savings is more than a line item: it lets organizations fit infrastructure into previously unavailable spaces, expand GPU scale within the same envelope, or simply run a smaller footprint.
"If you are given X amount of land and Y amount of electricity, you will use it. you are ai" Correll explains, “Where every watt and square inch counts, why not use it in the most efficient way possible? Get the most efficient storage on the planet and enable more GPUs scale within the envelope you have to fit in. On an ongoing basis, it’s also going to save you operating costs. You have 90 percent fewer storage bays to maintain, and the costs associated with that are gone."
Another often overlooked factor: the (much) larger physical footprint of data stored on mechanical HDDs translates into a larger manufacturing material footprint. Concrete and steel production together account for more than 15% of global greenhouse gas emissions. By shrinking the physical footprint of storage, high-capacity SSDs can help cut embodied concrete- and steel-related emissions by more than 80% compared with HDDs. And at the final stage of the sustainability life cycle, end of life, there are 90% fewer drives to dispose of.
Reshaping cold and archival storage strategies
The move to SSD isn't just a storage upgrade; it is a fundamental restructuring of data infrastructure strategy for the AI age, and it is gaining momentum.
"The big hyperscalers are trying to get the most out of their existing infrastructure, doing unnatural acts, if you like, overprovisioning them close to 90% to try to squeeze out as many IOPS per terabyte as possible with HDDs, but they’re starting to come around," Coral says. "The industry at large will be on that path once they turn to modern all-in-one high-capacity storage infrastructure. Additionally, we are starting to see these lessons learned on the importance of modern storage in AI applied to other segments, such as big data analytics, HPC, and many others."
He added that while all-flash solutions are approaching universal adoption, there will always be a place for HDDs. They will persist in archival and cold storage, and in scenarios where the net cost per gigabyte outweighs the need for real-time access. But as the token economy heats up and enterprises recognize the value of monetizing their data, the share of hot and warm data will only continue to grow.
Solutions to future power challenges
Now in its fourth generation, with over 122 cumulative exabytes shipped to date, Solidigm’s QLC (Quad-Level Cell) technology has led the industry in balancing high drive capacities with cost efficiency.
"We don’t think of storage as just storing bits and bytes. We think about how we can develop these amazing drives that are able to deliver benefits at the solution level," Coral says. "The shining star on that is our recently launched E1.S, which is specifically designed for dense and efficient storage in a direct attach storage configuration for next generation fanless GPU servers."
The Solidigm D7-PS1010 E1.S is a breakthrough: the industry's first eSSD with single-sided direct-to-chip liquid cooling technology. Solidigm worked with NVIDIA to address the dual challenges of heat management and cost efficiency while delivering the high performance that demanding AI workloads require.
"We are rapidly moving towards an environment where all critical IT components will be direct-to-chip liquid-cooled on the attach side." He says. "I think the market needs to focus on its approach to cooling, because the power limitations, the power challenges, are not going to go away, at least in my lifetime. They need to apply a neocloud mindset to how they are building the most efficient infrastructure."
Increasingly complex models are pressing against the memory wall, making storage architecture a front-line design challenge rather than an afterthought. High-capacity SSDs, paired with liquid cooling and efficient designs, are emerging as the way to meet AI's growing demands. The mandate now is not just efficiency, but storage infrastructure that can scale efficiently as data grows. The organizations that get storage right today will be the ones able to scale AI tomorrow.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they are always clearly marked. For more information, contact sales@venturebeat.com.

