NEWS | May 22, 2025

A New Strategy for High-Performance Computing at Carderock 

By Alisha Tyer, NSWC Carderock Division Public Affairs

When Naval Surface Warfare Center Carderock’s engineers model failure conditions, simulate acoustics or optimize hydrodynamics, they need to process massive amounts of data and the computing power to do it. Fortunately, Carderock received a major upgrade to its High-Performance Computing (HPC) capability on May 9, 2025, increasing computing speed and scale across the command. This upgrade, which goes beyond a typical “tech refresh,” changes how Carderock maintains its technical advantage.

Through a first-of-its-kind leasing effort, Carderock secured modern, 8,300-core HPC servers. These promise faster turnaround, reduced lag in procuring new systems, greater efficiency, and fewer obstacles for technical teams tackling critical naval research programs.   

HPC is not new to Carderock. The Ship Engineering and Analysis Technology Center (SEATech) has managed HPC and related resources for design and engineering since 1988, administered by the Office of the CIO & Information Technology Division since 1994. However, before the recent lease, acquiring high-performance computing power at Carderock was a slow, piecemeal process.

Funding limits — particularly the $500,000 cap on service cost center (SCC) purchases — meant server clusters were bought in stages over the course of years. This resulted in building systems node by node, often replacing or expanding based on affordability rather than necessity.  

“There’s a high demand from the tech codes that they need more compute power, so we will buy a portion of the cluster based on the budget,” said Patricia McCarthy, head of the Enterprise Services Branch (Code 1044), “but we can only buy part of a cluster, and we have to split purchases year to year between classified and unclassified, high side and low side. By the time we get back to the original cluster, it may be four years later and it’s out of support [contract]. Whatever cores your project runs on will operate at the lowest common denominator of speed.”  

The result was a patchwork of compute nodes from different years and generations, some dating back to fiscal year 2020, others from FY21, FY24, or later. By the time the newest arrived, the oldest were unsupported, causing uneven performance and bottlenecks.  

For engineers and researchers, that often meant longer wait times, delayed results, or the frustration of managing jobs across inconsistent hardware. Carderock’s existing infrastructure couldn’t keep up with demand, and maintaining the status quo meant falling further behind.  

The shift came with a single question: “What if we leased instead?”

“It was Mike Kirby who proposed the idea to me,” said McCarthy. “I went back to my cluster vendors, and they said, it’s funny you should ask, we’re just starting to get into that market.”  

With support from Senior Technologist Jonathan Stergiou and other technical and contracting advisors, Kirby and McCarthy developed a strategy for faster, more powerful computing without traditional capital acquisition constraints.

“We realized we could lease an 8,300-core cluster for only slightly more than what we were already spending for a 3,000-core cluster,” she said. “We had to ensure the lease didn’t trigger capital investment rules, which meant carefully reviewing the terms, the lifespan of the equipment, and whether any purchase options were included. It took a lot of coordination to make it work.”  

A collaborative effort across technical, programmatic, and contracting teams resulted in a first-of-its-kind lease agreement for Carderock, delivering more than double the computational power without doubling the costs.  

Because the system was acquired as a single, homogeneous cluster, users no longer compete for newer nodes or delay jobs while awaiting new servers. Every node offers the same high processing power, enabling better scheduling and more predictable performance.  

The new HPC clusters also streamline workflow, allowing users to run full-scale simulations and refine large jobs locally before submitting them to Defense Department supercomputing centers.  

For engineers like Ben Mullen, a Computational Fluid Dynamics (CFD) engineer, the new system isn’t just faster; it’s transformational. The increased core capacity allows his team to run larger, more complex simulations in significantly less time.

“These new HPC clusters significantly boost our workflow and efficiency. We can tackle more complex simulations, manage data more easily, and even de-risk large jobs by running full-scale tests before sending them to DoD systems. It’s a huge improvement in both flexibility and productivity,” said Mullen.   

Engineers across the command, from CFD modelers to hydroacoustic analysts, now have faster, better access to the computing power they need. This includes detachments and offsite teams previously lacking local resources for high-fidelity jobs.   

“They already know how to model, how to grid, how to set parameters,” said Peter Clark. “What they’ve needed is a bigger sandbox, and now they have it.”  

For junior engineers, the upgrade also supports mentorship. McCarthy noted that most users learn by shadowing experienced colleagues. Reliable, responsive systems make that process smoother, helping the next generation of innovators work faster.

The HPC lease team’s innovation earned the 2024 Department 10 Honorary Team Award, highlighting collaboration across technical, programmatic, and contracts personnel. By leveraging a flexible and innovative acquisition model, the command set a precedent for smarter, faster, and more adaptable procurement.  

Maintaining the system is as crucial as acquiring it. The lease agreement includes a full-time onsite engineer from Penguin Computing working alongside SEATech staff on system health, covering routine updates, patching, zero-day response, and node troubleshooting.

Looking ahead, Carderock is exploring cloud-based HPC solutions. In the coming months, select engineers will test workloads in a secure GovCloud environment in partnership with Amazon Web Services. The goal is to compare performance, flexibility, and cost-effectiveness as the command prepares for future hybrid computing needs.  

Whether through future leases, cloud partnerships, or new storage strategies, Carderock’s approach to increasing capacity and capability continues to evolve.