By Dylan Martin
Cornelis Networks CEO Lisa Spelman tells CRN that the channel-driven vendor’s new 400-Gbps CN5000 family of scale-out networking solutions offers the ‘best performance with devastatingly good price-performance,’ and it has bigger plans for the future.
As the growing size and complexity of AI models keep the tech industry hungry for ever-faster and more expensive data centers, Intel spin-off Cornelis Networks said its new family of high-speed networking products can help companies save money and boost overall performance for such infrastructure.
In an interview with CRN, Cornelis Networks CEO Lisa Spelman (pictured) said the company’s 400-Gbps CN5000 family of scale-out networking solutions, which launched Tuesday, offers the “best performance with devastatingly good price-performance” compared to offerings based on InfiniBand or Ethernet, including those made by Nvidia.
Cornelis spun out of Intel in 2020, roughly a year after the semiconductor giant halted development of its Omni-Path architecture for data centers running high-performance computing and AI workloads. The company also unveiled an updated road map that promises Ethernet compatibility with the next-generation, 800-Gbps CN6000 family in 2026.
[Related: Ampere Wants To Help Channel Rein In AI Server Costs With Systems Builders Program]
Spelman, a former Intel executive who became CEO of the channel-driven company last year, said Omni-Path is what allows the CN5000 networking products to deliver twice the message rate, 35 percent lower latency and up to 30 percent faster performance for high-performance computing workloads like computational fluid dynamics compared with Nvidia’s 400-Gbps InfiniBand NDR solution.
Equipped to support data center deployments of up to 500,000 nodes, the CN5000 family of products—which includes SuperNICs, switches, cabling and management software—can also deliver six times faster collective communication for AI applications than RDMA over Converged Ethernet (RoCE) solutions, according to Spelman.
“Everything on paper can look the same when you're saying, ‘Oh, okay, this is a 400-gig NIC, and this is a 400-gig NIC, and this is a 400-gig NIC,’ but it’s the architecture that starts to draw out the performance underneath that, because our assumption is everyone's going to operate at essentially the same bandwidth,” she said last Thursday.
The network improvements, made possible by features like credit-based flow control and dynamic fine-grained adaptive routing, translate to improved utilization for GPUs and other processors at the center of AI data centers, which traditionally go underused because of inefficiency in the network, according to Spelman.
“I really don't think as many people as you would think truly understand how underutilized some of that compute is,” she said.
A point of pride for Spelman is that the ASICs (application-specific integrated circuits) underlying the CN5000 switch and SuperNIC products shipped on their first manufactured revision, so-called first-pass silicon, a rarity in the semiconductor industry.
“I’ve been in the industry 25 years, and this is my first time being part of a team delivering first-pass, production-quality silicon success,” she said. “And it just shows the commitment to quality, to engineering excellence, to use of modern tools, including AI [and] emulation—all of that for our design phase, our modeling phase [and] our validation phase.”
By improving chip utilization across a data center with Cornelis’ Omni-Path-based products, infrastructure operators can run larger AI models without expanding capacity, or run the same-size model with less capacity.
“It's the first time they're seeing all this optionality in these choices about bigger models or the same model for a lower price, all driven through the network, fundamentally helping solve that compute underutilization challenge,” Spelman said.
The CN5000, which will become broadly available in the third quarter after it starts shipping to customers this month, “seamlessly interoperates” with CPUs, GPUs and accelerator chips from AMD, Intel, Nvidia and other companies, according to Spelman.
However, she added, Nvidia’s GPU-accelerated DGX systems “won’t be an area for us to play in,” but she does see a big opportunity for the CN5000 products with servers that use Nvidia’s HGX baseboard, which connects eight GPUs via NVLink.
“If you look at their HGX offering, which has a lot more flexibility on what additional kind of components go in there, we offer a great alternative there, because we give people the option to do something a bit different than what's in the standard DGX and also really drive up the performance,” Spelman said.
While the CN5000 products are aimed at on-premises deployments for enterprise, government and academic customers running AI and HPC workloads, Cornelis sees substantially bigger market opportunities with the next two generations of products.
With next year’s 800-Gbps CN6000 products, for instance, Cornelis plans to enable Ethernet compatibility with RoCE support that will allow Ethernet-based networks to access some of Omni-Path’s features, according to Spelman.
Spelman said this integration, which Cornelis has been developing with a “cloud-definitional customer,” will help the company start to pursue the cloud infrastructure market and stand apart from other Ethernet networking startups.
“It was the innovation around creating the adaptation layer that we've built in that allows you to get access to those [Omni-Path] features, because otherwise we're just another standard Ethernet vendor banging our head against a big wall,” she said.
Then in 2027, the company plans to release its 1.6-Tbps CN7000 products, which will integrate standards from the widely supported Ultra Ethernet Consortium into Omni-Path and add support for 2 million nodes as well as in-network computing.
This will establish a “new performance frontier for the most demanding AI and HPC environments,” according to Cornelis.
Spelman said one reason she’s convinced Cornelis can become a formidable player is that the company has the “most advanced” features and capabilities in the industry, some of which are now being implemented by other companies for their own products.
She also said standards being developed by the Ultra Ethernet Consortium—which counts Cornelis, Intel, Broadcom, AMD, Nvidia and others as members—show that the “ideal scale-out network looks a lot like the Omni-Path architecture.”
“So from the AI perspective, I think we're absolutely on the right track with the features, capability and performance, because we're now seeing some of the larger AI players literally emulating the base architecture we have here,” Spelman said.
Given Cornelis’ roots of selling high-speed interconnects for HPC workloads before AI became a worldwide phenomenon, “channel partners have always been super critical” for the company’s go-to-market efforts, Spelman said.
As such, the company runs a global partner program with a growing roster of partners that includes Ace Computers, Advanced Clustering Technologies, Colfax International, Microway and Penguin Solutions. Cornelis also works closely with OEMs like Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.
Spelman said Cornelis last fall hired Intel channel veteran Mike Lafferty, who previously ran the semiconductor giant’s HPC board of advisors, to grow the partner network. And last month the company hired former Intel sales executive Brad Haczynski, who was most recently at Samsung, as chief commercial officer.
“The fact that we're a startup, I think we're doing a good job of being focused in this space, because we recognize how important it is,” Spelman said.
Mako Furukawa, a senior sales engineer at Wheat Ridge, Colo.-based Cornelis partner Aspen Systems, said the company is strengthening its partnership with Cornelis to implement Omni-Path solutions “across our education, government and commercial customers.”
“The new CN5000 line represents a significant advancement in the Omni-Path architecture that our customers have been eagerly anticipating,” said Furukawa in a statement.