Cisco and NVidia to offer AI infrastructure for datacenters
March 18, 2024
Cisco and NVidia will jointly offer Artificial Intelligence (AI) infrastructure solutions for datacenters, promising that they will be easy to deploy and manage. The partnership pairs a company with a long history of success in networking and a broad ecosystem of partners with the company that reigns in the GPU world and has driven the AI boom. The two now want to deepen their cooperation in the datacenter field with scalable, automated AI cluster management, automatic troubleshooting systems and other features.
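The companies have not published implementation details of these management and troubleshooting features. As a purely hypothetical illustration of the kind of per-node telemetry such tooling typically builds on, the Python sketch below polls basic GPU health with the standard nvidia-smi command-line tool; it is an assumption-laden example, not Cisco or NVidia code.

# Hypothetical sketch of automated GPU health polling, the kind of telemetry
# an AI cluster management/troubleshooting stack might build on. Illustration
# only, not Cisco or NVidia tooling; assumes the standard nvidia-smi CLI is
# available on the node.
import subprocess

QUERY = "index,name,temperature.gpu,utilization.gpu,memory.used,memory.total"

def poll_gpu_health(temp_limit_c: int = 85) -> list[dict]:
    """Return per-GPU stats and flag GPUs running hotter than temp_limit_c."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = []
    for line in out.strip().splitlines():
        idx, name, temp, util, mem_used, mem_total = [f.strip() for f in line.split(",")]
        report.append({
            "gpu": int(idx),
            "name": name,
            "temp_c": int(temp),
            "util_pct": int(util),
            "mem_used_mib": int(mem_used),
            "mem_total_mib": int(mem_total),
            "overheating": int(temp) > temp_limit_c,
        })
    return report

if __name__ == "__main__":
    for gpu in poll_gpu_health():
        status = "ALERT" if gpu["overheating"] else "ok"
        print(f"[{status}] GPU {gpu['gpu']} ({gpu['name']}): {gpu['temp_c']} C, "
              f"{gpu['util_pct']}% util, {gpu['mem_used_mib']}/{gpu['mem_total_mib']} MiB")

In a real deployment, per-node data of this kind would feed a fleet-wide monitoring and remediation pipeline rather than being printed locally.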
“Artificial Intelligence is fundamentally changing the way we work and live, and history shows that changes of this magnitude require rethinking and re-architecting infrastructures. By strengthening our partnership with NVidia, we will be able to arm companies with the technologies and expertise needed to build, deploy, manage and secure AI solutions at scale,” said Chuck Robbins, president and CEO of Cisco.
“Companies around the world are rushing to transform their businesses using Generative AI. By working closely with Cisco, we are making it much easier for them to obtain the infrastructure they need to benefit from AI, the most powerful technological force of our time,” said Jensen Huang, founder and CEO of NVidia.
The companies announced integrated solutions for datacenters with immediate availability.
“Companies are looking to transform their businesses by exploiting AI, but they must understand the unique demands that AI workloads place on datacenter infrastructures. The partnership between Cisco and NVidia brings together two trusted brands with complementary technologies that will give customers the means to harness the full potential of AI with a wide range of performance-optimised Ethernet infrastructures,” says Vijay Bhagavath, IDC Research vice president for cloud and datacenter networking.
A senior executive at Corning Optical Communications explained in an article for the DCD website that an AI network within a datacenter is essentially a network within a network. And within the AI network, GPUs and CPUs function like the two halves of the human brain. Large “server farms” with this configuration can act like a supercomputer, speeding up the training time of AI models.
The executive illustrates this with the example of someone asking a digital assistant a question: the networked AI analyses enormous volumes of data and possible answers in real time. As these answers become faster, more precise and more ‘human’, the assistants will become more useful and more integrated into everyday life.
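To make the training speedup described above concrete, the sketch below shows a minimal data-parallel training loop in which each GPU processes its own shard of data and gradients are averaged over the network at every step, which is exactly the traffic pattern that stresses the datacenter fabric. It is a generic PyTorch illustration launched with torchrun, written under assumptions of our own, not code from Cisco, NVidia or Corning.

# Minimal sketch of data-parallel training across networked GPUs.
# Assumes PyTorch is installed and the script is launched with torchrun, e.g.:
#   torchrun --nproc_per_node=4 train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each worker process.
    use_cuda = torch.cuda.is_available()
    dist.init_process_group(backend="nccl" if use_cuda else "gloo")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")

    # A toy model standing in for a large neural network.
    model = torch.nn.Linear(1024, 1024).to(device)
    model = DDP(model, device_ids=[local_rank] if use_cuda else None)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        # Each worker trains on its own shard of (here random) data.
        x = torch.randn(32, 1024, device=device)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # triggers an all-reduce of gradients over the network
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Because every training step ends with that gradient exchange, adding GPUs only shortens training time if the network keeps the synchronisation overhead small, which is what the bandwidth forecasts below are about.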
This push for more natural AI, woven more deeply into the daily lives of people and companies, is expected to drive demand for AI networks and create “bandwidth opportunities” within datacenters worth up to $6.2 billion in sales by 2027, according to financial analysts at Raymond James.
Adding to the chorus, a recent report by the 650 Group indicates that the networking market for AI datacenters is expected to reach more than $15 billion by 2024, led by Ethernet transceivers, InfiniBand and 400/800G optical modules. The study also revealed that Arista and NVidia are the two main suppliers in this segment.
For AI specifically, one in five Ethernet switch ports sold to datacenters is expected to be related to AI, machine learning (ML) and accelerated computing by 2028. In particular, AI/ML will drive record levels of 800 Gbps optical ports during the first 18 months of adoption. The 650 Group also emphasises that customers continue to evaluate different network topologies to meet AI/ML network demands.
“The year 2023 saw significant growth in vendor revenue related to networks for AI/ML. And 2024 should see even more growth, with many customers’ proofs of concept moving to production networks,” says Alan Weckel, technology analyst at 650 Group. “With the growth of bandwidth for AI, the part of Ethernet switching linked to AI/ML and accelerated computing will go from a niche today to a significant part of the market by 2028. We’re about to see record sales of 800 Gbps switches and optical modules once products reach production scale to serve the AI/ML segment.”