Summary by Bloomberg AI
- Cisco Systems Inc. is releasing a new chip and networking system to connect AI data centers across hundreds of miles.
- The Silicon One P200 chip and 8223 routing systems allow for faster transfers of data across long-haul optic cables and are smaller than the previous version.
- The new technology is meant to link up far-flung data centers to help them work together to develop artificial intelligence models.
By Dina Bass | 10/08/2025 06:00:07 [BN]
(Bloomberg) — Cisco Systems Inc. is releasing a new chip and networking system meant to connect AI data centers across hundreds of miles, a move that escalates competition with Broadcom Inc.
The Silicon One P200 chip and 8223 routing systems allow for faster transfers of data across long-haul optic cables, the company announced on Wednesday. The components are also much smaller than their predecessors, Martin Lund, executive vice president of the company’s Common Hardware Group, said in an interview.
The new technology is meant to link up far-flung data centers and help them work together to develop artificial intelligence models. Previous versions of the Cisco product could work over similar distances but didn’t transfer enough data to be useful for things like AI training — a process that involves bombarding models with massive amounts of information.
Broadcom has taken a similar approach with its products. That company unveiled its latest Jericho networking chip in August, saying it would move larger volumes of data and be ideal for handling AI work across multiple locations.
Though Cisco is a less-recognized player in this market, it’s trying to get the latest equipment to customers faster than Broadcom. Microsoft Corp. and Alibaba Group Holding Ltd., users of the current Silicon One, are evaluating the P200 for adoption, the San Jose, California-based company said.
“It’s a little-known fact that Cisco has a complete portfolio that matches Broadcom,” Lund said. “Broadcom is obviously recognized as being a leader, but they are not alone.”
The Cisco chips are reprogrammable, so they can be updated without having to replace them. They also have significant capacity for buffering, or storing incoming data when there’s a burst of activity. That helps prevent the information from getting lost if the destination is too busy — something especially critical given the cost of running graphics processing units, or GPUs, the chips used to train AI.
“The reality is this: Every packet that doesn’t get to the GPU is just like lighting money on fire,” Cisco President Jeetu Patel said in an interview.
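As a rough illustration of why deep buffering matters, the sketch below models a router queue absorbing a traffic burst while a busy downstream destination drains it more slowly. The buffer size, drain rate, and function names are invented for this example and are not a description of the P200's actual design.

    from collections import deque

    # Hypothetical sketch: a deep buffer absorbs a burst of packets
    # while the downstream link (toward busy GPUs) drains more slowly.
    # All sizes and rates here are invented for illustration.

    BUFFER_CAPACITY = 1000   # packets the device can hold
    DRAIN_PER_TICK = 50      # packets forwarded per time step

    buffer = deque()
    dropped = 0

    def receive_burst(packets):
        """Queue incoming packets, dropping only when the buffer is full."""
        global dropped
        for p in packets:
            if len(buffer) < BUFFER_CAPACITY:
                buffer.append(p)
            else:
                dropped += 1  # lost work the GPUs will never see

    def drain():
        """Forward up to DRAIN_PER_TICK packets toward the destination."""
        for _ in range(min(DRAIN_PER_TICK, len(buffer))):
            buffer.popleft()

    # A burst of 800 packets arrives while the destination is busy,
    # then the destination catches up over later time steps.
    receive_burst(range(800))
    for _ in range(16):
        drain()

    print(f"dropped={dropped}, still buffered={len(buffer)}")
    # With a deep enough buffer nothing is dropped; with a shallow one,
    # packets from the burst would be lost.

In this toy model, shrinking BUFFER_CAPACITY below the burst size is what forces drops, which is the scenario the buffering capacity is meant to prevent.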