AMD EPYC Rome 7nm Zen 2 Processors With Up To 162 PCIe Gen 4 Lanes




AMD's EPYC Rome processors, based on the 7nm Zen 2 architecture, are not far from launch, with the company expected to unveil them at Computex 2019 next month. While we've already looked at the underlying architecture of the Rome processors and their innovative chiplet (Zen) design, AMD has yet to reveal several key specifications. Thanks to STH, however, we now know about one key feature of the upcoming processors that, if true, would absolutely crush the competition.

AMD EPYC Rome Processors To Offer Up To 162 PCIe Gen 4 Lanes, Or More – Twice As Many As Intel's Flagship Xeon Platinum 9200

After some digging, ServeTheHome concluded that AMD's EPYC Rome processors would have a higher PCI Express lane count than expected. We already know that a single EPYC Rome processor offers 128 PCIe Gen 4 lanes, but here it's mostly dual-socket servers that we're talking about.


Dual-socket configurations would go up directly against the Intel Xeon Platinum 9200 processor family, which also ships as a dual-socket solution. Each Xeon Platinum 9200 processor has 40 PCIe Gen 3 lanes, and since a 2S solution carries two chips, that works out to 80 PCIe Gen 3 lanes in total. Compare that with a single EPYC Rome processor, which already offers more PCIe lanes than Intel's entire dual-socket solution. Only Intel's 4P and 8P solutions can offer more lanes, with the lane counts for each server configuration listed below (via STH), followed by a quick tally in code after the list:

  • Xeon Platinum 9200: 2 CPUs with 40 PCIe Gen 3 lanes each for 80 lanes in total
  • Xeon Scalable mainstream: 2 CPUs with 48 PCIe Gen 3 lanes each for 96 lanes in total
  • Xeon Scalable 4P: 4 CPUs with 48 PCIe Gen 3 lanes each for 192 lanes in total
  • Xeon Scalable 8P: 8 CPUs with 48 PCIe Gen 3 lanes each for 384 lanes in total
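As a quick sanity check, here is a minimal Python tally of the lane counts quoted above (the platform labels are just names used for this sketch):

```python
# Tally of the PCIe lane counts quoted above: Gen 3 totals for the Intel platforms
# versus a single EPYC Rome socket at Gen 4.
intel_platforms = {
    "Xeon Platinum 9200 (2S)": 2 * 40,
    "Xeon Scalable mainstream (2S)": 2 * 48,
    "Xeon Scalable (4P)": 4 * 48,
    "Xeon Scalable (8P)": 8 * 48,
}

for name, lanes in intel_platforms.items():
    print(f"{name}: {lanes} PCIe Gen 3 lanes")
print("EPYC Rome (1S): 128 PCIe Gen 4 lanes")
```

Even a single Rome socket outpaces both of Intel's dual-socket totals, which is the comparison STH is drawing.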

Now, AMD's main advantage over Intel here is that PCIe Gen 4 offers twice the bandwidth of PCIe Gen 3. That is crucial, as is the updated Infinity Fabric AMD uses on its server processors. Whereas the Infinity Fabric previously relied on PCIe Gen 3 speeds for chip-to-chip communication, moving to PCIe Gen 4 means the Infinity Fabric eats into less of the PCIe capacity, directly improving chip-to-chip interconnect speeds and I/O bandwidth.
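To put rough numbers on that doubling, here is a back-of-the-envelope sketch using the standard PCIe per-lane signalling rates and 128b/130b encoding; these are generic PCIe figures, not values from the STH report:

```python
# Approximate usable bandwidth of an x16 link per direction, using the standard
# PCIe signalling rates (8 GT/s for Gen 3, 16 GT/s for Gen 4) and 128b/130b encoding.
def x16_bandwidth_gbytes(gt_per_s: float) -> float:
    per_lane = gt_per_s * (128 / 130) / 8  # GB/s per lane, per direction
    return per_lane * 16

print(f"PCIe Gen 3 x16: ~{x16_bandwidth_gbytes(8.0):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe Gen 4 x16: ~{x16_bandwidth_gbytes(16.0):.1f} GB/s")  # ~31.5 GB/s
```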

Thanks to the extra bandwidth available, there would be less reliance on x16 links between the two chips, and that offers some flexibility: partners who don't need the excess inter-socket bandwidth can repurpose those links as general PCIe connectivity rather than as a high-speed interconnect. Running only three x16 links instead of four would free up additional PCIe lanes to serve outside the IF communication path.

This would enable additional PCIe Gen 4 connectivity, offering users up to 162 PCIe Gen 4 lanes. It's reasonable to assume that most people won't go this route, since lower chip-to-chip bandwidth isn't the ideal approach, but AMD is at least giving them the choice. It's also possible that some customers could access 192 PCIe Gen 4 lanes by disabling two of the x16 links, and STH indicates that OEMs are currently supporting configurations with two x16 inter-socket links (192 PCIe Gen 4 lanes), although that delivers roughly the same interconnect bandwidth as the first-generation "Naples" EPYC processors.
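Here is a hedged sketch of the lane accounting behind the 128 / 162 / 192 figures, assuming 128 Gen 4 lanes per socket and x16 Infinity Fabric links that each occupy 16 lanes on both sockets. The report doesn't spell out where the two extra lanes in the 162 figure come from; this sketch assumes one additional per-socket lane outside the x16 groups, in the spirit of the independent I/O link described further down.

```python
# Free PCIe Gen 4 lanes in a two-socket EPYC Rome system, as a function of how
# many x16 links are reserved for the socket-to-socket Infinity Fabric (IF).
LANES_PER_SOCKET = 128   # Gen 4 lanes exposed by a single Rome package
EXTRA_PER_SOCKET = 1     # assumption: one extra lane per socket outside the x16 groups

def free_lanes(if_x16_links: int, extra_per_socket: int = 0) -> int:
    consumed = if_x16_links * 16 * 2  # each IF link occupies 16 lanes on both sockets
    return 2 * LANES_PER_SOCKET - consumed + 2 * extra_per_socket

print(free_lanes(4))                    # 128 lanes -- Naples-style, four IF links
print(free_lanes(3, EXTRA_PER_SOCKET))  # 162 lanes -- three IF links plus the assumed extras
print(free_lanes(2))                    # 192 lanes -- two IF links
```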


AMD processor roadmap (2018-2020)

| Ryzen Family | Ryzen 1000 Series | Ryzen 2000 Series | Ryzen 3000 Series | Ryzen 4000 Series |
|---|---|---|---|---|
| Architecture | Zen (1) | Zen (1) / Zen+ | Zen (2) | Zen (3) |
| Process Node | 14nm | 14nm / 12nm | 7nm | 7nm+ |
| Premium Server (SP3) | EPYC 'Naples' | EPYC 'Naples' | EPYC 'Rome' | EPYC 'Milan' |
| Max Server Cores / Threads | 32/64 | 32/64 | 64/128 | TBD |
| High-End Desktop (TR4) | Ryzen Threadripper 1000 Series | Ryzen Threadripper 2000 Series | Ryzen Threadripper 3000 Series (Castle Peak) | Ryzen Threadripper 4000 Series |
| Max HEDT Cores / Threads | 16/32 | 32/64 | 64/128? | TBD |
| Mainstream Desktop (AM4) | Ryzen 1000 Series (Summit Ridge) | Ryzen 2000 Series (Pinnacle Ridge) | Ryzen 3000 Series (Matisse) | Ryzen 4000 Series (Vermeer) |
| Max Mainstream Cores / Threads | 8/16 | 8/16 | 16/32 | TBD |
| Budget APU (AM4) | N/A | Ryzen 2000 Series (Raven Ridge) | Ryzen 3000 Series (Picasso, Zen+?) | Ryzen 4000 Series (Renoir) |
| Year | 2017 | 2018 | 2019 | 2020 |

By comparison, the Infinity Fabric linking the first-generation EPYC "Naples" dies operated at 10.7 GT/s and required four x16 IF links to meet the bandwidth demand. On EPYC Rome, the Infinity Fabric runs at 25.6 GT/s, more than twice the rate of the first-generation EPYC processors. That means only two x16 IF links are needed for chip-to-chip communication, and using more IF links only improves latency and bandwidth further. One thing to consider, though, is that PCIe Gen 4 on the EPYC processors would require a somewhat new platform with an updated PCB design.
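As a crude comparison of raw transfer rates (ignoring encoding and protocol overhead), assuming 16 lanes per IF link at the per-lane rates quoted above:

```python
# Rough aggregate raw transfer rate of the socket-to-socket Infinity Fabric:
# number of x16 links * 16 lanes * per-lane rate in GT/s (encoding/overhead ignored).
def aggregate_rate(links: int, gt_per_s_per_lane: float) -> float:
    return links * 16 * gt_per_s_per_lane

naples = aggregate_rate(4, 10.7)  # four x16 links at 10.7 GT/s -> 684.8
rome = aggregate_rate(2, 25.6)    # two x16 links at 25.6 GT/s  -> 819.2
print(f"Naples (4 links): {naples} GT/s aggregate")
print(f"Rome   (2 links): {rome} GT/s aggregate")
```

By this rough measure, two Rome links already carry more raw traffic than Naples' four, which is why freeing one or two of them for regular PCIe connectivity is viable.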

The other main feature of the new EPYC Rome processors would be the integrated SCH (server controller hub), also referred to as the standalone 14nm I/O die. On previous processors, AMD had to share many resources, including PCIe lanes, and rely on slower third-party controllers.

AMD plans to route additional lanes from the processor to the SCH for NVMe and other critical I/O, though not necessarily for the high-speed connectivity devices that would be fed by the main x16 links. This additional connectivity would sit outside the x16 backbone as an independent link provided by EPYC Rome's I/O die.

If this research pans out, AMD looks set to take the lead. The company is already reported to be on track for double-digit server market share by 2020, and unless Intel makes radical changes with its 10nm Xeon family, Ice Lake-SP, its server efforts are not looking all that convincing.
