AMD may have revealed its upcoming 7nm Epyc processors at its Next Horizon event, but it only touched on many of the core's architectural improvements. We know the chip combines a group of 7nm chiplets (each containing eight CPU cores), but specific details about the cache organization or CCX design haven't been disclosed yet. A new data point from the SiSoft Sandra database suggests AMD has doubled the amount of L3 cache per CPU core, at least on Epyc.
The original entry in the SiSoft Sandra database has been removed, but not before Overclock3D.net captured it. The screenshot shows a 1.4GHz engineering sample with a 2GHz boost clock and 128 threads, implying SMT isn't enabled at this point. Neither the low clock nor the lack of SMT support is worrying; engineering samples often have features disabled, and Epyc isn't expected to launch until 2019.
Doubling the total amount of L3 cache per core is an expected move from AMD and should improve Epyc's performance overall. AMD's existing CCX implementation allocates 8MB of L3 per CCX, with two CCXes per die. Ping times between logical cores are roughly 26ns within the same CPU core, 42ns within the same CCX, and 142ns between different CCXes on the same physical die. That last figure isn't much better than the memory latency you'd incur by going all the way out to main memory to retrieve the data.
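Figures like these typically come from a ping-pong microbenchmark that bounces a shared cache line between two threads pinned to specific cores. The sketch below shows the general idea, assuming Linux, GCC, and pthreads; the core IDs and iteration count are illustrative assumptions, not values taken from the SiSoft entry.

```c
/* Minimal core-to-core ping latency sketch (build: gcc -O2 -pthread ping.c -o ping).
 * Core IDs below are assumptions; pick pairs on the same CCX, same die, etc. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

static atomic_int flag = 0;   /* shared cache line bounced between the two cores */

static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *responder(void *arg)
{
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                                                     /* wait for ping */
        atomic_store_explicit(&flag, 2, memory_order_release);    /* send pong */
    }
    return NULL;
}

int main(void)
{
    int core_a = 0, core_b = 1;   /* assumed core IDs */
    pthread_t t;
    pthread_create(&t, NULL, responder, &core_b);
    pin_to_core(core_a);

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);    /* send ping */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 2)
            ;                                                     /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    pthread_join(t, NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    /* Round-trip time divided by two approximates one-way core-to-core latency. */
    printf("core %d <-> core %d: ~%.1f ns one-way\n", core_a, core_b, ns / ITERS / 2.0);
    return 0;
}
```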
Basically, that means Epyc doesn't have a 64MB L3 cache, in a sense; it has eight 8MB L3 caches. That works very well for applications that fit within an 8MB cache slice, but it penalizes Epyc in any application that doesn't match this access pattern. As this AnandTech memory latency comparison shows, Epyc's memory latency in random dual reads is quite competitive below 8MB, and significantly worse than Intel's above that point.
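The working-set effect is usually measured with a pointer-chasing test: serially dependent loads over a randomly shuffled buffer whose size straddles the 8MB slice. The sketch below is a minimal illustration of that technique; the buffer sizes and access count are assumptions chosen for this example, not AnandTech's actual methodology.

```c
/* Pointer-chasing latency sketch: per-load latency jumps once the working set
 * no longer fits in a single L3 slice. Build: gcc -O2 chase.c -o chase */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Build a random cyclic permutation so each load depends on the previous one,
 * defeating hardware prefetchers and exposing raw load-to-use latency. */
static size_t *build_chain(size_t n)
{
    size_t *chain = malloc(n * sizeof(size_t));
    size_t *perm  = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) perm[i] = i;
    for (size_t i = n - 1; i > 0; i--) {              /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }
    for (size_t i = 0; i < n; i++)
        chain[perm[i]] = perm[(i + 1) % n];           /* one cycle through all slots */
    free(perm);
    return chain;
}

int main(void)
{
    /* Working-set sizes straddling an 8MB L3 slice (assumed for illustration). */
    size_t sizes_mb[] = { 4, 8, 16, 32 };
    const size_t accesses = 20 * 1000 * 1000;

    for (int s = 0; s < 4; s++) {
        size_t n = sizes_mb[s] * 1024 * 1024 / sizeof(size_t);
        size_t *chain = build_chain(n);

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        size_t idx = 0;
        for (size_t i = 0; i < accesses; i++)
            idx = chain[idx];                         /* serially dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        printf("%2zu MB working set: %.1f ns per load (idx=%zu)\n",
               sizes_mb[s], ns / accesses, idx);      /* print idx so the loop isn't optimized away */
        free(chain);
    }
    return 0;
}
```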
Doubling the amount of L3 cache per die will obviously improve performance in applications that fit within a 16MB cache slice but not an 8MB one. I would caution, however, against concluding that this is the only change AMD has made to Epyc's overall organization. The decision to build Epyc as a set of 7nm chiplets connected to a common I/O die will have an impact on core-to-core communication. How things will shake out with AMD's Rome silicon is unclear, since the company hasn't released that information yet, but there are many knobs and dials AMD could have adjusted. Beyond the physical changes we already know about with 7nm, Epyc could incorporate caching policy changes, Infinity Fabric improvements, CCX design changes, and even changes to how AMD manages power consumption in its caches, any of which could affect memory latency. Knowing that the company has probably doubled the L3 cache tells us something about Rome, but it doesn't tell us everything.
How this change will play out for desktop Ryzen isn't clear. AMD may choose to keep the same L3 cache size per die, or it could fuse off some of the L3 to recover faulty dies or to differentiate Epyc and Ryzen parts. The company's first-generation Ryzen used the same silicon across all product families as much as possible, but some second-generation Ryzen 5 processors have smaller L3 caches (8MB on the Ryzen 5 2500X, versus 16MB on the Ryzen 5 1500X).
Now read: AMD 7nm Epyc Processor Offers Core Enhancements, Huge Performance Gains; Nvidia Tesla and AMD Epyc to Power New Berkeley Supercomputer; AMD Announces Strong Results, Significantly Improves Gross Margin