Intel turns to chiplets: ‘Client 2.0’ for 7 nm

One of the more esoteric elements of Intel Architecture Day 2020 came very near the end, when Intel spent a few minutes discussing what it believes is the future of some of its products. Brijesh Tripathi, Vice President and Chief Technical Officer of Intel’s Client Computing Group, presented a vision for the future of its client products out to 2024 and beyond. Focused on Intel’s 7 nm manufacturing process, the goal is to enable “Client 2.0” – a new way to deliver and enable immersive experiences through a more optimized silicon development strategy.

Chiplets aren’t new, especially given recent launches from Intel’s competitors, and as process nodes become more complex to develop, chiplets allow for faster time to market as well as better binning and yields for a given product. The key is how these chiplets fit together, and at what points it makes sense to mix and match the relevant elements. Intel has already spoken about this in a more general context at its Technology and Manufacturing Day 2017, as shown in the carousel image at the top.

The goal here is to mix and match the process nodes that work best for the different parts of a chip. Intel seems poised to realize this vision starting with its 7 nm platform. At Architecture Day 2020, Brijesh Tripathi showed this slide:

On the left is a typical chip design – monolithic, with everything it needs on a single die. For Intel’s leading products these take 3-4 years to develop, and bugs are found in silicon first by Intel and then by Intel’s partners, who can scale up silicon bring-up by a couple of orders of magnitude.

In the middle is a basic chiplet layout, similar to the 2017 slide, where different functions of the die are split into their own modules. Assuming a consistent interconnect, this allows some reuse of silicon, such as AMD using the same base compute dies in both client and server products. For some semiconductor companies (though not Intel), this is where we are today.

On the right is where Intel sees its future. Instead of having a single-digit number of chiplets in a product, Intel envisions a world where each IP can be split into multiple chiplets, allowing products to be built with whatever configuration works for that market. In this scheme, a chiplet might be a PCIe 4.0 x16 link – if the product needs more, it simply adds more of those chiplets. The same goes for memory channels, cores, media accelerators, AI accelerators, ray tracing engines, crypto accelerators, graphics, or even SRAM and cache blocks. The idea is that each IP can be split up and then scaled, as in the sketch below. It also means the chiplets are tiny, can be built relatively quickly, and bugs can be ironed out very quickly.
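
To make the “split and scale each IP” idea concrete, here is a minimal hypothetical sketch in Python: a product spec is met simply by counting out more copies of the relevant chiplet type. The chiplet names and per-die capacities are illustrative assumptions, not actual Intel parts.

```python
import math

# Illustrative per-chiplet capacities (assumed values, not real Intel parts).
CHIPLET_CAPACITY = {
    "pcie_4_x16":  {"pcie_lanes": 16},
    "ddr_channel": {"memory_channels": 1},
    "cpu_tile":    {"cores": 4},
    "gpu_tile":    {"gpu_slices": 2},
}

def compose(target: dict) -> dict:
    """Return how many of each chiplet type are needed to meet the target spec.

    Scaling an IP simply means adding more copies of its chiplet, which is
    the 'split and scale' idea described above.
    """
    counts = {}
    for chiplet, caps in CHIPLET_CAPACITY.items():
        for resource, per_die in caps.items():
            if resource in target:
                needed = math.ceil(target[resource] / per_die)
                counts[chiplet] = max(counts.get(chiplet, 0), needed)
    return counts

# Example: a desktop part wanting 32 PCIe lanes, 2 memory channels, 16 cores.
print(compose({"pcie_lanes": 32, "memory_channels": 2, "cores": 16}))
# -> {'pcie_4_x16': 2, 'ddr_channel': 2, 'cpu_tile': 4}
```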

In this diagram we are treated to Intel’s long-term vision for client – a base interposer with memory built into the package (something like an L3 or L4) that can act as the primary SRAM cache for the whole chip, and on top of that sit 24 different chiplets. The chiplets can be graphics, cores, AI, media, I/O, or whatever else, and they can be mixed and matched depending on what is needed. A content creator may want a balance of good graphics acceleration and compute, while a gamer may want to focus mostly on graphics. An enterprise or workstation customer may need less graphics and more compute and AI, while a mobile version of the chip might invest heavily in I/O.
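
As a rough illustration of that picture, assume a base interposer with a fixed number of chiplet sites; different market segments then fill those sites with different mixes. The 24-site budget comes from the slide, but the specific segment mixes below are purely hypothetical.

```python
# Hypothetical mixes for a 24-site interposer (counts per segment are assumptions).
SITES = 24

SEGMENT_MIX = {
    "gamer":       {"gpu_tile": 16, "cpu_tile": 6, "io_tile": 2},
    "creator":     {"gpu_tile": 10, "cpu_tile": 10, "media_tile": 2, "io_tile": 2},
    "workstation": {"cpu_tile": 14, "ai_tile": 6, "gpu_tile": 2, "io_tile": 2},
    "mobile":      {"cpu_tile": 8, "gpu_tile": 6, "io_tile": 8, "media_tile": 2},
}

for segment, mix in SEGMENT_MIX.items():
    used = sum(mix.values())
    assert used <= SITES, f"{segment} overflows the interposer"
    print(f"{segment:12s} uses {used}/{SITES} sites: {mix}")
```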

As always, there is a trade-off between the size of the chiplets and the complexity of putting them together in a multi-die arrangement. Any communication between chiplets costs more power than in a monolithic implementation, and generally comes with higher latency; a rough sense of the power cost is sketched below. Thermals also have to be managed, so these designs can sometimes be limited by the thermal headroom available. Multi-die arrangements also cause headaches for mobile devices, where z-height is critical. However, the benefits of using the right process at the right time for the right product are significant, as it helps deliver both performance and power at the lowest possible cost. It also opens up the possibility of bringing in third-party IP quickly if something extraordinary hits the scene.
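
The power penalty of die-to-die traffic can be estimated with a back-of-the-envelope energy-per-bit calculation. The pJ/bit figures below are generic illustrative values for on-die wires versus packaged die-to-die links, not Intel numbers.

```python
# Back-of-the-envelope: extra power from moving traffic off-die.
# Energy-per-bit values are illustrative assumptions, not Intel figures.
ON_DIE_PJ_PER_BIT = 0.1       # roughly what an on-die wire might cost
DIE_TO_DIE_PJ_PER_BIT = 0.5   # a packaged die-to-die link typically costs more

def link_power_watts(bandwidth_gbps: float, pj_per_bit: float) -> float:
    """Power for a link moving bandwidth_gbps gigabits per second."""
    return bandwidth_gbps * 1e9 * pj_per_bit * 1e-12

bw = 512  # Gb/s of cross-chiplet traffic, an assumed workload
print(f"on-die:     {link_power_watts(bw, ON_DIE_PJ_PER_BIT):.2f} W")
print(f"die-to-die: {link_power_watts(bw, DIE_TO_DIE_PJ_PER_BIT):.2f} W")
```

Even at these optimistic assumed rates, the same traffic costs several times more energy once it leaves the die, which is why chiplet partitioning has to be chosen carefully.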

The only downside here is that Intel hasn’t said much about the glue that binds all of this together. Chiplet strategies rely on complex high-speed interconnect protocols, proprietary or otherwise. Intel’s current uses of die-to-die connectivity are either straightforward memory protocols or extensions of its FPGA fabric – the main one for server processors, UPI, is not necessarily up to the task. CXL could be the future here, but current CXL is built on PCIe, which implies a complex CXL/PCIe controller on every chiplet – something likely to become power hungry very quickly.

Intel has said it is inventing new packaging technology and new levels of connectivity to sit between the silicon – there is no disclosure on the protocols yet, but Intel acknowledges that to achieve this level of scale it will have to go beyond what the company has today, and that will require creating standards and innovating in this area. The goal is to create and support standards, and the first incarnation will have some built-in standardization. Intel describes this as an extreme disaggregation methodology, and notes that not every connection has to be high bandwidth or a coherent interconnect (USB, for example) – Intel sees the end goal involving a handful of protocols across the spectrum.

There is also the developer side to consider, since software often assumes a fairly consistent provisioning of resources in a given product. Without careful planning and proper coding, some chiplet configurations may fall over if a developer expected a certain ratio of compute to graphics, for example. It’s not something that oneAPI could easily fix.
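
One way developers might defend against wildly different compute-to-graphics ratios is to probe the configuration at startup and adjust the work split accordingly, as in this hypothetical sketch. The query functions here are stand-ins for illustration, not a real platform API.

```python
def query_compute_tiles() -> int:
    """Stand-in for a platform query; returns the number of compute chiplets."""
    return 8   # assumed value for illustration

def query_graphics_tiles() -> int:
    """Stand-in for a platform query; returns the number of graphics chiplets."""
    return 2   # assumed value for illustration

def plan_work_split(expected_ratio: float = 1.0) -> float:
    """Scale the GPU share of the workload by how the actual compute:graphics
    ratio compares with the ratio the code was originally tuned for."""
    actual_ratio = query_compute_tiles() / max(query_graphics_tiles(), 1)
    gpu_share = min(1.0, expected_ratio / actual_ratio)
    return gpu_share

print(f"GPU share of workload: {plan_work_split(expected_ratio=2.0):.2f}")
```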

These are all issues that Intel will need to resolve, although it has a few years before this materializes. We’ve been told the internal name is Client 2.0, though more marketing dressing will likely be added as Intel starts talking about it in more detail.
