Apple Announces the Apple Silicon M1: Ditching x86




Today, Apple unveiled its brand new line of MacBooks. This isn’t an ordinary refresh – the move Apple is making today is something that hasn’t happened in 15 years: the start of a processor architecture transition across its entire range of consumer Macs.

Thanks to the company’s vertical integration across hardware and software, this is a monumental change that nobody other than Apple could pull off so quickly. For the past 15 years, Macs have been built on Intel x86 designs; today, those are being phased out in favor of the company’s own in-house CPUs and microarchitectures, based on the Arm ISA.

The new processor is called the Apple M1, the company’s first SoC designed for Macs. With four large high-performance cores, four efficiency cores, and an 8-core GPU, it features 16 billion transistors on a 5nm process node. Apple is rolling out a new SoC naming scheme for this new processor family, but at least on paper it looks a lot like an A14X.

Today’s event contained a ton of new official announcements, but was also lacking in detail (in typical Apple fashion). In this article we’re going to dissect the new Apple M1 news, as well as take a microarchitectural deep dive based on the already-released Apple A14 SoC.

The Apple M1 SoC: an A14X for Mac

The new Apple M1 is truly the start of a major new journey for Apple. During Apple’s presentation, the company didn’t really divulge many details about the design, but there was a slide that told us a lot about the packaging and architecture of the chip:

This style of packaging, with DRAM integrated into an organic package, is not new for Apple; they have used it since the A12. However, it is something that is only used sparingly. For its higher-end chips, Apple prefers this type of packaging over the usual smartphone PoP (package-on-package), as these chips are designed with higher TDPs in mind. Keeping the DRAM to the side of the compute die, rather than on top of it, helps ensure that these chips can still be cooled efficiently.

This also means that we are almost certainly looking at a 128-bit DRAM bus on the new chip, much like that of the previous-generation A-X chips.
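As a quick sanity check, a 128-bit bus implies a peak bandwidth we can compute in a few lines. This is a hedged sketch: the LPDDR4X-4266 transfer rate is an assumption based on the memory used by recent Apple SoCs, as the M1’s exact memory speed was not confirmed at the announcement.

```python
# Back-of-the-envelope peak bandwidth for a 128-bit DRAM bus.
# Assumption: LPDDR4X-4266 (4266 mega-transfers/second), as in
# recent Apple SoCs; the M1's memory speed was not yet confirmed.
bus_width_bits = 128
transfer_rate_mts = 4266  # mega-transfers per second

bytes_per_transfer = bus_width_bits / 8               # 16 bytes per transfer
bandwidth_gbs = bytes_per_transfer * transfer_rate_mts / 1000

print(f"Peak bandwidth: {bandwidth_gbs:.2f} GB/s")    # ~68.26 GB/s
```

Under those assumptions, the chip would have roughly the same theoretical memory bandwidth as the A12X/A12Z-class tablet chips.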

On the same slide, Apple also appears to have shown an actual die shot of the new M1 chip. It perfectly matches the specifications Apple describes for the chip and looks like a real photograph of the silicon. Cue what is probably the fastest die annotation I have ever done:

We can see the M1’s four high-performance Firestorm CPU cores on the left side. Notice the large amount of cache – the 12MB figure was one of the surprise revelations of the event, as the A14 still had only 8MB of L2 cache. The new cache here appears to be split into 3 larger blocks, which makes sense given Apple’s move from 8MB to 12MB for this new configuration; it is now shared by 4 cores instead of 2, after all.

Meanwhile, the 4 Icestorm efficiency cores are near the center of the SoC, above which is the SoC system level cache, which is shared among all IP blocks.

Finally, the 8-core GPU takes up a significant portion of the die and sits in its upper part.

What’s most interesting about the M1 is how it stacks up against other CPU designs from Intel and AMD. All of the aforementioned blocks still cover only part of the whole die, leaving a significant amount of auxiliary IP. Apple mentioned that the M1 is a true SoC, including the functionality of what were previously several discrete chips inside Mac laptops, such as I/O controllers, the SSD controller, and Apple’s security controllers.

The new processor core is what Apple claims to be the fastest in the world. This will be a focal point of today’s article as we dive deeper into the microarchitecture of the Firestorm cores, as well as the performance numbers of the very similar Apple A14 SoC.

With its additional cache, we expect the Firestorm cores used in the M1 to be even faster than what we’ll be dissecting today in the A14, so Apple’s claim to have the fastest CPU core in the world seems extremely plausible.

The entire SoC packs 16 billion transistors, which is 35% more than the A14 inside the latest iPhones. If Apple has managed to keep transistor density similar between the two chips, one would expect a die size of around 120mm². That would be considerably smaller than the previous-generation Intel chips inside Apple’s MacBooks.
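That die-size estimate is simple proportional scaling, which can be sketched as below. The A14 figures used here (~11.8 billion transistors on a die of roughly 88mm²) are assumptions drawn from published measurements, and equal transistor density between the two chips is itself an assumption.

```python
# Rough M1 die-size estimate, scaling from the A14.
# Assumptions: A14 has ~11.8B transistors on a ~88 mm^2 die
# (published figures), and the M1 keeps the same density on N5.
a14_transistors_b = 11.8   # billions
a14_die_mm2 = 88.0
m1_transistors_b = 16.0    # 35% more than the A14

density = a14_transistors_b / a14_die_mm2   # billions of transistors per mm^2
m1_die_mm2 = m1_transistors_b / density

print(f"Estimated M1 die size: {m1_die_mm2:.0f} mm^2")  # ~119 mm^2
```

Even allowing for some slack in the assumptions, the estimate lands close to the ~120mm² figure quoted above.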

Road To Arm: Second verse, identical to the first

Section by Ryan Smith

That Apple can even pull off a major architectural transition so seamlessly is a small miracle, and Apple has quite a bit of experience in doing so. After all, this isn’t Apple’s first processor architecture transition for its Mac computers.

A longtime PowerPC stalwart, Apple came to a crossroads in the mid-2000s when the Apple-IBM-Motorola (AIM) alliance, responsible for PowerPC development, found it increasingly difficult to keep advancing the chips. IBM’s PowerPC 970 (G5) delivered respectable desktop performance, but it consumed a lot of power. That left the chip unviable for the growing laptop segment, where Apple was still using Motorola’s PowerPC 7400 (G4) series, which had better power consumption but not the performance needed to compete with what Intel would ultimately achieve with its Core series of processors.

And so, Apple played a card it had kept in reserve: the Marklar project. Taking advantage of the flexibility of Mac OS X and its underlying Darwin kernel, which like other Unixes is designed to be portable, Apple maintained an x86 version of Mac OS X. Although initially widely viewed as an exercise in good coding practice – making sure Apple was writing OS code that wasn’t unnecessarily tied to PowerPC and its big-endian memory model – Marklar became Apple’s exit strategy from a stagnant PowerPC ecosystem. The company would switch to x86 processors – specifically Intel’s – disrupting its software ecosystem, but also opening the door to much better performance and new opportunities for customers.

The switch to x86 was in every way a big win for Apple. Intel’s processors delivered better performance-per-watt than the PowerPC chips Apple left behind, and especially once Intel released its Core 2 (Conroe) series in late 2006, Intel firmly established itself as the dominant force in PC processors. This ultimately set Apple’s trajectory for the next several years, allowing it to become a laptop-focused company with proto-ultrabooks (the MacBook Air) and its incredibly popular MacBook Pros. Likewise, x86 brought Windows compatibility, introducing the ability to boot Windows directly, or to run it in a virtual machine at very little cost.

The cost of this transition, however, came on the software side. Developers had to start using Apple’s newest toolchains to produce universal binaries that could run on both PPC and x86 Macs – and not all of Apple’s previous APIs made the jump to x86. Developers of course made the leap, but it was an unprecedented transition.

Rosetta, Apple’s PowerPC-to-x86 translation layer, filled the gap, at least for a while. Rosetta allowed most Mac OS X PPC applications to run on x86 Macs, and while performance was a bit hit-and-miss (translating PPC on x86 isn’t the easiest thing), the higher performance of Intel’s processors helped carry things through for most non-intensive applications. Ultimately, Rosetta was a band-aid for Apple, and one that Apple ripped off relatively quickly: Apple had already dropped Rosetta by the time of Mac OS X 10.7 (Lion) in 2011. So even with Rosetta, Apple made it clear to developers that it expected them to update their apps for x86 if they wanted to keep selling them and keep users happy.

Ultimately, the PowerPC-to-x86 transition set the tone for the modern, agile Apple. Since then, Apple has built an entire development philosophy around moving fast and changing things as it sees fit, with limited concern for backwards compatibility. This has left users and developers little choice but to enjoy the ride and keep up with Apple’s pace of development. But it has also given Apple the ability to introduce new technologies early and, when necessary, break old apps so that new features aren’t held back by backwards-compatibility concerns.

All of this has happened before, and it will all happen again starting next week, when Apple launches its first Apple M1-based Macs. Universal binaries are back, Rosetta is back, and Apple’s efforts to get developers’ apps up and running on Arm are in full swing. The PPC-to-x86 transition created Apple’s blueprint for an ISA change, and after that successful transition, the company is doing it all over again in the coming years – this time with Apple as its own chip supplier.

An in-depth analysis of the microarchitecture and benchmarks

On the next page, we’ll take a look at the Firestorm cores from the A14 that will also be used in the M1, and run some in-depth benchmarks on the iPhone chip, setting a baseline for what to expect from the M1.
