“We are giddy” – interviewing Apple about its Mac silicon revolution




The graphic depicting the Apple M1 chip, as presented by Apple at an event earlier this month.

Some time ago, in a building on the Apple campus, a group of engineers met. Isolated from the rest of the company, they took the guts of old MacBook Air laptops and wired them to their own prototype boards, with the goal of building the very first machines to run macOS on Apple’s custom-designed ARM silicon.

To hear Apple’s Craig Federighi tell the story, it sounds a bit like a callback to Steve Wozniak in a Silicon Valley garage so many years ago. And this week, Apple finally took the big step those engineers were preparing for: the company released the first Macs running on Apple Silicon, beginning the Mac product line’s transition away from Intel processors, which have been the industry standard for desktops and laptops for decades.

In a conversation shortly after the M1’s announcement with Craig Federighi, Apple’s Senior Vice President of Software Engineering; Greg Joswiak, Senior Vice President of Global Marketing; and Johny Srouji, Senior Vice President of Hardware Technologies, we learned that, unsurprisingly, Apple had been planning this change for many, many years.

Ars spoke at length with these leaders about the architecture of the first Apple Silicon chip for the Mac (the Apple M1). And while we also had to educate ourselves about the edge cases of software support, one big question hung over it all: what are the reasons for Apple’s drastic change?

Why? And why now?

We started with that big question: “Why? And why now?” We got a very Apple response from Federighi:

The Mac is the soul of Apple. I mean, the Mac is what brought a lot of us into computing. And it was the Mac that brought many of us to Apple. And the Mac is still the tool we all use to do our jobs, to do everything we do here at Apple. And so having the opportunity … to apply everything we’ve learned to the systems that are central to how we live our lives is obviously a long-term ambition and kind of a dream come true.

“We want to create the best possible products,” Srouji added. “We really needed our own custom silicon to really deliver the best Macs we can offer.”

Apple started using Intel x86 processors in 2006, after it became clear that PowerPC (the Mac’s previous processor architecture) was reaching the end of the road. In the early years, those Intel chips were a huge boon to the Mac: they allowed interoperability with Windows and other platforms, making the Mac a much more flexible computer. They let Apple focus more on increasingly popular laptops in addition to desktops. And they made the Mac more popular overall, alongside the booming success of the iPod and, soon after, the iPhone.

And for a long time, Intel’s performance was second to none. But in recent years, Intel’s processor track record has been less reliable, both in terms of performance gains and in consistency. Mac users noticed. Still, the three men we spoke with insisted that this was not the driving force behind the change.

“This is about what we could do, right?” Joswiak said. “Not about what someone else could or couldn’t do.”

“Every business has an agenda,” he continued. “The software company wishes the hardware companies would do it. The hardware companies wish the operating system company would do it, but they have competing agendas. And that’s not the case here. We had only one agenda.”

When the decision was finally made, the circle of people who knew about it was initially quite small. “But those people who knew were walking around smiling from the moment we said we were going down that road,” Federighi recalls.

Srouji described Apple as being in a privileged position to succeed: “As you know, we don’t design chips as merchant vendors or as generic solutions, which allows very tight integration with the software, the system, and the product – exactly what we need.”

Our virtual session included Greg “Joz” Joswiak (Senior Vice President, Global Marketing), Craig Federighi (Senior Vice President, Software Engineering), and Johny Srouji (Senior Vice President, Hardware Technologies).

Aurich Lawson / Apple

M1 design

What Apple needed was a chip that took the lessons learned from years of refining mobile systems-on-a-chip for the iPhone, iPad, and other products and then added all sorts of additional functionality to address the expanded needs of a laptop or desktop computer.

“During pre-silicon, when we even designed the architecture or defined the features,” Srouji recalls, “Craig and I are sitting in the same room and we’re like, ‘OK, here’s what we want to design. Here are the things that matter.’”

When Apple first announced plans to launch the first Apple Silicon Mac this year, onlookers speculated that the iPad Pro’s A12X or A12Z chips were a template and that the new Mac chip would be something like an A14X – a beefed-up variant of the chips that shipped in the iPhone 12 this year.

Not exactly, says Federighi:

The M1 is essentially a superset, if you want to think of it relative to A14. Because when we set out to build a Mac chip, there were many differences from what we otherwise would have had in a corresponding A14X, for example, or something like that.

We had done extensive analysis of Mac application workloads: the types of graphics/GPU capabilities needed to run a typical Mac workload, the texture formats required, support for different kinds of GPU compute and the things available on the Mac… even the number of cores, the ability to drive Mac-sized displays, and support for virtualization and Thunderbolt.

There are many, many features that we designed in M1 that were Mac requirements, but these are all superset capabilities over what an app compiled for the iPhone might expect.

Srouji expanded on this point:

The foundation of many of the IPs that we built, which became the foundation for M1… started over ten years ago. As you may know, we started with our own CPU, then graphics, ISP, and the Neural Engine.

So we’ve been building these great technologies for about ten years, and then several years ago we said, “Now is the time to use what we call a scalable architecture,” because we had the foundation of these great IPs, and the architecture is scalable with UMA.

Then we said, “Now is the time to make a custom chip for the Mac,” which is M1. It’s not like an iPhone chip on steroids. It’s a whole different custom chip, but we do use the foundation of many of these great IPs.

Unified memory architecture

UMA stands for “unified memory architecture.” When prospective users look at M1 benchmarks and wonder how it’s possible that a low-power, mobile-derived chip is capable of that kind of performance, Apple points to UMA as a key ingredient in that success.

Federighi asserted that “modern compute or graphics rendering pipelines” have evolved into a “hybrid” of GPU computing, GPU rendering, image signal processing, and so on.

UMA essentially means that all the components – the central processor (CPU), graphics processor (GPU), neural processor (NPU), image signal processor (ISP), and so on – share one pool of very fast memory, positioned very close to all of them. This is contrary to the common desktop paradigm of, say, dedicating one pool of memory to the CPU and another to the GPU on the other side of the board.

A slide used by Apple to showcase the M1’s unified memory architecture at an event this year.

Samuel Axon

When users run demanding, multifaceted applications, traditional pipelines can end up wasting a lot of time and efficiency moving or copying data around so that it is accessible to all of those different processors. Federighi suggested that Apple’s success with the M1 is in part due to rejecting this inefficient hardware-and-software paradigm:

Not only did we get the great benefit of raw performance from our GPU, but just as important was the fact that, with the unified memory architecture, we weren’t constantly moving data back and forth and changing formats in ways that slowed it down. And we got a huge increase in performance.

And so I think the workloads of the past, where it’s “find the triangles you want to draw, send them off to the discrete GPU, let it do its thing, and never look back” – that’s not what a modern computer rendering pipeline looks like today. These things move back and forth between many different threads to accomplish these effects.
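Neither executive walked through code, but a minimal sketch can make the idea concrete. The snippet below is our illustration rather than anything Apple showed, using Metal’s public API with arbitrary buffer sizes and names: on a unified-memory machine, a single shared allocation is addressable by both the CPU and the GPU, with no staging copy in between.

```swift
import Metal

// Illustrative sketch only: sizes and names are arbitrary.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }
print("Unified memory:", device.hasUnifiedMemory) // true on an M1 Mac

let count = 1_000_000
let length = count * MemoryLayout<Float>.stride

// One allocation, visible to both the CPU and the GPU.
guard let buffer = device.makeBuffer(length: length, options: .storageModeShared) else {
    fatalError("Allocation failed")
}

// The CPU writes directly into the same memory a compute or render pass
// will later read, so there is no explicit upload step and no second copy.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

// On a Mac with a discrete GPU, the equivalent pattern typically needs a
// private, VRAM-resident buffer plus a blit from a staging buffer – exactly
// the copying and format-shuffling Federighi describes avoiding.
```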

This is not the only optimization. For a few years now, Apple’s Metal graphics API has made use of “tile-based deferred rendering,” which the M1’s GPU is designed to take full advantage of. Federighi explained:

Where old-fashioned GPUs would basically operate on the entire frame at once, we operate on tiles that we can move into extremely fast on-chip memory, and then perform a huge sequence of operations with all the different execution units on that tile. It’s incredibly bandwidth-efficient in a way that those discrete GPUs are not. And then you combine that with the massive width of our pipeline to RAM and the other efficiencies of the chip, and it’s a better architecture.
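Federighi stayed at the architectural level, but Metal does expose this tile memory to developers. As a rough, illustrative sketch (again not Apple’s code; the pixel format, resolution, and attachment index are arbitrary), an intermediate render target on an Apple-silicon Mac can be declared memoryless, so it lives entirely in on-chip tile memory and is never written out to RAM between passes:

```swift
import Metal

// Illustrative sketch only.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

// Describe a G-buffer-style intermediate attachment that exists only in tile memory.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                    width: 2560,
                                                    height: 1600,
                                                    mipmapped: false)
desc.usage = [.renderTarget]
desc.storageMode = .memoryless // tile memory only; no backing allocation in DRAM
let intermediate = device.makeTexture(descriptor: desc)

// Attach it to a render pass (attachment 0, the final drawable, is omitted here).
let pass = MTLRenderPassDescriptor()
pass.colorAttachments[1].texture = intermediate
pass.colorAttachments[1].loadAction = .clear      // produced on-tile...
pass.colorAttachments[1].storeAction = .dontCare  // ...consumed on-tile, never stored to RAM
```

On a traditional discrete GPU, that same intermediate attachment would occupy VRAM and be written and read back across the memory bus between passes – the bandwidth cost Federighi is describing.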

