Intel is launching its next-generation neuromorphic processor. So what is neuromorphic computing, anyway?




Mike Davies, director of Intel’s Neuromorphic Computing Lab, leads the company’s efforts in this area. And with the launch of a new neuromorphic chip this week, he spoke with Ars about the updates.

Despite their name, neural networks are only distantly related to the kinds of things you would find in a brain. While their organization and the way they move data through layers of processing bear a rough similarity to what happens in real nervous systems, the data, and the calculations performed on it, would look entirely familiar to a standard processor.

But neural networks aren’t the only way people have tried to learn from the nervous system. There is a separate discipline called neuromorphic computing, which is based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by many small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others.

Intel released the latest version of its neuromorphic hardware, called Loihi, on Thursday. The new version comes with the kinds of things you’d expect from Intel: a better processor and some basic compute improvements. But it also comes with fundamental hardware changes that will allow it to run whole new classes of algorithms. And while Loihi remains a research-driven product for now, Intel is also releasing a compiler that it hopes will lead to wider adoption.

To make sense of Loihi and what’s new in this version, let’s go back and start by looking at neurobiology a bit, and then build from there.

From neurons to computing

The basis of the nervous system is the cell type called a neuron. All neurons share some common functional features. At one end of the cell are structures called dendrites, which you can think of as receivers: this is where the neuron gets input from other cells. Neurons also have axons, which act as transmitters, making connections with other cells to pass signals along.

The signals take the form of what are called “spikes,” which are brief changes in the voltage across the neuron’s cell membrane. Spikes travel along axons until they reach junctions with other cells (called synapses), where they are converted into a chemical signal that travels to the nearby dendrite. That chemical signal opens channels that allow ions to flow into the cell, potentially triggering a new spike in the recipient cell.

The recipient cell integrates a variety of information – how many spikes it has seen, whether any neurons are signaling that it should stay quiet, how active it was recently, etc. – and uses that to determine its own activity state. Once a threshold is crossed, it will trigger a spike down its own axon and potentially set off activity in other cells.

Typically, this results in sporadic, randomly spaced spikes of activity when the neuron isn’t receiving much input. Once it starts receiving signals, however, it shifts to an active state and fires off a bunch of spikes in quick succession.
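The integrate-and-fire behavior described above can be sketched in a few lines of Python. This is a toy model for illustration only – the class name, the leak and noise terms, and every constant here are assumptions of mine, not anything taken from real neurons or from Intel’s hardware:

```python
import random

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron. All names and constants
    are illustrative, not taken from any real chip or cell."""

    def __init__(self, threshold=1.0, leak=0.9, noise=0.02):
        self.potential = 0.0   # rough analogue of the membrane voltage
        self.threshold = threshold
        self.leak = leak       # fraction of potential retained each step
        self.noise = noise     # chance of a sporadic background spike

    def step(self, incoming_spikes):
        # Integrate the inputs while letting the potential decay toward rest.
        self.potential = self.potential * self.leak + sum(incoming_spikes)
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return True            # emit a spike
        # Below threshold: only occasional, randomly spaced background spikes.
        return random.random() < self.noise

neuron = LIFNeuron()
quiet = sum(neuron.step([]) for _ in range(100))           # little input: rare spikes
driven = sum(neuron.step([0.6, 0.6]) for _ in range(100))  # strong input: fires every step
```

With no input, only the occasional random background spike fires; with sustained input, the potential crosses the threshold on every step, producing the burst of rapid spikes described above.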

A neuron, with the dendrites (spiky protuberances at the top) and part of the axon (long extension at the bottom right) visible.

How does this process encode and manipulate information? This is an interesting and important question, which we are just beginning to answer.

One of the ways we’ve gone about answering this question is a field that has been called theoretical neurobiology (or computational neurobiology). It involves attempts to build mathematical models that reflect the behavior of nervous systems and neurons, in the hope that these will let us identify some underlying principles. Neural networks, which focus on the organizational principles of the nervous system, are one of the efforts to emerge from this field. Spiking neural networks, which attempt to build on the behavior of individual neurons, are another.

Spiking neural networks can be implemented in software on traditional processors. But it’s also possible to implement them in hardware, as Intel has done with Loihi. The result is a processor utterly unlike anything you’re likely to be familiar with.

Spiking in silicon

The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of these cores has a large number of individual “neurons,” or execution units. Each of these neurons can receive spike input from any other neuron – a neighbor in the same core, a unit in a different core on the same chip, or one on a different chip entirely. The neuron integrates the spikes it receives over time and, depending on the behavior it is programmed with, uses that to determine when to send spikes of its own to whichever neurons it’s connected to.

All of the spike signaling happens asynchronously. At set time intervals, embedded x86 cores on the same chip force a synchronization. At that point, a neuron re-weights its various connections – essentially deciding how much attention to pay to each of the individual neurons that send signals to it.

In terms of an actual neuron, part of each execution unit’s hardware acts like a dendrite, processing incoming signals from the communication network based in part on the weights derived from past behavior. A mathematical formula is then used to determine when activity has crossed a critical threshold, and to trigger spikes of its own when it does. The unit’s “axon” then looks up which other units it communicates with and delivers a spike to each of them.
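That dendrite-threshold-axon pipeline can be sketched as a single synchronization step over a handful of software “neurons.” This is a hypothetical illustration – the dict-based layout, the field names, and the routing scheme are my assumptions, not Intel’s actual data format or API:

```python
def step_core(neurons, spikes_in):
    """One illustrative update step. `spikes_in` maps a neuron id to the
    list of source ids that spiked at it since the last synchronization."""
    spikes_out = {}
    for nid, n in neurons.items():
        # "Dendrite": weight each incoming spike by the learned connection weight.
        drive = sum(n["weights"].get(src, 0.0) for src in spikes_in.get(nid, []))
        n["potential"] += drive
        # Threshold check: fire and reset once the accumulated drive crosses it.
        if n["potential"] >= n["threshold"]:
            n["potential"] = 0.0
            # "Axon": deliver a spike to every neuron on this unit's target list.
            for tgt in n["targets"]:
                spikes_out.setdefault(tgt, []).append(nid)
    return spikes_out

neurons = {
    0: {"weights": {99: 0.4}, "potential": 0.8, "threshold": 1.0, "targets": [1]},
    1: {"weights": {0: 0.5}, "potential": 0.6, "threshold": 1.0, "targets": []},
}
print(step_core(neurons, {0: [99]}))  # neuron 0 crosses threshold → {1: [0]}
```

The spikes emitted by one step become the input to the next, which is how activity propagates from core to core between synchronizations.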

In the previous iteration of Loihi, a spike carried just a single bit of information. A neuron simply registered when it received one.

Unlike a normal processor, there’s no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons it sends spikes to.
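That per-neuron memory can be pictured as a small record holding exactly those three things. A minimal sketch, with field names that are my own labels rather than Intel’s actual on-chip format:

```python
from dataclasses import dataclass, field

@dataclass
class NeuronLocalState:
    """Illustrative stand-in for the small memory cache each neuron keeps
    locally; the names and types here are assumptions, not Intel's layout."""
    weights: dict = field(default_factory=dict)          # per-source input weights
    recent_activity: list = field(default_factory=list)  # cache of recent spike times
    targets: list = field(default_factory=list)          # neurons this one sends spikes to
```

Because all of this state sits right next to the execution unit, handling a spike never requires a round trip to external memory, which is one reason neuromorphic designs can be so frugal with power.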

One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out far ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even when it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors. Davies said Loihi can beat traditional processors by a factor of 2,000 on specific workloads. “We regularly find 100 times [less energy] for SLAM and other robotic workloads,” he added.
