SAN FRANCISCO – The largest computer chips usually fit in the palm of your hand. Some might rest on the tip of your finger. Conventional wisdom says that something bigger is a problem.
Cerebras, a Silicon Valley start-up, disputes this idea. On Monday, the company unveiled what it claims is the biggest computer chip ever built. Roughly the size of a dinner plate and about 100 times larger than a typical chip, it would barely fit in your lap.
The engineers behind the chip believe it can be used in giant data centers and help accelerate the progress of artificial intelligence everywhere, from autonomous cars to digital assistants like Amazon's Alexa.
Many companies are building new chips for A.I., including traditional chip makers such as Intel and Qualcomm, as well as other start-ups in the United States, Britain, and China.
Some experts believe that these chips will play a key role in the race for the creation of an artificial intelligence, potentially changing the balance of power between technology companies and even countries. They could fuel the creation of commercial products and government technologies, including surveillance systems and autonomous weapons.
Google has already built such a chip and uses it in a wide range of A.I. projects, including Google Assistant, which recognizes voice commands on Android phones, and Google Translate, which translates one language into another.
"There is monstrous growth in this area," said Andrew Feldman, chief executive and founder of Cerebras, a chip industry veteran who previously sold a company to the American chip giant AMD.
New A.I. systems rely on neural networks. Loosely modeled on the web of neurons in the human brain, these complex mathematical systems can learn tasks by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, a neural network can learn to recognize a cat.
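The learning loop the article describes can be sketched in miniature. This is not Cerebras's or Google's software, just a toy illustration: a single artificial neuron adjusts its weights to reduce its error on labeled examples, which is the core mechanism behind pattern recognition in neural networks.

```python
# Toy sketch of neural-network learning (illustrative only): a single
# neuron learns to separate "bright" from "dark" two-pixel patterns by
# nudging its weights whenever it misclassifies a labeled example.

def train(examples, steps=1000, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when the guess is right
            w[0] += lr * err * x[0]     # move weights toward the answer
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Label 1: "bright" patterns; label 0: "dark" patterns.
data = [([1.0, 0.9], 1), ([0.9, 1.0], 1), ([0.1, 0.0], 0), ([0.0, 0.2], 0)]
w, b = train(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(predict([0.95, 0.85]))  # classifies an unseen "bright" pattern
```

Real networks have millions or billions of such weights, which is why they demand so much computing power.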
This requires a particular type of computing power. Today, most companies analyze data with the help of graphics processing units, or G.P.U.s. These chips were originally designed to render images for games and other software, but they are also effective at performing the calculations that power a neural network.
About six years ago, as tech giants such as Google, Facebook and Microsoft doubled down on artificial intelligence, they began buying huge numbers of G.P.U.s from the Silicon Valley chip maker Nvidia. In the year before the summer of 2016, Nvidia sold $143 million worth of G.P.U.s, more than double the year before.
But companies wanted even more processing power. Google has built a chip specifically for neural networks – the tensor processing unit, or T.P.U. – and several other chip makers have pursued the same goal.
A.I. systems work with many chips operating together. The problem is that moving chunks of data between chips can be slow, which limits how quickly the chips can analyze that information.
"Connecting all these chips slows them down and consumes a lot of energy," said Subramanian Iyer, a professor at the University of California, Los Angeles, who specializes in designing chips for artificial intelligence.
Hardware makers are exploring many different options. Some are trying to widen the pipes between chips. Cerebras, a three-year-old company with more than $200 million in funding, has taken a more radical approach. The idea is to keep all the data on one giant chip so the system can run faster.
Building one big chip is very difficult. Computer chips are usually made on round silicon wafers about 12 inches in diameter, and each wafer typically contains about 100 chips.
Many of these chips, once cut from the wafer, are discarded and never used. Etching circuits into silicon is such a complex process that manufacturers cannot eliminate defects; some circuits simply do not work. This is one reason chip makers keep their chips small: a smaller chip is less likely to contain a flaw, so fewer have to be thrown away.
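The trade-off between chip size and discarded chips can be quantified with a standard first-order approximation from semiconductor manufacturing, the Poisson yield model: the chance a chip has zero fatal defects falls exponentially with its area. The defect density below is an assumed illustrative value, not TSMC's actual figure.

```python
import math

# First-order Poisson yield model (textbook approximation): the chance
# a chip escapes all fatal defects is exp(-defect_density * area).
DEFECTS_PER_CM2 = 0.1  # assumed illustrative defect density

def yield_fraction(die_area_cm2):
    return math.exp(-DEFECTS_PER_CM2 * die_area_cm2)

small_die = yield_fraction(1.0)    # a typical ~1 sq cm chip
huge_die = yield_fraction(100.0)   # a chip ~100x larger
print(f"small chip: {small_die:.1%} good, huge chip: {huge_die:.4%} good")
```

Under these assumptions, roughly 90 percent of small chips come out working while a defect-free wafer-sized chip is essentially impossible, which is why Cerebras must design around defects rather than avoid them.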
Cerebras says it has built a chip the size of an entire wafer.
Others have tried before, including a start-up called Trilogy, founded in 1980 by the famed IBM chip engineer Gene Amdahl. Despite raising more than $230 million, Trilogy ultimately decided the task was too difficult, and it folded after five years.
Nearly 35 years later, Cerebras plans to start shipping hardware to a small number of customers next month. Mr. Feldman said the chip could run A.I. systems 100 to 1,000 times faster than existing hardware.
He and his engineers divided their giant chip into smaller sections, or cores, knowing that some cores would not work. The chip is designed to route information around these defective areas.
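The routing idea can be sketched in a few lines. This is not Cerebras's actual design, just an illustration of the principle: treat the chip as a grid of cores, mark a few positions as defective, and find a message path that steps only through working neighbors.

```python
from collections import deque

# Illustrative sketch (not Cerebras's actual design): route a message
# across a grid of cores using breadth-first search, stepping only
# through cores that are not marked defective.

def route(size, defective, start, goal):
    """Return a path of (row, col) cores from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < size and 0 <= nc < size
                    and (nr, nc) not in defective and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

bad_cores = {(1, 1), (1, 2), (2, 1)}  # assumed defect positions
path = route(4, bad_cores, (0, 0), (3, 3))
print(path)  # a detour that avoids the defective cores
```

Because any single core can fail, the design sacrifices a little capacity on every chip in exchange for never having to discard a whole wafer.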
Important questions hang over the company's hardware. Mr. Feldman's performance claims have not been independently verified, and he did not disclose the cost of the chip.
The price will depend on how effectively Cerebras and its manufacturing partner, TSMC, based in Taiwan, can build the chip.
The process requires "a lot more manpower," said Brad Paulsen, a senior vice president at TSMC. A chip this big also consumes a lot of power, which means it will be difficult, and expensive, to keep cool. In other words, building the chip is only part of the task.
"It's a challenge for us," said Paulsen. "And it's a challenge for them."
Cerebras plans to sell the chip as part of a much larger machine that includes elaborate equipment for cooling the silicon with chilled liquid. It is nothing like what big tech companies and government agencies are used to working with.
"It's not that people haven't been able to build this kind of chip," said Rakesh Kumar, a professor at the University of Illinois who is also exploring large-scale chip designs. "The problem is that they haven't been able to build one that is commercially feasible."