Why NVIDIA's Growth Days May Be Over – The Motley Fool




NVIDIA (NASDAQ: NVDA) has become one of the hottest companies in the technology sector. Its graphics processing units (GPUs) have made computing more efficient, driving its stock up more than 500% over the last three years.

I have long been a fan of NVIDIA. I visited the company's headquarters in 2015 and was fascinated by its deep learning work. I have been a shareholder for years, and the stock has made me a lot of money.

But NVIDIA has come under criticism in recent months, and analysts are divided on where its future growth will come from. The company already dominates mainstream gaming and now sees data centers as a key market for expanding sales. Bulls point to 58% growth in NVIDIA's data center revenue as a sign of solid progress.

I remain much more skeptical. I agree that data centers really matter, especially those of the largest cloud providers such as Amazon.com (NASDAQ: AMZN), with its Amazon Web Services (AWS) business, and Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL), with its Google Cloud Platform.

But it is becoming clearer to me that GPUs are not the right solution for data centers. I think NVIDIA's future will look very different from its past, and that is not good news for investors.

Is there still room for NVIDIA GPUs in data centers like this one? Image source: Getty Images.

Looking towards the clouds

Outfitting the huge data centers of the big cloud providers is the holy grail for hardware vendors. There are dozens of these facilities around the world, each handling massive amounts of data to analyze. A win here could mean significant future GPU sales for NVIDIA.

But the cloud titans have unique needs for their workloads. And often those needs don't really line up with what GPUs are good at.

For example, Amazon designs its own chips so its Echo devices respond better to voice commands. For this application, latency (how quickly Alexa can understand and answer) matters most. Google also designs its own chips to reduce its data centers' power consumption. Here, the number of operations performed per watt of energy consumed matters most. Neither of these applications requires image or video recognition.
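To make those two metrics concrete, here is a minimal sketch in Python. The chip names and performance figures are invented purely for illustration; only the formulas for latency and operations per watt are the point.

```python
# Hypothetical illustration of the two metrics above: latency per request
# and operations per watt. All chip names and numbers are made up.

def latency_ms(ops_per_request: float, ops_per_second: float) -> float:
    """Time to serve a single request, in milliseconds."""
    return ops_per_request / ops_per_second * 1000.0

def ops_per_watt(ops_per_second: float, watts: float) -> float:
    """Throughput delivered per watt of power drawn."""
    return ops_per_second / watts

# Two imaginary accelerators serving the same voice model that needs
# 2 billion operations per request.
chips = {
    "general_purpose_gpu": {"ops_per_second": 100e12, "watts": 250.0},
    "custom_voice_asic": {"ops_per_second": 20e12, "watts": 15.0},
}

for name, spec in chips.items():
    print(
        name,
        f"latency: {latency_ms(2e9, spec['ops_per_second']):.3f} ms",
        f"efficiency: {ops_per_watt(spec['ops_per_second'], spec['watts']):.2e} ops/W",
    )
```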

The underlying code of software applications is also constantly evolving, and machine learning algorithms are continually retrained. Alexa may have originally learned English from the Oxford English Dictionary, but it was eventually trained to recognize that "killing this interview" is a good thing and not an act of homicide.

Now multiply each application's specific needs and ever-changing code by the hundreds of thousands of other customers renting storage, processing, and "machine learning as a service" from cloud providers such as Amazon Web Services and Google Cloud Platform. Things start to get very complex.

To its credit, NVIDIA has done its best to solve its customers' problems. It has used optimizers to match the logic of what customers are trying to accomplish to the GPU hardware products best suited to accomplish it. TensorRT is one example of how NVIDIA GPUs can be configured to run certain data center applications.
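As a rough sketch of what that optimization step looks like in practice, the snippet below follows the general shape of TensorRT's Python API: a trained model (assumed here to have been exported as "model.onnx") is parsed and compiled into an engine tuned for whatever NVIDIA GPU it runs on. Exact class and method names vary across TensorRT versions, so treat this as illustrative rather than definitive.

```python
# Sketch of a typical TensorRT build flow: parse an ONNX model and compile it
# into an engine optimized for the local NVIDIA GPU. API details vary by
# TensorRT version; "model.onnx" is an assumed, pre-exported model file.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # trade a little precision for speed

# Serialize an engine specialized for this GPU's architecture.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```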

But the fit is never perfect.

Companies have had to accept that their software gets adapted, to some extent, to NVIDIA's available hardware models and their associated features. End users didn't know which architecture NVIDIA's optimizers were abstracting away behind the scenes. They just knew that GPUs handled their needs significantly better than CPUs did. That's exactly why we've seen such tremendous GPU growth in recent years.

This is where NVIDIA's problem lies. GPUs are not a magic way to handle all of the data center's complexity. Inefficiencies are created every time an application is force-fit onto a GPU. And things are becoming more complex every day.

Large companies with deep pockets already design their own application-specific integrated circuits (ASICs) to optimize individual tasks. But even for those without billions of dollars and dedicated research teams, a solution is starting to appear.

What if chips could be programmed, then reprogrammed, to always perfectly match whatever you want your software to do?

Artistic rendering of an integrated circuit. Image source: Getty Images.

Enter the FPGA

These chips exist, and they're called field-programmable gate arrays (FPGAs). The logic of field-programmable chips can be changed again and again, which makes them adaptable to changing software requirements.

That distinguishes them from instruction-based chips such as CPUs and GPUs, but the distinction hasn't really mattered in the past. Traditionally, computing was done entirely on CPUs, whose performance doubled every 18 months anyway. And then GPUs found a more efficient way to improve on CPUs.

But that point of differentiation really matters today. Artificial intelligence (AI) is a fickle beast, and its constantly retrained algorithms are hard for CPUs and GPUs to keep up with. And just as with Alexa, latency is becoming more and more important for every application that needs a split-second response. Inefficiencies are becoming less tolerable.

The programmable nature of FPGAs could eliminate those inefficiencies entirely. By abstracting the layer between software models and hardware (a "model optimizer library," for the technically inclined), an ecosystem built on FPGAs could theoretically run any application. Perfectly. FPGA as a service could look like CUDA on steroids: built to fit the hardware to specific algorithms, rather than fitting algorithms to the nearest available NVIDIA GPU. And FPGAs can be optimized and then re-optimized over time, which is handy when code and logic keep changing.
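To give a sense of what such an abstraction layer might look like, here is a purely hypothetical Python sketch. Every class and function name is invented for illustration; no real vendor API is implied. The point is only that the caller hands over a model graph and never needs to know whether it lands on fixed GPU kernels or on freshly synthesized FPGA logic.

```python
# A purely hypothetical sketch of the "model optimizer library" idea: a thin
# dispatch layer that compiles the same model graph either to a fixed GPU
# kernel plan or to a re-flashable FPGA bitstream. All names are invented.
from dataclasses import dataclass

@dataclass
class ModelGraph:
    name: str
    operations: list  # e.g. ["matmul", "relu", "softmax"]

class GpuBackend:
    def compile(self, graph: ModelGraph) -> str:
        # Maps each operation onto the fixed kernels the GPU already provides;
        # anything without a good kernel match runs inefficiently.
        return f"gpu-plan({graph.name}: {len(graph.operations)} fixed kernels)"

class FpgaBackend:
    def compile(self, graph: ModelGraph) -> str:
        # Synthesizes logic tailored to this exact graph, and can be
        # re-synthesized later when the model is retrained or rewritten.
        return f"fpga-bitstream({graph.name}: custom logic, re-flashable)"

def deploy(graph: ModelGraph, backend) -> str:
    """The abstraction layer: callers never see which hardware they hit."""
    return backend.compile(graph)

if __name__ == "__main__":
    voice_model = ModelGraph("voice-assistant", ["conv1d", "gru", "softmax"])
    print(deploy(voice_model, GpuBackend()))
    print(deploy(voice_model, FpgaBackend()))
```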

FPGAs won't be the solution for everyone. Their upfront cost is higher than that of CPUs and GPUs. And they take a long time to program, work that has to be done by very experienced engineers.

But those factors aren't as worrying for cloud vendors. They have access to the engineering talent and can afford the high upfront cost. The benefit they get is lower overall energy costs in their data centers, which they can pass along as more competitive rates to the customers renting their processing and storage.

That's the appeal of FPGAs, and their time has come. It's exactly why the biggest cloud providers, including AWS, Microsoft, Alibaba, and Baidu, are deploying them around the world at a rapid pace. I think NVIDIA's data center growth rates will slow, and its best days may be behind it.

John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten, an employee of LinkedIn, a Microsoft subsidiary, is a member of The Motley Fool's board of directors. Simon Erickson owns shares of Amazon and Nvidia. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, Baidu, and Nvidia. The Motley Fool owns shares of Microsoft. The Motley Fool has a disclosure policy.
