A New Way to Solve the "Toughest" Computing Problems




A relatively new type of computing that mimics the way the human brain works was already transforming how scientists tackle some of the most difficult information-processing problems.

Now, researchers have found a way to make what is known as reservoir computing work between 33 and a million times faster, with far fewer computing resources and far less data input required.

In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.

Using the state-of-the-art technology available today, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

“We can perform very complex information processing tasks in a fraction of the time using much less computing resources compared to what reservoir computing can do today,” said Gauthier.

“And reservoir computing was already a significant improvement over what was previously possible.”

The study was published today (September 21, 2021) in the journal Nature Communications.

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve “the most difficult” computing problems, such as predicting the evolution of dynamic systems that change over time, Gauthier said.

Dynamic systems, like the weather, are hard to predict because a single small change in a condition can have massive effects down the line, he said.

A famous example is the “butterfly effect,” the metaphor in which the changes created by a butterfly flapping its wings can eventually influence the weather weeks later.

Previous research has shown that reservoir computing is well suited to learning dynamic systems and can provide accurate predictions of how they will behave in the future, Gauthier said.

It does this through the use of an artificial neural network, somewhat like a human brain. Scientists feed data from a dynamical system into a “reservoir” of randomly connected artificial neurons. The network produces useful output that scientists can interpret and feed back into the network, building an increasingly accurate prediction of how the system will evolve in the future.
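To make that loop concrete, here is a minimal NumPy sketch of the conventional, random-reservoir approach (an echo state network): fixed random weights drive the reservoir, and only a linear readout is trained. The reservoir size, weight scales, warm-up length and the toy sine-wave task are illustrative choices for this sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 1-D input signal and a reservoir of 300 random neurons.
n_in, n_res = 1, 300

# Fixed random weights: input-to-reservoir and recurrent reservoir connections.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)
    and return the reservoir states, shape (T, n_res)."""
    r = np.zeros(n_res)
    states = []
    for u_t in u:
        r = np.tanh(W @ r + W_in @ u_t)  # update every neuron in the reservoir
        states.append(r.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Fit the linear readout (the only trained part) with ridge regression."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy example: learn to predict the next value of a sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t)[:, None]
states = run_reservoir(u[:-1])
W_out = train_readout(states[200:], u[1:][200:])  # discard 200 warm-up steps
prediction = states[-1] @ W_out                   # one-step-ahead forecast
```

The interpreted output can then be fed back in as the next input, step after step, to build the kind of rolling forecast described above.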

The larger and more complex the system and the more accurate scientists want the predictions to be, the larger the artificial neural network must be and the more computing resources and time it takes to accomplish the task.

One of the problems is that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists don’t know exactly what’s going on inside – they just know it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

“We had mathematicians look at these networks and ask: How necessary are all of these pieces of machinery, really?” he said.

In this study, Gauthier and his colleagues investigated that question and found that the entire reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.

They tested their concept on a forecasting task involving a simplified weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.
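For context, the Lorenz system used in such forecasting benchmarks is a set of three coupled differential equations with the classic parameters σ = 10, ρ = 28 and β = 8/3. A short sketch of generating a trajectory from it (the step size and starting point here are arbitrary illustrations):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz-63 equations:
    dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate a chaotic trajectory from an arbitrary initial condition.
state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(5000):
    state = lorenz_step(state)
    trajectory.append(state)
trajectory = np.array(trajectory)  # shape (5001, 3): points on the attractor
```

A forecasting benchmark of this kind asks the model to predict where such a trajectory goes next, which is hard precisely because tiny differences in the starting point grow quickly.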

Their next-generation reservoir computing was a clear winner over today's state of the art on this Lorenz forecasting task. In a relatively simple simulation performed on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the goal was high forecasting accuracy, the next-generation reservoir computing was about 1 million times faster. And it achieved the same accuracy with the equivalent of just 28 neurons, compared with the 4,000 needed by the current-generation model, Gauthier said.

An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs much less warm-up and training than the current generation to produce the same results.

Warm-up is training data that must be fed into the reservoir computer to prepare it for its actual task.

“For our next-generation reservoir computing, there is almost no warm-up time required,” said Gauthier.

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that is all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.
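In the underlying paper, the next-generation approach replaces the large random network with a feature vector built from a few time-delayed samples of the data and their polynomial combinations, trained with a simple linear (ridge) regression, which is why only a handful of points are needed before forecasting can begin. The sketch below illustrates that idea; the number of delays, the quadratic features, the ridge parameter and the toy data are placeholder choices, not the authors' exact settings.

```python
import numpy as np
from itertools import combinations_with_replacement

def nvar_features(data, k=2, skip=1):
    """Build feature vectors from k time-delayed samples of the series plus
    their quadratic products (a nonlinear vector autoregression)."""
    T, d = data.shape
    rows = []
    for t in range((k - 1) * skip, T):
        # Linear part: the current sample and k-1 delayed copies.
        lin = np.concatenate([data[t - i * skip] for i in range(k)])
        # Nonlinear part: all unique quadratic products of the linear terms.
        quad = [lin[i] * lin[j]
                for i, j in combinations_with_replacement(range(len(lin)), 2)]
        rows.append(np.concatenate([[1.0], lin, quad]))
    return np.array(rows)

def fit_ridge(features, targets, ridge=1e-5):
    """Fit the linear readout, the only trained part of the model."""
    F = features
    return np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ targets)

# Toy usage on a two-dimensional sine/cosine series standing in for real data.
t = np.linspace(0, 20 * np.pi, 2000)
series = np.column_stack([np.sin(t), np.cos(t)])
X = nvar_features(series[:400])            # each feature needs only k past samples
W_out = fit_ridge(X[:-1], series[2:400])   # targets: the next sample each time
next_point = X[-1] @ W_out                 # one-step-ahead prediction
```

Because each feature vector only looks back k steps, the “warm-up” is just those few delayed samples rather than thousands of points fed through a large recurrent network.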

And once the researchers are ready to train the reservoir computer to do the prediction, again, much less data is needed in the next-gen system.

In their test of Lorenz’s prediction task, the researchers were able to achieve the same results using 400 data points as the current generation produced using 5,000 or more data points, depending on the precision desired.

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it considerably more efficient,” said Gauthier.

He and his colleagues plan to expand this work to tackle even more difficult computing problems, such as predicting fluid dynamics.

“It’s an incredibly difficult problem to solve. We want to see if we can speed up the process of solving this problem using our simplified model of reservoir computing.”

The co-authors of the study were Erik Bollt, professor of electrical and computer engineering at Clarkson University; Aaron Griffith, who received his doctorate in physics from Ohio State; and Wendson Barbosa, postdoctoral researcher in physics at Ohio State.

The work was supported by the US Air Force, the Army Research Office, and the Defense Advanced Research Projects Agency.



