New Research Says Entire Universe Could Be One Giant Neural Network



The central idea is deceptively simple: every phenomenon observable in the entire universe can be modeled by a neural network. And that means, by extension, that the universe itself can be a neural network.

Vitaly Vanchurin, a professor of physics at the University of Minnesota Duluth, posted a remarkable paper called “The World as a Neural Network” on the arXiv preprint server last August. It managed to stay under our radar until today, when Futurism’s Victor Tangermann published an interview with Vanchurin discussing the paper.

The big idea

According to the paper:

We are discussing the possibility that the entire universe, at its most basic level, is a neural network. We identify two different types of dynamic degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons).
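
To make the quoted distinction concrete, here is a minimal Python sketch (our own illustration, not code from Vanchurin’s paper) of a single recurrent layer: the weight matrix and bias vector play the role of the “trainable” variables, while the neuron state vector plays the role of the “hidden” variable that evolves as the network runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Trainable" degrees of freedom: the quantities a learning rule adjusts.
W = rng.normal(size=(4, 4))   # weight matrix
b = np.zeros(4)               # bias vector

# "Hidden" degrees of freedom: the state vector of the neurons themselves.
x = rng.normal(size=4)

def step(x, W, b):
    """One propagation step: the neuron state evolves under fixed W and b."""
    return np.tanh(W @ x + b)

# State dynamics: the hidden variables change while W and b stay fixed.
for _ in range(3):
    x = step(x, W, b)

# Learning dynamics: the trainable variables change. Here we nudge W and b so
# the next state moves toward an arbitrary target, a stand-in for the kind of
# gradient-descent training the quote alludes to.
target = np.ones(4)
learning_rate = 0.01
y = step(x, W, b)
delta = (y - target) * (1 - y**2)       # gradient of 0.5*||y - target||^2 through tanh
W -= learning_rate * np.outer(delta, x)
b -= learning_rate * delta
```

Roughly speaking, it is the behavior of these two kinds of variables that the paper uses to bridge the quantum and classical descriptions discussed next.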

At its most basic, Vanchurin’s work here attempts to explain the gap between quantum physics and classical physics. We know that quantum physics does a great job of describing what happens in the universe at very small scales. When we deal with, for example, individual photons, quantum mechanics holds up at an observable, repeatable, measurable scale.

But when we zoom out to larger scales, we’re forced to use classical physics to describe what’s going on, because we more or less lose the thread when we make the transition from observable quantum phenomena to classical observations.

The argument

The fundamental problem with crafting a theory of everything – in this case, one that defines the very nature of the universe itself – is that it usually ends up replacing one proxy-for-god with another. Where theorists have posited everything from a divine creator to the idea that we all live in a computer simulation, the two most enduring explanations of our universe rest on separate interpretations of quantum mechanics. These are the “many worlds” and “hidden variables” interpretations, and they are the two that Vanchurin attempts to reconcile with his “world as a neural network” theory.

To this end, Vanchurin concludes:

In this article, we discussed the possibility that the entire universe, at its most basic level, is a neural network. This is a very bold claim. We are not just saying that artificial neural networks can be useful for analyzing physical systems or for discovering physical laws; we are saying that this is how the world around us actually works. In this respect, it could be considered a proposal for a theory of everything, and as such, it should be easy to prove it wrong. All that is needed is to find a physical phenomenon that cannot be described by neural networks. Unfortunately (or fortunately), that is easier said than done.

A quick clarification: Vanchurin specifically says he isn’t adding anything to the “many worlds” interpretation, but that’s where the most interesting philosophical implications lie (in this author’s humble opinion).

If Vanchurin’s work holds up under peer review, or at least leads to greater scientific focus on the idea of the universe as a fully functioning neural network, then we’ll have found a thread to pull that could put us on the path to a successful theory of everything.

If we’re all nodes in a neural network, what is the network’s purpose? Is the universe one giant, closed network, or is it a single layer in a larger one? Or perhaps we’re just one of billions of other universes connected to the same network. When we train our own neural networks, we run thousands or millions of cycles until the AI is properly “trained.” Are we just one of countless training cycles for some greater-than-universal machine’s larger purpose?

You can read the full article here on arXiv.

Published March 2, 2021 – 19:18 UTC


