A family of computer scientists developed a plan for machine consciousness




Renowned researchers Manuel Blum and Lenore Blum have devoted their entire careers to the study of computer science, with a particular emphasis on consciousness. They've written dozens of papers and taught for decades at the prestigious Carnegie Mellon University. And, just recently, they published new research that could serve as a blueprint for developing and demonstrating machine consciousness.

This paper, titled “A Theoretical Computer Science Perspective on Consciousness,” may just be a pre-print paper, but even if it crashes and burns on peer review (it almost certainly won’t), it will always hold an incredible distinction in the world of theoretical computer science.

The Blums are joined by a third collaborator: Avrim Blum, their son. According to the paper:

All three Blums received their PhDs from MIT and spent a total of 65 wonderful years on the faculty of CMU’s Computer Science Department. Currently, the two eldest are Emeritus and the youngest is Chief Academic Officer at TTI Chicago, a PhD computer science research institute focusing on the fields of machine learning, algorithms, AI (robotics, natural language, speech, and vision), data science, and computational biology, located on the University of Chicago campus.

This is their first joint paper.

Hats off to the Blums; there can’t be many families of theoretical computer scientists working at the forefront of machine consciousness research. I’m curious what the family pet looks like.

Let’s dig into the paper, shall we? It’s fascinating, well-explained, hardcore research that could very well change some perspectives on machine consciousness.

Per the paper:

Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called Conscious AI. We define CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain or cognition, but a simple model of (the admittedly complex concept of) consciousness.

In this context, a CTM is essentially any machine capable of demonstrating consciousness. The big idea here isn’t necessarily to develop a thinking robot, but to demonstrate the basic concepts of consciousness in the hope that we’ll gain a better understanding of our own.

That requires reducing consciousness to something that can be expressed in mathematical terms. But it’s a little more complicated than just measuring brain waves. Here’s how the Blums put it:

A major goal is to determine if the CTM can actually feel feelings and not just fake them. We specifically study feelings of pain and pleasure and suggest ways to generate those feelings. We argue that even a complete knowledge of the brain’s circuitry, including the neural correlates of consciousness, cannot explain what enables the brain to generate a conscious experience such as pain.

We offer an explanation that works just as well for robots with brains of silicon and gold as it does for animals with brains of flesh and blood. Our thesis is that in the CTM, it is the architecture of the system; its basic processors; its expressive inner language, which we call Brainish; and its dynamics (prediction, competition, feedback, and learning) that make it conscious.
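To make that thesis a little more concrete, here’s a minimal Python sketch of the ingredients the quote names: basic processors, an inner language, and dynamics of prediction, competition, feedback, and learning. To be clear, every class, field, and number below is an illustrative stand-in of mine, not the paper’s formal construction.

```python
from dataclasses import dataclass

@dataclass
class BrainishMessage:
    """An expression in the machine's inner language (a stand-in for Brainish)."""
    content: str
    intensity: float  # how strongly the processor asserts it

class Processor:
    """A basic processor that predicts, gets feedback, and learns."""
    def __init__(self, name: str):
        self.name = name
        self.weight = 1.0  # confidence learned from past feedback

    def predict(self, stimulus: str) -> BrainishMessage:
        return BrainishMessage(f"{self.name} reports: {stimulus}", self.weight)

    def learn(self, was_correct: bool) -> None:
        # Feedback nudges how loudly this processor speaks next time.
        self.weight *= 1.1 if was_correct else 0.9

# Competition: the most intense message gets the machine's attention.
vision, pain = Processor("vision"), Processor("pain")
pain.learn(was_correct=True)  # past feedback raises pain's weight
messages = [p.predict("hand near stove") for p in (vision, pain)]
winner = max(messages, key=lambda m: m.intensity)
print("attended to:", winner.content)  # pain now out-competes vision
```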

Defining consciousness is only half the battle, and one that likely won’t be won until after we’ve aped it. The other half of the equation is observing and measuring consciousness. We can watch a puppy react to a stimulus. Even plant consciousness can arguably be observed. But for a machine to demonstrate consciousness, its observers must be certain that it isn’t simply imitating consciousness through clever mimicry.

Let’s not forget that GPT-3 can dazzle even the most cynical minds with its eerie ability to sound convincing, cohesive, and poignant (and let’s also not forget that you have to hit “generate new text” multiple times to get it to do so, because most of what it spits out is garbage).

The Blums get around this problem by devising a system meant only to demonstrate its consciousness. It won’t try to act human or convince you that it can think. It’s not an art project. Instead, it works much like a digital hourglass where every grain of sand is a piece of information.

The machine sends and receives information in the form of “chunks” of simple information. Multiple chunks may compete for mental bandwidth, but only one chunk is processed at a time. And, perhaps more importantly, there’s a delay before the next chunk is sent. This allows the chunks to compete, with the strongest, i.e. the most important, usually winning.
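If you want a feel for that competition, here’s a minimal sketch, assuming a simple priority queue over chunk “intensity.” The chunk names and numbers are made up for illustration, and the paper’s actual competition mechanism is more elaborate than this.

```python
import heapq

def compete(chunks):
    """Run one round: all pending chunks compete, one winner is processed."""
    # Python's heapq is a min-heap, so negate intensity for max-first order.
    heap = [(-intensity, label) for label, intensity in chunks]
    heapq.heapify(heap)
    neg_intensity, winner = heapq.heappop(heap)
    return winner, -neg_intensity

# The delay between rounds is what gives chunks time to pile up and compete.
pending = [("hungry", 0.3), ("stubbed toe", 0.9), ("curious noise", 0.5)]
winner, strength = compete(pending)
print(f"chunk reaching the stage this round: {winner} ({strength})")
# -> "stubbed toe" wins: the strongest chunk takes the single processing slot.
```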

The winning chunks form the machine’s stream of consciousness. This allows the machine to demonstrate adherence to a theory of time and to experience the machine equivalent of pain and pleasure. According to the researchers, competing chunks would carry more weight if the information they held indicated that the machine was in extreme pain:

Less extreme pain and chronic pain don’t stop other chunks from reaching the stage so much as they make it “harder” for them to get there. In the deterministic CTM, the difficulty for a chunk to enter STM is measured by how much greater the chunk’s intensity would have to be for it to enter STM. In the probabilistic CTM, the difficulty is measured by how much greater the chunk’s intensity would have to be to obtain a “suitably larger” share of time in STM.
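Here’s one way to read that distinction in code, a hedged sketch assuming made-up intensity values: in the deterministic case, a weaker chunk’s deficit measures how much “harder” entry into STM is, while in the probabilistic case, each chunk gets STM time roughly in proportion to its intensity.

```python
import random

chunks = {"extreme pain": 5.0, "chronic ache": 2.0, "idle thought": 1.0}

# Deterministic CTM: only the most intense chunk enters STM; a weaker
# chunk's deficit measures how much stronger it would need to become.
winner = max(chunks, key=chunks.get)
for name, intensity in chunks.items():
    deficit = chunks[winner] - intensity
    print(f"{name}: needs +{deficit:.1f} intensity to enter STM")

# Probabilistic CTM: time share in STM is proportional to intensity,
# so extreme pain dominates without entirely shutting the others out.
total = sum(chunks.values())
shares = {name: i / total for name, i in chunks.items()}
print("expected time share in STM:", shares)
sample = random.choices(list(chunks), weights=list(chunks.values()), k=10)
print("10 simulated STM slots:", sample)
```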

A machine programmed with such a stream of consciousness would effectively see most of its processing power (mental bandwidth) absorbed by extreme amounts of pain. This, in theory, could motivate it to repair itself or deal with whatever is threatening it.

But before we get there, we’ll need to determine whether reverse-engineering the idea of consciousness into the equivalent of high-stakes reinforcement learning is a viable proxy for being alive.

You can read the entire paper here.

For more on robot brains, check out Neural’s optimistic speculation on machine sentience in our latest series here.

Published November 23, 2020 – 18:32 UTC



