The Clay Golem, the Turing Test and SkyNet: thus was born artificial intelligence




Artificial intelligence is a bit like chocolate: tasty and sweet, bitter at times, and dangerous in large quantities. At least, that is how it is viewed today. Probably for this reason there is talk of control over the development of AI systems, especially those that will someday possess a certain degree of independence, making decisions based on data and their own "experience". Earlier, people were more interested in the idea of creating AI in principle: is it possible at all, and what problems would it run into? Onliner.by, in partnership with Huawei, looks into the history of the emergence of artificial intelligence.


Mythology and Artificial Intelligence

The idea of a rational, thinking creature brought into being not by nature but by man occurred to people long before our time, and some may even have imagined how it could be done. Probably. Ancient philosophers puzzled over what the spirit is and what properties characterize it. Incredible creatures appeared in myths and tales: the bronze Talos, Galatea animated by Aphrodite, the clay Golem and many others. They were usually conceived as assistants to a single person or executors of "top-level" instructions. Some of these creatures possessed only the simplest skills and limited abilities, carrying out direct orders. The tendency to identify AI with a human being has always been present: most often a highly developed "computer" is depicted as a two-legged, two-armed creature with a head and the features of the "crown of nature", man. That, at least, is the case in myths, fairy tales and other works of fantasy.

Much later came the Tin Woodman, and somewhere in the 1920s, Karel Čapek's R.U.R. The birth of the Czech "robot" is sometimes called the beginning of the AI era, though that is probably a stretch. Ramon Llull, back in the thirteenth century, is considered one of the pioneers of combinatorics; William of Ockham (or rather the principle named in his honor, "Occam's razor") and Al-Khwarizmi are also mentioned. Thanks to the latter we have algebra and the concept of the algorithm, both of which would prove useful later. At that stage the research was purely theoretical, and a great deal of time would pass before real working samples appeared.

The first programmable machine

Theory, through the efforts of many, kept looking for a way into practice. At the beginning of the 19th century the French inventor Joseph Marie Jacquard presented the world's first programmable machine. No, it was not a computer, but a loom controlled by punched cards. It seems primitive today, but back then... The machine did not shine with intelligence, yet its very ability to perform different tasks was perceived as a miracle.

Philosophy receded into the background; mathematics and the other exact sciences took over, though the field still lacked a definite shape. It began to acquire one in the 1920s and 1930s and then only gained momentum. In 1943 the term "cybernetics" (in relation to computing machines) appeared, and in 1948 Norbert Wiener, named a founder of the theory of artificial intelligence, published "Cybernetics: Or Control and Communication in the Animal and the Machine". In it the author discusses problems of behavior and reproduction in control and information systems, both in the living world and in technology.

As so often before, war spurred the development of science and technology. After the launch in 1941 of Konrad Zuse's Z3, the first fully automatic, programmable computer, only a short time passed before the British vacuum-tube Colossus was decoding Nazi messages and the American ENIAC, also built for military needs, entered service.

The Three Laws of Robotics and the Turing Test

In the 1950s, "robot" and "artificial intelligence" were almost synonyms, at least for ordinary people. Isaac Asimov published the short-story collection "I, Robot", gradually formulating the legendary Three Laws of Robotics, which were meant to define the rules of behavior for artificial beings or, more precisely, for their intellect. The laws limit an AI's activity and at the same time impose rules for interacting with humans, so that nothing resembling SkyNet can appear.

In the same period (in 1950) Alan Turing's article "Computing Machinery and Intelligence" appeared, introducing the notion of the "Turing test". With its help one could find out whether a machine possesses intelligence or, more precisely, is able to think. All that remained was to build such a machine.

There are three main versions of the Turing test. In two of them, united under the name "imitation game", a computer and a person both answer the interrogator's questions. The interrogator's goal is to determine who is behind which answers: a man or a woman (not a computer or a person). The computer's goal is to confuse the questioner and lead him to the wrong choice. The two versions differ in their conditions: the human respondent either helps the examiner or hinders him from making the right choice.

In the third version (the "standard interpretation") the computer no longer tries to confuse the examiner about gender. Here the questioner must determine who is behind the answers: the computer or the living being.

Over time the Turing test found both supporters and opponents: some deem it effective, others too simplistic. Attempts to pass it have been made more than once; one of the most famous is the story of the "13-year-old Ukrainian boy". In 2014, during a Turing test, the virtual Zhenya Gustman (Eugene Goostman) managed to deceive a third of the examiners, who took the "native of Odessa" for a human. The press immediately reported that Zhenya was the first AI to pass the test, but a number of experts disputed the claim, citing earlier results that were similar and even more successful.

The birth of AI

AI became an academic discipline in 1956, after the Dartmouth workshop: mathematicians and engineers proposed to discuss the conjecture that every aspect of learning and every property of intelligence can be described so precisely that a machine could simulate them. The more global goal was to evaluate the possibility of creating machines that command natural language and can perform tasks until then reserved for people.

John McCarthy was one of the organizers of the Dartmouth workshop. He later created the high-level programming language Lisp (or rather, a family of languages), the second oldest still in use after Fortran. Lisp became the main programming language for AI tasks and remains in the ranks to this day. It has been used to express ideas in natural language processing, text generation, automated planning and scheduling, automated theorem proving, computer vision and more.

Basic approaches to AI

Several basic approaches exist to its creation and programming, chiefly "top-down" and "bottom-up". They were proposed by Alan Turing back in the 1940s. The first involved building artificial intelligence on human "programs" and "algorithms" of behavior, which were to be instilled in the AI. Marvin Minsky was a supporter of this approach, which was considered the most promising during the Cold War.

The same scientist put considerable effort into developing the theory of AI. He was directly involved in organizing the Dartmouth workshop mentioned above, and in 1963 he founded the Artificial Intelligence Laboratory at MIT, a predecessor of today's CSAIL.

The advantages of the first approach? Its supporters argue that top-down is closer to ordinary teaching, in which models are passed on by transferring knowledge and patterns to the computer in the person's natural language. One can be sure that the machine will accept as truth what it is "told" and will not invent who-knows-what inside its neural networks. This is, of course, a crude analogy, but it highlights the distinctive feature of the approach: greater control over the formation of the AI, and hence its predictability.

The second approach, bottom-up (also known as connectionism), envisioned creating neural networks that simulate the work of brain cells and learn on their own. It is the opposite of the first approach: a set of simple "reactions" combine to solve complex problems. That kind of independence is sometimes called the weakest link of such systems, which are unpredictable and act "at their own discretion".
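As a rough modern illustration of the bottom-up idea, here is a minimal Python sketch of a single artificial neuron (a perceptron) that learns the logical AND function from examples rather than from human-written rules. The learning rate, initial weights and training data are arbitrary illustrative choices, not parameters of any historical system.

```python
# A minimal perceptron learning logical AND: an illustrative sketch of the
# "bottom-up" idea, not a reconstruction of any historical model.
import random

def step(x):
    return 1 if x >= 0 else 0

# Training data: (inputs, expected output) for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the data are enough here
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out  # the classic perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

for (x1, x2), _ in data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + bias))
```

Note that nobody writes the rule "output 1 only when both inputs are 1"; the network arrives at it by adjusting weights after each mistake, which is exactly the self-taught behavior, and the unpredictability, described above.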

Still, enthusiasm was running high at the time, and it was believed that within twenty years or so machines would handle the same tasks as a person. All the more so since in the 1950s the Logic Theorist appeared, sometimes called "the world's first AI program". It managed to prove 38 of the 52 theorems offered to it, finding several new proofs along the way. Yet the forecast did not come true, and periods of deep disappointment lay ahead.

The first "Ice Age" of AI and the period of disappointment

In the 1960s, American scientists and engineers had to slow down: expectations had run far ahead of the results achieved. Machine translation was a telling case; the idea had been pursued for a long time, but without much success. So in 1964 ALPAC (the Automatic Language Processing Advisory Committee) was assembled to check what heights computational linguistics had reached and whether it could be used for efficient translation.

The work consisted mainly of automatically translating technical documentation and scientific articles from Russian into English. After a few years ALPAC reached its conclusion: there was no sense in developing this direction further (it was slow, expensive and error-ridden). In other countries the work continued, and years later globalization would demand functional and inexpensive solutions anyway.

The next blow to AI, closer to the 1970s, was the decline of the theory of connectionism. It was consigned to oblivion, at least for a time and in its original form. Perhaps people had been frightened by the HAL 9000 of Arthur C. Clarke and Stanley Kubrick?

The first robot able to make decisions

Marvin Minsky refined the forecast, saying that a robot with an average level of intelligence would be created within 7-8 years. Meanwhile, thanks to the patronage of ARPA, the first machine capable of making its own decisions emerged from the depths of the labs: Shakey the Robot.

Its software was written in Lisp and used the STRIPS planner, which broke commands down into smaller ones according to the surrounding conditions. The task "push the block off the platform", for example, Shakey interpreted as a sequence: look around with the camera and sonar, identify the box, find a way up with the help of a ramp, push the object. A mechanism simple by modern standards, it gave the green light to the development of algorithms for finding the shortest path and avoiding obstacles, and for analyzing images by decomposing them into elements.
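One famous descendant of that shortest-path work is the A* algorithm, published in 1968 by researchers on the Shakey project. Below is a minimal Python rendering of A* on an invented grid map; Shakey's actual software was written in Lisp, so this sketch illustrates the idea rather than reproducing the original.

```python
# A minimal A* shortest-path search on a grid of 0 (free) / 1 (obstacle) cells.
# The map and costs are invented for illustration.
import heapq

def astar(grid, start, goal):
    def h(cell):  # Manhattan-distance heuristic: estimated cost to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimate, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> a route around the obstacle row
```

The priority queue always expands the cell whose cost-so-far plus straight-line estimate is smallest, which is why the search heads toward the goal instead of flooding the whole map.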

The fervor dimmed again, though it did not die out entirely. One of the reasons was the 1973 publication in the UK of the "Lighthill Report", which stated that no developments in the field of AI could be applied to solving real-world problems. The trouble lay in the so-called combinatorial explosion: a system chokes on the number of possible solutions, and the more variables come into play, the faster the number of options grows. The report's author suggested it would be best to concentrate on solutions for clearly defined, specific applications.
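A toy Python calculation shows how brutal the combinatorial explosion is: even the number of possible orderings of a handful of subtasks, one of the simplest things a naive planner might enumerate, grows factorially.

```python
# Illustration of the combinatorial explosion: possible orderings of n subtasks.
from math import factorial

for n in (5, 10, 15, 20):
    print(f"{n} subtasks -> {factorial(n):,} orderings")
# 20 subtasks already give ~2.4 quintillion orderings, hopeless to enumerate.
```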

In the late '70s and early '80s the computer company Digital Equipment Corporation (later absorbed by Compaq) put exactly this approach into practice: AI should have a narrow specialization rather than be a pro at everything. The result was the expert system XCON/R1, which gave the company's customers a computer configuration matched to their needs and budget. By 1986 XCON was saving DEC up to $40 million a year, and expert systems were living through their "golden age". They have been called the first truly effective form of AI software.
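The real XCON was written in OPS5 and held thousands of rules; the Python sketch below only illustrates the general style of such systems, if-then rules fired repeatedly until nothing new applies (forward chaining). The rules and component names here are invented for illustration.

```python
# A toy forward-chaining "expert system" in the spirit of XCON/R1.
# The rules and component names are invented; the real system used OPS5.
def configure(order):
    config = dict(order)  # start from the customer's stated needs
    rules = [
        # (condition, action) pairs: classic if-then production rules
        (lambda c: c.get("users", 0) > 50 and "cpu" not in c,
         lambda c: c.update(cpu="high-end")),
        (lambda c: c.get("cpu") == "high-end" and "memory" not in c,
         lambda c: c.update(memory="maximum")),
        (lambda c: "disk" not in c,
         lambda c: c.update(disk="standard")),
    ]
    changed = True
    while changed:  # keep firing rules until no rule adds anything new
        changed = False
        for condition, action in rules:
            if condition(config):
                action(config)
                changed = True
    return config

print(configure({"users": 80}))
# -> {'users': 80, 'cpu': 'high-end', 'memory': 'maximum', 'disk': 'standard'}
```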

The AI economic bubble

Looking back, one can only marvel at the ambitious plans. In the 1980s the Japanese allocated $850 million (more than $2.3 billion today, adjusted for inflation) to the development of "fifth-generation computers" (coming after the vacuum-tube, transistor, integrated-circuit and microprocessor generations). These machines were supposed to converse with people, translate effortlessly from language to language, understand what is shown in images and photos, work with huge data sets, perform parallel computations and think almost like humans. Prolog was chosen as the main programming language.

The Japanese idea impressed the United States, and ARPA (by then DARPA), which had recently cut funding for AI development, launched a new program for AI software and hardware. The United Kingdom did likewise. Admittedly, nobody managed to realize what had been envisioned, and artificial intelligence began to take on the features of an economic bubble: ordinary existing computers from the likes of IBM kept gaining performance while costing less than specialized Lisp machines.

Marvin Minsky and Roger Schank saw it coming: in 1984 they predicted the economic collapse of the new industry, and three years later the prediction came true.

The attitude toward AI became much more cautious, but the field was not forgotten: in the 1980s and 1990s researchers returned to the idea of connectionism, and the direction gained new supporters. The 1980s also saw the start of the "Cyc" project; 34 years have passed, and it is still active. "Cyc" is the ongoing construction of a vast knowledge base an AI would need in order to make human-like judgments when facing new situations. Its developers say the base has already found practical applications: as an advanced translator able to take the context and specifics of documents into account, as a "smart" database of terrorists, and in other areas. Unfortunately, these examples are either obsolete or, on the contrary, have yet to reach their full potential.

Toward the end of the 1990s, thanks to the next leap in computer performance, a thaw set in, and a new generation of supercomputers joined the game (literally and figuratively). New technological capabilities and past groundwork let engineers and mathematicians build tools and solutions that had been out of reach before: voice recognition systems, logistics, collection and processing of large amounts of data, medical diagnostics, machine learning and more. Much of this had existed earlier, but with incomparably lower efficiency.

Meanwhile AI blended into everyday life, and according to some experts its role was often overlooked simply because it had become so habitual. A kind of renaissance arrived: AI began to run cars, search engines, smartphones and missiles. But that is a story for next time.


We explored the field of artificial intelligence together with Huawei's experts. At IFA 2017 the company introduced the first mobile platform for AI, the Kirin 970 chipset, which later appeared in the Mate 10 Pro, P20 and P20 Pro smartphones with Leica cameras. The AI algorithms set the shooting parameters on their own for night scenes and 19 other categories, simulate portrait lighting, help with brightness, detail and color, and give advice on composing the photo. According to DxOMark, the P20 Pro has the best smartphone camera in the world.


The partner project was prepared with the support of LLC "Bel Huavei Technologies" UNP 190835312.

