The basics of modern AI – how does it work and will it destroy society this year?




You do not have to be Keir Dullea to know that artificial intelligence can be daunting.

George Rinhart / Corbis via Getty Images

Artificial intelligence, or AI, is huge right now. "Intractable" problems are being solved, billions of dollars are being invested, and Microsoft is even using spoken-word poetry to tell you how good its AI is. Yuck.

As with any hot new technology, it can be hard to cut through the hype. I spent years doing research on robotics, UAVs, and AI, and I have still struggled to keep up. In recent years, I have spent a lot of time trying to learn enough to answer even some of the most basic questions, such as:

  • What are we talking about when we talk about AI?
  • What is the difference between AI, machine learning and deep learning?
  • What is the point of deep learning?
  • What kinds of once-difficult problems can now be solved easily, and what is still hard?

I know I'm not the only one asking these things. So, if you've been wondering what all the AI excitement is about at the most basic level, it's time to take a look behind the curtain. If you are an AI expert who reads NIPS papers for fun, there won't be much new for you here, but we all look forward to your clarifications and corrections in the comments.

What is AI?

There is an old computer science joke: what is the difference between AI and automation? Well, automation is what we can do with computers, and AI is what we wish we could do. As soon as we figure out how to do something, it stops being AI and starts being automation.

This joke exists because, even today, artificial intelligence is not well defined. "AI" simply isn't a technical term. If you look it up on Wikipedia, AI is "intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals." That's about as vague as it gets.

Generally speaking, there are two kinds of AI: strong AI and weak AI. Strong AI is what most people probably think of when they hear "artificial intelligence": an omniscient, god-like intelligence, such as Skynet or HAL 9000, capable of general-purpose reasoning and human-like intelligence while exceeding human capabilities.

Weak AIs are highly specialized algorithms designed to answer specific, useful questions in narrowly defined problem domains. A very good chess-playing program fits this category, as does software that sets insurance premiums very precisely. These AIs are impressive in their own way, but they are very limited in scope.

Hollywood aside, we have nothing like a strong AI today. At present, every AI in existence is a weak AI, and most researchers in the field agree that the techniques we have developed to create our very good weak AIs probably won't lead us to a strong AI.

So, AI is currently more of a marketing term than a technical one. The reason companies tout their "AI" rather than their "automation" is that they want to invoke the image of Hollywood AIs in the public mind. But… it's not entirely bogus. If we're being charitable, companies may simply be trying to say that, while we are a long way from strong AI, today's weak AIs are considerably more capable than those of just a few years ago.

And marketing instincts aside, that's true. In some domains, machine capabilities really have improved dramatically, largely because of the other two buzzwords you hear a lot: machine learning and deep learning.

A frame from a short video posted by Facebook engineers demonstrating real-time recognition of cat images by AI (aka the Holy Grail of the Internet).

Machine learning

Machine learning is one particular way of creating machine intelligence. Say you wanted to launch a rocket and predict where it will go. In the grand scheme of things, that's not so difficult: gravity is fairly well understood, so you can write down the equations and work out where the rocket will end up from a few variables such as its velocity and starting position.
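To make the contrast concrete, here is a minimal sketch of the "rules are known" case: the textbook projectile-range formula coded up directly, ignoring air resistance. The function name and the specific numbers are purely illustrative.

```python
import math

def landing_distance(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal range of an ideal projectile launched from flat ground.

    The rules are known in advance, so we just write them down:
    range = v^2 * sin(2 * angle) / g (no air resistance assumed).
    """
    angle = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * angle) / g

# Launch at 100 m/s and 45 degrees: lands roughly a kilometer away.
print(landing_distance(100.0, 45.0))
```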

Things get difficult when you are dealing with problems where the rules are not so clear and well known. Suppose you want a computer to look at pictures and tell you whether any of them show a cat. How do you write rules describing what every possible combination of whiskers and cat ears looks like from every possible angle?

The machine learning approach is now well known: instead of trying to write the rules yourself, you build a system that can work out its own set of internal rules from many examples. Instead of trying to describe cats, you simply show your AI lots of pictures of cats and let it figure out for itself what is and isn't a cat.
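To give a sense of the shape of that workflow, here is a minimal sketch using scikit-learn's off-the-shelf LogisticRegression. The two-number "image features" and the labels are completely made up for illustration; the point is only that the decision rule comes from labelled examples rather than from hand-written code.

```python
# Instead of hand-writing rules, we hand the algorithm labelled examples
# and let it fit its own decision rule. The features and labels below are
# invented purely for illustration.
from sklearn.linear_model import LogisticRegression

features = [
    [0.9, 0.1],  # made-up feature vector for a cat photo
    [0.8, 0.2],
    [0.2, 0.9],  # made-up feature vector for a non-cat photo
    [0.1, 0.8],
]
labels = [1, 1, 0, 0]  # 1 = cat, 0 = not a cat

model = LogisticRegression().fit(features, labels)   # the "learning" happens here
print(model.predict([[0.85, 0.15]]))                 # expected: [1], i.e. "cat"
```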

This is perfect for our current world. A system that learns its own rules from data can be improved just by adding more data, and if there's one thing we've gotten really good at as a species, it's generating, storing, and managing lots of data. Want to get better at recognizing cats? The Internet is generating millions of examples right now.

This ever-rising tide of data is part of why machine learning algorithms have taken off. The other part is about how we actually use the data.

With machine learning, beyond the data itself, two related questions arise:

  • How do I remember what I have learned? How, on a computer, do I store and represent the relationships and rules I have extracted from the example data?
  • How do I learn? How do I modify that stored representation in response to new examples, and get better?

In other words, what is the thing that actually does the learning from all this data?

In machine learning, the computer representation of the learning you store is called the model. The type of model you use has enormous consequences: it determines how your AI learns, what kinds of data it can learn from, and what kinds of questions you can ask of it.
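Stripped to its essentials, a "model" is just some stored parameters plus a rule for turning an input into a prediction. The toy class below, with placeholder parameter values, is only meant to illustrate that idea; it is not any particular library's API.

```python
# A "model" in the simplest possible terms: stored parameters plus a rule
# for turning an input into a prediction. The parameter values here are
# placeholders standing in for whatever training would produce.
from dataclasses import dataclass

@dataclass
class LineModel:
    slope: float      # learned from data
    intercept: float  # learned from data

    def predict(self, x: float) -> float:
        # "Asking the model a question" is just evaluating y = m*x + b.
        return self.slope * x + self.intercept

model = LineModel(slope=2.0, intercept=1.0)  # pretend these came from training
print(model.predict(3.0))                    # -> 7.0
```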

Let's look at a very simple example to see what I mean. Suppose we are buying figs at the grocery store, and we want to build an AI that tells us when they are ripe. This should be easy enough, because with figs it basically comes down to: the softer they are, the sweeter they are.

We could pick samples of ripe and unripe fruit, measure how soft and how sweet they are, plot them on a chart, and fit a line to the points. That line is our model.
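Here is a minimal sketch of that line fit, with invented softness and sweetness numbers, using NumPy's polyfit; the fitted slope and intercept are, literally, the model.

```python
# A minimal sketch of the line fit described above; the numbers are invented.
# "softness" is a made-up squeeze score, "sweetness" a made-up taste score.
import numpy as np

softness  = np.array([1.0, 2.0, 3.5, 5.0, 6.0, 7.0])   # how much each fig gave when squeezed
sweetness = np.array([1.5, 2.5, 4.0, 5.5, 6.5, 7.0])   # how sweet it actually tasted

slope, intercept = np.polyfit(softness, sweetness, deg=1)   # the fitted line is our model

def predicted_sweetness(s: float) -> float:
    """Squeeze a new fig, feed in its softness, get a sweetness prediction."""
    return slope * s + intercept

print(predicted_sweetness(4.0))
```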

Look at that! The line implicitly captures the idea of "the softer it is, the sweeter it is" without us ever having to write it down. Our baby AI knows nothing about sugar content or fruit ripening, yet it can predict how sweet a fig is just from how it feels when squeezed.

How do we train our model to make it better? We collect more samples and re-fit the line for more accurate predictions (as we did in the second image above).

The problems become obvious immediately. We trained our fig AI on nice grocery-store figs, but what happens if we drop it into a fig orchard? Suddenly there is not only ripe fruit, there is also rotten fruit. It is very soft, but it is definitely not good to eat.

So what do we do? Well, it's a machine learning model, so we can just feed it new data, right?

As the first image below shows, in this case we would get a completely nonsensical result. A line is simply not a good way to capture what happens once fruit goes past ripe and starts to rot. Our model no longer matches the underlying structure of the data.

Instead, we need to make a change and use a better, more complex model. Maybe a parabola or something similar would do. This change makes training more complicated, because fitting these curves requires more complicated calculations than fitting a line.
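Continuing the sketch with more invented numbers: once "quality" rises and then crashes as the fruit goes from ripe to rotten, a straight line can no longer follow the data, but a second-degree polynomial (a parabola) can.

```python
# Invented data where eating quality rises as the fruit softens, then
# crashes once it is overripe. A line misses the crash; a parabola doesn't.
import numpy as np

softness = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
quality  = np.array([2.0, 4.0, 6.0, 7.5, 7.0, 5.0, 2.5, 0.5])   # drops off when rotten

line     = np.polyfit(softness, quality, deg=1)   # the old model
parabola = np.polyfit(softness, quality, deg=2)   # the better, more complex model

print(np.polyval(line,     7.5))   # the line over-estimates an overripe fig
print(np.polyval(parabola, 7.5))   # the parabola tracks the drop-off much better
```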

This is a pretty silly example, but it shows how the type of model you choose determines what you can learn. With figs, the data is simple, so your models can be simple. But if you are trying to learn something more complex, you need more complex models. Just as no amount of data would let the straight-line model capture the behavior of rotten fruit, there is no simple curve you could fit to a pile of images that would give you a computer vision algorithm.

The challenge of machine learning, then, is to build and choose the right models for the right problems. We need models sophisticated enough to capture genuinely complicated relationships and structure, yet simple enough that we can work with them and train them. So even though the Internet, smartphones, and the rest have made huge amounts of data available, we still need the right models to take advantage of it.

And that's where deep learning comes in.

