Leading AI Scientists Debate Next Steps for AI in 2021



The 2010s have been huge for artificial intelligence, thanks to advancements in deep learning, a branch of AI that has become feasible due to the growing ability to collect, store and process large amounts of data. Today, deep learning is not only a subject of scientific research, but also a key part of many everyday applications.

But a decade of research and application has made it clear that, in its current state, deep learning is not the final answer to the ever-elusive challenge of creating human-level AI.

What do we need to take AI to the next level? More data and bigger neural networks? New deep learning algorithms? Approaches other than deep learning?

It’s a topic that has been hotly debated in the AI community and was the subject of an online discussion hosted by Montreal.AI last week. Titled “AI Debate 2: Moving AI Forward: An Interdisciplinary Approach,” the debate brought together scientists from various backgrounds and disciplines.

Hybrid artificial intelligence

Cognitive scientist Gary Marcus, who co-hosted the debate, reiterated some of deep learning’s major shortcomings, including its excessive data requirements, its poor ability to transfer knowledge to other domains, its opacity, and its lack of reasoning and knowledge representation.

Marcus, an outspoken critic of deep-learning-only approaches, published a paper in early 2020 in which he proposed a hybrid approach that combines learning algorithms with rule-based software.

Other speakers also highlighted hybrid artificial intelligence as a possible solution to the challenges facing deep learning.

“One of the key questions is to identify the building blocks of AI and how to make AI more reliable, explainable and interpretable,” said computer scientist Luis Lamb.

Lamb, co-author of the book Neural-Symbolic Cognitive Reasoning, proposed a foundational approach to neural-symbolic AI that is based on both logical formalization and machine learning.

“We use logic and knowledge representation to represent the reasoning process that is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery,” Lamb said.
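
To make the idea concrete, here is a minimal sketch of what such a hybrid might look like in Python. It is purely illustrative and not drawn from Lamb’s work: a stand-in “neural” scorer proposes candidate facts, and hand-written symbolic rules veto combinations that violate known constraints.

    # A minimal neuro-symbolic sketch (hypothetical, for illustration only):
    # a learned scorer proposes facts, a symbolic rule layer filters them.

    def neural_scorer(candidate: str) -> float:
        """Stand-in for a trained network that scores candidate facts."""
        scores = {"bird(tweety)": 0.9, "penguin(tweety)": 0.8, "flies(tweety)": 0.7}
        return scores.get(candidate, 0.0)

    RULES = [
        # Symbolic knowledge: penguins do not fly.
        lambda facts: not ({"penguin(tweety)", "flies(tweety)"} <= facts),
    ]

    def infer(candidates, threshold=0.5):
        facts = set()
        # Admit high-scoring facts, most confident first...
        for c in sorted(candidates, key=neural_scorer, reverse=True):
            if neural_scorer(c) < threshold:
                continue
            trial = facts | {c}
            # ...but only if every symbolic rule stays satisfied.
            if all(rule(trial) for rule in RULES):
                facts = trial
        return facts

    print(infer(["bird(tweety)", "flies(tweety)", "penguin(tweety)"]))
    # {'bird(tweety)', 'penguin(tweety)'} -- the rule vetoes flies(tweety)

In this toy version the logic merely filters the network’s output; in the neural-symbolic approach Lamb describes, the integration runs deeper, with logical structure shaping the learning itself.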

Inspiration from evolution

Fei-Fei Li, professor of computer science at Stanford University and former chief AI scientist at Google Cloud, pointed out that in the history of evolution, vision has been one of the main catalysts for the emergence of intelligence in living beings. Likewise, work on image classification and computer vision helped spark the deep learning revolution of the past decade. Li is the creator of ImageNet, a dataset of millions of labeled images used to train and evaluate computer vision systems.

“As scientists, we ask ourselves, what is the next north star?” Li said. “There is more than one. I was extremely inspired by evolution and development.”

Li pointed out that intelligence in humans and animals emerges from active perception and interaction with the world, a property that is sorely lacking in current AI systems, which rely on data curated and labeled by humans.

“There is a fundamentally critical loop between perception and actuation that leads to learning, understanding, planning and reasoning. And this loop can be best achieved when our AI agent can be embodied, can choose between exploration and exploitation actions, is multimodal, multitasking, generalizable and often social,” she said.
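
The exploration-exploitation choice Li mentions can be illustrated in a few lines. Below is a minimal, purely hypothetical Python sketch of an agent in a two-action world that occasionally tries a random action (exploration) and otherwise repeats the best action found so far (exploitation):

    import random

    # Toy action loop (illustrative only): the agent acts, observes a
    # reward, and updates its estimates -- sometimes exploring at random,
    # sometimes exploiting the best-looking action.
    TRUE_REWARD = {"left": 0.3, "right": 0.7}   # hidden from the agent
    estimates = {a: 0.0 for a in TRUE_REWARD}
    counts = {a: 0 for a in TRUE_REWARD}
    epsilon = 0.1                               # fraction of steps spent exploring

    for step in range(1000):
        if random.random() < epsilon:
            action = random.choice(list(estimates))     # explore
        else:
            action = max(estimates, key=estimates.get)  # exploit
        reward = random.random() < TRUE_REWARD[action]  # noisy 0/1 feedback
        counts[action] += 1
        # Incremental mean: the estimate drifts toward the true payoff.
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # the estimate for "right" should approach 0.7

An embodied agent of the kind Li describes faces this same trade-off, only over perceptions and actions vastly richer than these two labels.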

In his Stanford lab, Li is currently working on creating interactive agents that use perception and actuation to understand the world.

OpenAI researcher Ken Stanley also discussed lessons learned from evolution. “There are properties of evolution in nature that are so profoundly powerful that they are not yet explained by any algorithm, because we cannot create phenomena like what was created in nature,” Stanley said. “These are properties that we should continue to research and understand, and they are properties not only in evolution but also within ourselves.”

Reinforcement learning

Computer scientist Richard Sutton pointed out that, for the most part, AI work lacks a “computational theory,” a term coined by neuroscientist David Marr, renowned for his work on vision. Computational theory defines the goal sought by an information processing system and why it pursues this goal.

“In neuroscience, we lack a high-level understanding of the purpose and goals of the mind in general. This is also true in artificial intelligence – perhaps more surprisingly so in AI. There is very little computational theory in Marr’s sense in AI,” Sutton said. He added that textbooks often define AI simply as “getting machines to do what people do” and that most of today’s conversations about AI, including the debate between neural networks and symbolic systems, concern “how you do something, as if we already understood what it is that we are trying to do.”

“Reinforcement learning is the first computational theory of intelligence,” Sutton said, referring to the branch of AI in which agents are given the ground rules of an environment and must discover ways to maximize their reward. “Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To that end, the agent must compute a policy, a value function, and a generative model,” Sutton said.
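
As a concrete, minimal instance of that framing (a toy of our own construction, not code from Sutton), the tabular Q-learning sketch below makes the objective explicit: the only supervision is a reward signal, the agent learns a value function over state-action pairs, and its policy is read off from those values.

    import random

    # Tabular Q-learning on a toy corridor: states 0..4, reward 1.0 for
    # reaching state 4. Illustrative sketch of the RL framing only.
    N_STATES, GOAL = 5, 4
    ACTIONS = (-1, +1)                      # step left / step right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

    def step(s, a):
        s2 = min(max(s + a, 0), N_STATES - 1)
        return s2, 1.0 if s2 == GOAL else 0.0   # next state, reward signal

    for episode in range(200):
        s = 0
        while s != GOAL:
            # Policy: mostly greedy with respect to the value function Q.
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r = step(s, a)
            # Update the value estimate toward reward + discounted future value.
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # The learned policy: the best action in each non-goal state (+1, i.e. right).
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})

Note that this sketch learns a policy and a value function but no generative model; the fuller, model-based framing Sutton describes adds that third component.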

He added that the field needs to further develop an agreed-upon computational theory of intelligence, and said reinforcement learning is currently the leading candidate, though he acknowledged that other candidates might be worth exploring.

Sutton is a pioneer of reinforcement learning and the author of a seminal textbook on the subject. DeepMind, the AI lab where he works, is deeply invested in “deep reinforcement learning,” a variant of the technique that integrates neural networks into core reinforcement learning methods. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess, and StarCraft II.
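
The “deep” part is essentially a substitution: a neural network replaces the table of values so the agent can generalize across states it has never seen. Here is a minimal sketch of that substitution, assuming PyTorch and dummy tensors in place of a real environment (production systems add replay buffers, target networks, search, and much more):

    import torch
    import torch.nn as nn

    # A small network stands in for the Q-table: observations in, one
    # value estimate per action out. Hypothetical sizes, for illustration.
    n_obs, n_actions = 4, 2
    q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma = 0.99

    def td_update(obs, action, reward, next_obs, done):
        """One temporal-difference update on a single transition."""
        q_pred = q_net(obs)[action]                 # current value estimate
        with torch.no_grad():
            target = reward + (0.0 if done else gamma * q_net(next_obs).max())
        loss = (q_pred - target) ** 2               # squared TD error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy transition standing in for a real environment step:
    print(td_update(torch.randn(n_obs), 0, 1.0, torch.randn(n_obs), False))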

While reinforcement learning has striking similarities to the learning mechanisms of human and animal brains, it also suffers from the same challenges that plague deep learning. Reinforcement learning models require extensive training to learn even the simplest things and are strictly limited to the narrow domain they are trained on. For the moment, developing deep reinforcement learning models requires very expensive computational resources, which limits research in the field to companies with deep pockets such as Google, which owns DeepMind, and Microsoft, a major investor in and close partner of OpenAI.

Integrating world knowledge and common sense into AI

Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most effective use of the data they are given.

“I think we should build systems that combine world knowledge and data,” Pearl said, adding that AI systems based solely on the indiscriminate collection and processing of large volumes of data are doomed to failure.

Knowledge doesn’t emerge from data, Pearl said. Instead, we use the innate structures of our brains to interact with the world, and we use data to interrogate and learn about the world, as seen in newborns, who learn a lot without being explicitly instructed.
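
Pearl’s own formalism, the structural causal model, makes the point concrete: the causal structure is supplied up front, and data only fills in the probability tables. In the toy Python sketch below (all numbers invented for illustration), a confounder Z influences both a treatment X and an outcome Y, and the assumed structure lets the system answer an interventional question that raw correlations would get wrong:

    # Toy structural causal model: Z -> X, Z -> Y, X -> Y.
    # The structure is assumed; data would only estimate these tables.
    P_Z = {0: 0.5, 1: 0.5}                       # P(Z)
    P_X1_given_Z = {0: 0.1, 1: 0.9}              # P(X=1 | Z)
    P_Y1_given_XZ = {(0, 0): 0.2, (0, 1): 0.4,   # P(Y=1 | X, Z)
                     (1, 0): 0.3, (1, 1): 0.5}

    # Observational P(Y=1 | X=1): confounded, since Z drives both X and Y.
    num = sum(P_Z[z] * P_X1_given_Z[z] * P_Y1_given_XZ[(1, z)] for z in (0, 1))
    den = sum(P_Z[z] * P_X1_given_Z[z] for z in (0, 1))
    print("P(Y=1 | X=1)     =", round(num / den, 3))   # 0.48

    # Interventional P(Y=1 | do(X=1)) via backdoor adjustment: the assumed
    # structure licenses averaging over Z with its marginal distribution.
    do = sum(P_Z[z] * P_Y1_given_XZ[(1, z)] for z in (0, 1))
    print("P(Y=1 | do(X=1)) =", round(do, 3))          # 0.40

The two numbers differ because observation mixes the effect of X with the influence of the confounder; no amount of additional data from the same source closes that gap without the structural assumption.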

“This type of structure must be implemented external to the data. Even if we miraculously learn this structure from data, we still need to have it in a form that is communicable to humans,” Pearl said.

University of Washington professor Yejin Choi also highlighted the importance of common sense and the challenges its absence poses for today’s AI systems, which focus on mapping input data to outcomes.

“We know how to solve a dataset without solving the underlying task with deep learning today,” Choi said. “This is due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”

Choi also pointed out that the space of reasoning is infinite, and that reasoning itself is a generative task, very different from the categorization tasks that today’s deep learning algorithms and evaluation benchmarks are suited to. “We never enumerate very much. We just reason on the fly, and that will be one of the key fundamental intellectual challenges going forward,” Choi said.

But how do you achieve common sense and reasoning in AI? Choi proposed a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and building benchmarks that go beyond categorization.

We still don’t know the full path to common sense, Choi said, adding: “But one thing is for sure: we can’t just get there by making the tallest building in the world taller. Therefore, GPT-4, -5, or -6 may not cut it.”
