Squirrel AI Learning participated in the 2019 AI & Big Data International Fair in London




The themes of the exhibition were data analysis and artificial intelligence, AI and the Internet of Things, solutions for big data companies, and AI technology solutions.

Dr. Wei Cui of Squirrel AI Learning: How AI provides affordable, individualized education to every family in China

Dr. Wei Cui, chief scientist of Squirrel AI Learning, presented the AI adaptive learning system independently developed by Squirrel AI Learning. The system continuously monitors and evaluates students' individual abilities, identifies their learning weaknesses and lets them progress at their own pace. It provides optimized learning plans and real-time advice to maximize the effectiveness of learning and improve students' ability to acquire knowledge and skills.

For years, the lack of experienced teachers and geographical constraints have hindered the spread of quality education in schools across China. Squirrel AI hopes to train "super teachers" through artificial intelligence and provide personalized education to students.

In his speech, Dr. Wei Cui introduced the core technologies used to build the system and the benchmarking experiments conducted with it. The adaptive learning engine of Squirrel AI Learning has a three-layer architecture: the ontology layer, the algorithm layer and the interaction system. The ontology layer is content-based and incorporates learning maps and knowledge maps. Squirrel AI Learning has independently developed technology for disassembling knowledge points at the "super-nano" level, which makes it possible to determine more precisely which knowledge points students are expected to master. Taking junior high school mathematics as an example, Squirrel AI Learning can break the roughly 300 knowledge points down into 30,000.
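
To make the idea of a fine-grained knowledge map concrete, here is a minimal sketch of how such a decomposition might be represented as a graph of knowledge points with prerequisite links. The class, identifiers and example topics are hypothetical illustrations, not Squirrel AI Learning's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgePoint:
    """One fine-grained ("nano-level") knowledge point in the ontology layer."""
    kp_id: str
    name: str
    parent: str | None = None                               # coarser point it was split from
    prerequisites: list[str] = field(default_factory=list)  # kp_ids that must be mastered first

# A coarse junior-high topic split into finer points (illustrative only).
knowledge_map = {
    "eq.linear": KnowledgePoint("eq.linear", "Linear equations in one variable"),
    "eq.linear.combine": KnowledgePoint(
        "eq.linear.combine", "Combining like terms", parent="eq.linear"),
    "eq.linear.isolate": KnowledgePoint(
        "eq.linear.isolate", "Isolating the variable", parent="eq.linear",
        prerequisites=["eq.linear.combine"]),
}

def learnable_now(mastered: set[str]) -> list[str]:
    """Knowledge points whose prerequisites are all mastered but which are not yet mastered."""
    return [kp.kp_id for kp in knowledge_map.values()
            if kp.kp_id not in mastered and all(p in mastered for p in kp.prerequisites)]

print(learnable_now({"eq.linear.combine"}))  # -> ['eq.linear', 'eq.linear.isolate']
```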

The algorithm layer includes the content recommendation engine, the user portrait engine and the student target management engine. Based on the user status assessment engine and the knowledge recommendation engine, Squirrel AI Learning builds a data model that identifies each student's knowledge gaps accurately and efficiently, and then recommends educational content that targets those gaps.
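
As a rough illustration of the gap-detection-and-recommendation idea, the sketch below estimates per-knowledge-point mastery from a student's recent answers and recommends content for the weakest points. The threshold, function names and sample data are assumptions for illustration only, not the engine's actual algorithm.

```python
from collections import defaultdict

def estimate_mastery(answer_log):
    """Estimate per-knowledge-point mastery as the fraction of recent answers that were correct."""
    counts = defaultdict(lambda: [0, 0])          # kp_id -> [correct, total]
    for kp_id, correct in answer_log:
        counts[kp_id][0] += int(correct)
        counts[kp_id][1] += 1
    return {kp: c / t for kp, (c, t) in counts.items()}

def recommend(answer_log, content_index, threshold=0.7, k=3):
    """Recommend items that target the student's weakest knowledge points."""
    mastery = estimate_mastery(answer_log)
    gaps = sorted((m, kp) for kp, m in mastery.items() if m < threshold)
    return [item for _, kp in gaps[:k] for item in content_index.get(kp, [])]

answers = [("fractions.add", True), ("fractions.add", False),
           ("ratios.basic", False), ("algebra.expand", True)]
content = {"fractions.add": ["video_012", "drill_044"], "ratios.basic": ["drill_101"]}
print(recommend(answers, content))   # weakest gaps first: ['drill_101', 'video_012', 'drill_044']
```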

The interaction system collects interactive data to learn more about students and improve the algorithms. Squirrel AI Learning collaborated with the Stanford Research Institute to study the machine-student interaction system. Its MIBA student behavior data acquisition system won a grand prize at the World Conference on AI.

In addition, the MCM system developed by Squirrel AI Learning can break down students' thought patterns, abilities and learning methods, and then let them train on a single subject based on their learning status.

As of earlier this year, Squirrel AI Learning had opened nearly 2,000 learning centers in more than 300 cities across China, with nearly 2 million students enrolled. Last year, Squirrel AI donated a million free education accounts to underprivileged families to promote equity in education.

Marc Teerlink, SAP Global Vice President: How to Achieve the "Gold Rush" in the Age of Artificial Intelligence

Marc Teerlink, SAP's global vice president, explained how companies are adapting to the AI era. The world is currently on the brink of an AI "gold rush". Teerlink's talk explained how AI and machine learning enrich businesses, drawing on the experiences of current industry leaders.

SAP estimates that by 2030, more than 60% of jobs will undergo major changes. About 51% of work will be automated, but only 5% will be performed entirely by machines. Teerlink therefore prefers to call this era the age of "augmented intelligence", in which technology is used to enhance the human ability to process information.

Teerlink said that partners using SAP's machine-learning software have translated algorithms into business benefits. He cited an example: VALE, a global mining company based in Brazil, has used machine learning to optimize its purchase requisition process.

The previous process was purely manual, with information scattered across multiple files and systems. As a result, 25 to 40 percent of purchase requisitions were rejected each month because of errors, leading to significant rework.

In recent years, VALE has begun using SAP Leonardo, an open innovation framework based on design thinking and technology, to redesign the requisition process, delivering an SAP Fiori application accessible from any device that helps users complete the process end to end without logging in to any back-end system.

Machine learning for image recognition is at the heart of this process. An image recognition algorithm is integrated into the SAP Leonardo machine learning application so that service technicians can identify the material serial number of any part to be replaced simply by taking a picture of it. Even without Internet access, technicians can still photograph parts and complete the purchase requisition later.

Once a part is identified, the application connects to the back-end system and finds the correct procurement process for the item, whether it is covered by a contract or requires a purchase requisition. The application then automatically completes the purchase requisition and checks whether the part has already been requested or is available in nearby warehouses.

This streamlines the purchase requisition process, shortens the procurement cycle and reduces spare-parts inventory, thereby lowering working capital and improving efficiency.
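
The end-to-end flow described above can be pictured with a short, purely hypothetical sketch: photos taken offline are queued, and once connectivity returns each recognized serial number is routed to the appropriate procurement path. None of these function names are SAP or VALE APIs; they only illustrate the sequence of steps.

```python
# Hypothetical sketch of the offline-capable requisition flow; not an SAP API.
import queue

pending_photos = queue.Queue()   # photos taken while the technician is offline

def capture_part(photo_bytes, online, recognize_serial, backend):
    """Photograph a part; recognize and submit now if online, otherwise queue for later."""
    if not online:
        pending_photos.put(photo_bytes)
        return "queued"
    return submit_requisition(recognize_serial(photo_bytes), backend)

def submit_requisition(serial_number, backend):
    """Pick the right procurement path and complete the requisition automatically."""
    item = backend.find_item(serial_number)
    if backend.already_requested(item) or backend.in_nearby_warehouse(item):
        return "no order needed"
    if backend.has_contract(item):
        return backend.order_from_contract(item)
    return backend.create_purchase_requisition(item)

def flush_queue(recognize_serial, backend):
    """Once connectivity returns, process every photo taken offline."""
    while not pending_photos.empty():
        submit_requisition(recognize_serial(pending_photos.get()), backend)
```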

Companies like VALE that have been practicing AI for a long time have already reaped the benefits. SAP observes that these companies generally share the following characteristics: AI is a strategic focus for senior executives, it increases competitive differentiation, it generates new revenue and profitability, and the strategy covers the entire business. They treat all their data as an important asset.

Dave Palmer, Technical director of Darktrace: How does AI affect cybercrime?

Darktrace is a UK start-up specializing in network security. Its main offering is a "corporate immune system" that can be deployed in a corporate network to monitor for anomalies. When suspicious behavior occurs within the network, Darktrace flags it to IT managers and, if necessary, automatically triggers protective measures to mitigate the attack. Unlike traditional rules-based or signature-based methods, this automated technology allows security teams to focus on high-value tasks and even counter fast-moving automated attackers.
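
As a loose illustration of what unsupervised anomaly detection on network traffic can look like, the sketch below fits an isolation forest to "normal" flow features and flags an outlier. This is not Darktrace's algorithm; the feature set, values and threshold are invented.

```python
# Minimal illustration of unsupervised anomaly detection on network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per flow: [bytes_sent, bytes_received, duration_s, distinct_ports]
normal_traffic = rng.normal(loc=[5e4, 2e5, 30, 3], scale=[1e4, 5e4, 10, 1], size=(5000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A flow that pushes out far more data than it receives looks nothing like the baseline.
suspicious = np.array([[5e6, 1e3, 600, 40]])
print(model.predict(suspicious))   # -> [-1], i.e. flagged as anomalous
```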

Dave Palmer believes that artificial intelligence will greatly increase the impact of cybercrime on businesses. With freely available computing resources and the various APIs and SDKs released by large companies, even people with no relevant background can easily acquire and use artificial intelligence technologies such as facial recognition and voice recognition, which significantly lowers the barrier to entry for cybercrime.

Beyond simply planting viruses or malware on a system, network attacks now take many forms and are becoming more widespread: stealing data for blackmail, spying on important meetings at competing companies, or quietly altering data to influence the decisions of senior managers.

As a result, many companies are deeply committed to protecting network security. For example, in 2017 Microsoft launched a cloud-based security risk detection tool that helps developers find bugs and other security vulnerabilities in the software they are about to release or use, so that bugs can be fixed before they become exploitable vulnerabilities.

Anand Mariappan, Senior Director, Reddit: Reddit Machine Learning Development History

Anand Mariappan, senior director at Reddit in charge of research engineering and machine learning, presented the company's history, ongoing projects and future directions. Reddit's machine learning work covers data platforms, feed ranking, recommendations, and user and channel similarity.

Reddit can be thought of as the American equivalent of "Tianya" or "Baidu Tieba". According to data published by Alexa, Reddit is the fifth-largest website in the United States and ranks 14th worldwide, ahead of Facebook in terms of traffic. Reddit currently has 330 million active users, nearly 140,000 active communities, 12 million posts and nearly 100 million comments per month. In February of this year, Tencent invested $150 million in Reddit, which is currently valued at $3 billion.

Reddit has built out and upgraded its data pipelines in recent years. Since 2014, it has used Amazon S3 and Hive to gradually build a MIDAS-based tiered data architecture; this architecture is now based on Google's BigQuery.

Reddit has 140,000 subreddits, which can also be understood as channels. Recommending relevant channels to users is an important way to increase user engagement, and this used to be done through manual curation. Now, deep learning is replacing manual selection: Reddit gathers all the comments from a channel into a single document and then trains an end-to-end doc2vec model to obtain semantic representations that make matching easier.
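
Here is a minimal sketch of that channel-similarity idea using gensim's Doc2Vec, with each channel's comments gathered into one document. The corpus below is a toy placeholder; Reddit's actual data, parameters and pipeline are not described in detail in the talk.

```python
# Minimal sketch of channel (subreddit) similarity with doc2vec, using gensim.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# One document per channel: all of its comments concatenated (toy examples).
channel_comments = {
    "r/gardening": "tomato seedlings compost soil watering harvest",
    "r/houseplants": "monstera watering soil repotting light leaves",
    "r/formula1": "qualifying lap pit stop tyres race strategy",
}

corpus = [TaggedDocument(words=text.split(), tags=[name])
          for name, text in channel_comments.items()]

model = Doc2Vec(vector_size=32, min_count=1, epochs=50)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Channels with semantically similar comments end up close together in the vector space.
print(model.dv.most_similar("r/gardening", topn=2))
```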

Reddit has also optimized the recommendations on its home page. It uses a large-scale logistic regression algorithm to produce personalized content recommendations based on features such as time, channel, user interests and device.
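
For a sense of what such a ranker involves, here is a minimal sketch: a logistic regression over a few illustrative features (hour of day, channel, device) used to score and order candidate posts. The features, data and pipeline are made up for illustration and are not Reddit's.

```python
# Minimal sketch of home-feed ranking with logistic regression (illustrative features only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Each row: [hour_of_day, channel, device]; label: did the user engage with the post?
X = [[8, "r/news", "mobile"], [22, "r/gaming", "desktop"],
     [9, "r/news", "mobile"], [23, "r/gaming", "mobile"],
     [7, "r/aww", "mobile"], [21, "r/aww", "desktop"]]
y = [1, 1, 1, 0, 1, 0]

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"), LogisticRegression())
model.fit(X, y)

# Score candidate posts for one user and show the most promising first.
candidates = [[8, "r/aww", "mobile"], [22, "r/gaming", "desktop"]]
scores = model.predict_proba(candidates)[:, 1]
for i in np.argsort(-scores):
    print(candidates[i], round(float(scores[i]), 3))
```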

Mariappan explained that Reddit is currently developing machine learning programs to optimize personalized models. The company achieved impressive early results using TensorFlow models to improve the quality of content recommendations.

Ben Dias, Head of Analytics and Data Science at Royal Mail: From zero to data science

Ben Dias, head of analytics and data science at Royal Mail, shared his experience of going from zero to data science and summarized seven key points, in the hope of helping companies accelerate their data science development by providing practical skills, tools and technologies.

First, be ready. Companies must first understand themselves and be fully prepared in the areas of data processing, standard decision analysis, the underlying architecture and the technology stack.

Second, focus more on retention of talent than on recruitment. Do not rush to seek out talent outside the company. Instead, companies need to train and retain talent and develop an appropriate office culture.

Third, do not hire "super chickens". "Super chicken" refers to extremely talented and motivated employees. Margaret Heffernan, an expert in business management consulting, pointed out in a TED Talk that a team composed entirely of geniuses is not more effective and often performs disastrously. Successful teams do not need superstars, but collaborative staff working on a consensus basis.

Fourth, do not put all your eggs in one basket. Companies need to look comprehensively at short-term benefits, medium-term considerations and long-term planning.

Fifth, adopt the Lean Startup model. Lean Startup is a business and product development method designed to shorten the product development cycle and quickly determine whether a proposed business model is viable. It works through a combination of experiments based on business assumptions, iterative product releases and validated learning.

Sixth and seventh: you must change everyone and everything and apply scientific methods to everything.

Gilles Comau, Head of AI at Just Eat: Challenges and opportunities of building personalization strategies

Gilles Comau is Director of Machine Learning and AI at Just Eat. Just Eat, founded in Denmark in 2001, is a takeaway ordering site. It provides applications that allow consumers to easily place orders and make payments, and it now operates in many countries around the world. In 2014, Just Eat was listed on the London Stock Exchange with a market value of $2.4 billion.

Comau's talk focused on the challenges and opportunities of building personalization strategies. Delivery services involve millions of similar yet very different products, and delivery areas have geographical limits that must be optimized with algorithms.

Just Eat's big data analysis makes it possible to predict what kind of food users will order at a given time. For example, the massive amount of data Just Eat generates allows analysts to predict which regions are most likely to order healthy food and which regions prefer to collect their food rather than have it delivered.

The results of this massive analysis of users' ordering and eating habits are shared with restaurants to help them meet a variety of needs and expand their menus, which can help them grow their business.

Just Eat has more than 60 million accounts, and at least 7.5 million people have multiple accounts. Just Eat therefore has to use data science to deduplicate accounts and link users with similar attributes.
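
As a toy illustration of account deduplication (real entity resolution is far more involved, and these fields and rules are assumptions rather than Just Eat's method), accounts can be grouped by normalizing a few identifying attributes:

```python
# Minimal sketch of deduplicating accounts by normalizing identifying fields.
from collections import defaultdict

accounts = [
    {"id": 1, "email": "Jane.Doe+food@gmail.com", "phone": "+44 7700 900123"},
    {"id": 2, "email": "jane.doe@gmail.com",      "phone": "07700900123"},
    {"id": 3, "email": "sam@example.org",         "phone": "+44 7700 900999"},
]

def dedup_key(acct):
    """Collapse cosmetic differences: dots/plus-tags in Gmail addresses, phone formatting."""
    local, _, domain = acct["email"].lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+")[0].replace(".", "")
    digits = "".join(ch for ch in acct["phone"] if ch.isdigit())
    return (f"{local}@{domain}", digits[-10:])   # last 10 digits ignores the country prefix

groups = defaultdict(list)
for acct in accounts:
    groups[dedup_key(acct)].append(acct["id"])

print([ids for ids in groups.values() if len(ids) > 1])   # -> [[1, 2]]
```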

Matching restaurants with users helps users find good food more easily. A restaurant's attributes are based primarily on the dishes it recommends, including flavor, taste and ingredients. A user's profile is determined by ordering habits, preferences, social attributes, purchasing habits, contact information, and so on.

These attributes help Just Eat build a vector search space that can be visualized in two dimensions. When users search for food with keywords, the system finds what they want simply by identifying the restaurant vectors closest to their query.
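
A minimal sketch of this kind of vector-space matching is shown below, using TF-IDF vectors and cosine similarity over made-up restaurant attribute profiles; Just Eat's actual embeddings and search system are not public, so this is only an illustration of the general idea.

```python
# Minimal sketch of matching keyword queries to restaurants in a shared vector space.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

restaurant_profiles = {
    "Spice Garden": "indian curry spicy vegetarian naan",
    "Oslo Sushi":   "japanese sushi salmon rice fresh",
    "Green Bowl":   "salad healthy vegan bowls smoothie",
}

names = list(restaurant_profiles)
vectorizer = TfidfVectorizer()
restaurant_vectors = vectorizer.fit_transform(restaurant_profiles.values())

def search(keywords, topn=2):
    """Embed the query with the same vectorizer and return the closest restaurants."""
    query_vec = vectorizer.transform([keywords])
    scores = cosine_similarity(query_vec, restaurant_vectors).ravel()
    best = np.argsort(-scores)[:topn]
    return [(names[i], round(float(scores[i]), 2)) for i in best]

print(search("healthy vegan salad"))   # "Green Bowl" should rank first
```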

Machine learning is also used to get orders to customers quickly: by predicting drivers' journeys and improving communication efficiency, food can be delivered promptly, in the correct order and without misdirected deliveries.

SOURCE Squirrel AI Learning
