AI needs a certification process, not legislation




Artificial intelligence is quickly becoming part of everyday life. Implementations of AI-based technologies in companies tripled in 2018, according to Gartner. At the same time, consumer applications are ubiquitous, helping us write our emails, discover new music and get customer support on demand. At every point of contact, our data is collected and used to make machines faster and smarter, prompting citizens, governments and businesses around the world to want assurance that deployed deep learning and machine learning algorithms are safe and ethical.

Although enforcing laws to protect consumers from artificial intelligence may seem like a reasonable proposition, this proposal is doomed to failure. One only has to look at the GDPR to see how well-intentioned initiatives can become unenforceable laws. Even the AI principles of the Organisation for Economic Co-operation and Development (OECD) are unlikely to have a real impact because, as non-binding recommendations, non-compliance carries no consequences.

The case against government intervention

Governments simply do not have the bandwidth or the budget to deal with the complexities of AI. They are also notoriously slow, and the pace at which artificial intelligence is changing is simply too fast for them to put effective legislation in place. Even the creation of "frameworks" with no enforcement mechanism is a futile exercise; it is the real threat of punishment that deters bad actors. For example, most people avoid insider trading because they know there is a high risk that the SEC will discover the transgression and fine or imprison them for the offense. If insider trading were merely discouraged in guidelines and could be done without punishment, the practice would be far more widespread.

Legislative bodies also lack the required domain knowledge. We allowed the proliferation of the Internet, and then social media, to go completely unchecked, simply because governments (with the exception of China) did not see the potential ramifications until it was too late. US Congressmen recently proved that they did not even understand how Facebook generates revenue. Can we really expect them to understand the nuances of complex neural networks?

Intergovernmental agencies are not the answer either. Consider the OECD and its AI principles, which are more a suggested moral compass than anything else. They contain no technical details, despite the participation of very capable members from academia and science, and will not change the way organizations implement or develop AI.

A precedent for the solution

Neither legal regulation nor ethical guidelines will keep the development of AI from going off the rails. That does not mean there is no solution. In fact, the solution is much simpler than you might think: create an independent organization that can set standards and run a certification program.

There are many precedents – for example, ISO 27001 and SOC 2 for information security management, backed by the SSAE 16 and ISAE 3402 financial reporting standards. These compliance measures are based on highly technical standards, which require companies to comply with specific password protection measures, mobile device security, data separation, firewall protection and many other more nuanced requirements. Although there is no legal sanction for failing to certify, certification is often a necessity for companies that want to do business with one another.

In the field of AI, I propose that technical experts, investors and policymakers in the space come together to create an independent, global governing body to define and enforce AI standards. The standards, which should be reviewed regularly and paired with annual certification requirements, should set out specific requirements: compliance checks to avoid errors in data sets, audits to ensure that AI is being used in an ethical and non-discriminatory way, controls for automated decision-making, and emergency measures to stop a runaway AI system. Even if no separate body is created, an existing standards organization such as the FASB, ISO, the National Institute of Standards and Technology (NIST) or the IASB (with the help of AI ethics experts and a significant pivot in focus) should step up before the data privacy and social media mistakes are repeated.
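To make the idea of an automated certification check concrete, here is a minimal sketch of what one such audit might look like. The article does not specify any mechanism, so everything below is an assumption for illustration: the 0.10 threshold, the `approval_rate_gap` function and the toy lending data are all hypothetical, not part of any real standard.

```python
from collections import defaultdict

# Hypothetical threshold a certifying body might set; not from the article.
MAX_APPROVAL_RATE_GAP = 0.10


def approval_rate_gap(records, group_key, outcome_key):
    """Return the largest gap in positive-outcome rates across groups.

    `records` is a list of dicts, e.g. one per automated decision. A large
    gap between groups is one possible red flag an auditor could use when
    checking for discriminatory behavior.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[outcome_key] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates


if __name__ == "__main__":
    # Toy decisions from a hypothetical AI lending model.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    gap, rates = approval_rate_gap(decisions, "group", "approved")
    print(f"approval rates by group: {rates}")
    print(f"gap: {gap:.2f} (certification limit: {MAX_APPROVAL_RATE_GAP})")
    print("PASS" if gap <= MAX_APPROVAL_RATE_GAP else "FAIL: review required")
```

A real certification program would of course go far beyond a single metric, but even a check this simple shows how annual audits could be made mechanical and repeatable rather than left to non-binding principles.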

Organizations that choose to deploy AI will be incentivized to obtain this certification because it serves as a stamp of approval, and other companies and consumers will come to require it before doing business with them. Enforcement will be a product of the financial markets, as non-compliant companies will find they have fewer markets to operate in, reducing their volume of business.

This approach has many advantages. It reduces the need for government intervention and inefficient regulation, and it raises business awareness of specific technical standards. The model is proven, and precedents for such compliance measures already exist.

While governments can contribute to and support standards, they do not need to waste time and resources developing standards for which they have neither the expertise nor the enforcement capabilities.

If you want change and want to participate in building something new and impactful, contact me so that we, technology leaders, investors and consumers, can work to create AI standards for business to present to the Senate caucus on AI. You can send me a message on LinkedIn or Twitter.

Abhinav Somani is the CEO of Leverton.
