A.I. has a bias problem, and that can be a big challenge in cybersecurity




Portrait of a SoftBank robot.

Alain Pitton | NurPhoto | Getty Images

Inherently biased artificial intelligence programs can pose serious cybersecurity problems at a time when hacker attacks are becoming more sophisticated, experts told CNBC.

According to Aarti Borkar, vice president of IBM Security, biases can appear in three areas: the program, the data and the people who design these artificial intelligence systems.

"One is the algorithm itself," she told CNBC, referring to lines of code that teach an artificial intelligence program to perform specific tasks . "Is he biased in the way he is approached and the result he's trying to solve?"

A biased program may end up focusing on the wrong priorities and risk missing the real threats, she said.

"If you try to solve the wrong result and it is skewed, your algorithm is too," Borkar said.

The role of artificial intelligence is growing in cybersecurity. Many CEOs view cyberattacks as the biggest threat to the global economy over the next decade.

Firewalls and antivirus software are increasingly seen as legacy tools, as the digital threat landscape constantly evolves and hackers now use more advanced technologies, such as AI, to launch complex attacks against companies.

Once they manage to breach a system, many attackers keep a low profile, which makes it difficult for IT teams to detect their presence. Some quietly search the network for sensitive data, while others slowly alter important information without anyone noticing, a scenario that experts say can have serious consequences in the long run.

To combat such situations, industry professionals are turning to artificial intelligence to build security systems that can respond to threats automatically.

In fact, a study commissioned by the technology giant Microsoft found that 75% of the companies surveyed had adopted, or were considering adopting, AI as part of their cybersecurity plans.

Artificial intelligence helps pinpoint problems more quickly in situations where it would be difficult for humans to process all the data generated, Diana Kelley, cybersecurity technology lead at Microsoft, told CNBC's "Squawk Box."

Recognizing bias

Artificial intelligence systems typically require large amounts of so-called training data to learn how to perform their functions.

If the data used is biased, the artificial intelligence will develop only a partial view of the world and make decisions based on that narrow understanding, Borkar said.
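To make that idea concrete, here is a minimal, hypothetical sketch, not any vendor's actual system: a toy threat classifier is trained only on loud, high-volume attacks, so it learns to treat quieter "low and slow" attacks as benign. All feature values, dataset sizes and thresholds below are illustrative assumptions.

```python
# Toy illustration of training-data bias in a security model (assumed, simplified setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Simulated network events: one feature (scaled traffic volume), label 1 = malicious.
# The training set is biased: it contains only noisy, high-volume attacks.
X_train = np.concatenate([rng.normal(0.2, 0.05, 1000),   # benign traffic
                          rng.normal(0.9, 0.05, 50)])    # loud, high-volume attacks
y_train = np.array([0] * 1000 + [1] * 50)

model = LogisticRegression().fit(X_train.reshape(-1, 1), y_train)

# At test time, stealthy attacks sit only slightly above benign traffic levels,
# a pattern the biased training data never showed the model.
X_test = rng.normal(0.35, 0.05, 200)
y_test = np.ones(200, dtype=int)

preds = model.predict(X_test.reshape(-1, 1))
print("Recall on stealthy attacks:", recall_score(y_test, preds))  # expected to be near 0
```

The point of the sketch is the one Borkar makes: the model isn't "wrong" about the data it saw, it simply never saw the broader picture, so the quieter threats slip through.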

Similarly, if the people who design the program come from similar cultures or backgrounds and share the same ideas, cognitive diversity will be low.

"It's at this point that you start creating tunnel vision and echo chambers," she added.

Inherent bias can lead to situations in which AI systems mistakenly flag problems, which can slow down business processes, create trust issues and affect a company's bottom line. On a larger scale, it can also damage the company's brand.

More worryingly, this can lead to situations in which the program may not identify a serious threat, Borkar added.

Cybersecurity threats come from all sides, which makes them "naturally diverse" and unbiased, she said.

As a result, companies need AI systems that are equally diverse and fair. "Otherwise something creeps in, that's how vulnerabilities come in, that's how you miss something … because you are not as broad as the adversary," she said.

Still, Microsoft's Kelley explained that it can be difficult to eliminate inherent biases completely, and that minimizing them "requires careful planning and oversight of the data provided" to the AI.

"It's not like the bad guys are waiting for us to learn how to do that, so the sooner we get there, the better we'll be," Borkar added.
