It’s an issue currently preoccupying some of the world’s greatest minds, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our “greatest existential threat” and likened its development to “summoning the demon”.
He thinks superintelligent machines could use humans as pets.
Professor Stephen Hawking said he was “almost certain” that a major technological disaster would threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
According to a 2016 YouGov survey, more than 60% of people fear that robots will reduce the number of jobs over the next ten years.
And 27% predict they will reduce the number of jobs “a lot”, with previous research suggesting that government and service workers will be hit hardest.
A quarter of those polled predicted that robots will be part of everyday life within 11 to 20 years, and 18% predicted it will happen within the next decade.
As well as posing a threat to our jobs, some experts believe AI could “go rogue” and become too complex for scientists to understand.
They could “go rogue”
Computer scientist Professor Michael Wooldridge said AI machines could become so complex that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms work, they won’t be able to predict when those algorithms will fail.
This means driverless cars or smart robots could make unpredictable, “out of character” decisions at critical moments, which could put people at risk.
For example, the AI behind a driverless car might swerve into pedestrians or crash into barriers instead of driving sensibly.
They could wipe out mankind
Some people believe that AI will wipe out humans completely.
“Eventually, I think human extinction is likely to happen, and technology is likely to play a role in that,” DeepMind’s Shane Legg said in a recent interview.
He named artificial intelligence, or AI, as the “number one risk for this century”.
Musk warned that AI was more of a threat to humanity than North Korea.
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” the 46-year-old wrote on Twitter.
“No one likes being regulated, but everything (cars, planes, food, medicine, etc.) that presents a danger to the public is regulated. AI should be too.”
Musk has repeatedly called for governments and private institutions to regulate AI technology.
He argued that oversight is needed to prevent machines from advancing beyond human control.