Facebook could be looking forward to getting its hands on this latest "fake news" AI




MIT has unveiled a new artificial intelligence system capable of detecting "fake news" at the source, a development likely to intrigue social media companies eager for solutions.

In one of Facebook's latest public relations blunders, the company admitted that its algorithms had incorrectly flagged a story about its own recent data breach as spam because it was being so widely shared.

While it is reasonable to assume that Facebook was not deliberately trying to suppress negative coverage of itself, the incident highlighted how difficult it is for artificial intelligence (AI) to correctly identify what is and is not "fake news".

This is one of the reasons Facebook has announced plans to employ 20,000 human moderators by the end of the year to sort through reports: its technology is simply not yet up to the task.

Bad outlets will probably offend again

However, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute (QCRI) believe they have found another way to detect suspicious articles.

Rather than focusing on the factuality of individual stories, the new AI analyses the news sources themselves. The team claimed that this different approach allows the system to judge an outlet's trustworthiness, and by extension how much faith to place in an article.

"If a website has published false information before, there is a good chance it will do so again," said Ramy Baly, lead author of a new paper on the system. "By automatically collecting data about these sites, we hope our system can help figure out which ones are likely to do it in the first place."

Baly said that the AI needs only about 150 articles to determine whether a news source can be trusted, meaning it could flag problematic outlets before their stories have a chance to spread.

MIT example, taken from an Infowars article, highlighting trigger words the AI interprets as signs of an unreliable source. Image: MIT CSAIL

What is its potential?

The data was compiled from Media Bias/Fact Check, a fact-checking website that has analysed the accuracy and bias of more than 2,000 news sites. It was then fed to an algorithm known as a support vector machine (SVM) classifier.

When given a new outlet, the algorithm was 65% accurate in detecting whether its level of factuality was high, medium or low, and about 70% accurate in detecting whether it leaned left, right or centre.
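To illustrate the idea, here is a minimal sketch of training an SVM classifier to label sources by factuality using scikit-learn. The feature vectors and labels below are entirely made up for illustration; the paper's actual features (drawn from article text, Wikipedia pages and URLs) and its dataset are far richer.

```python
from sklearn.svm import SVC

# Hypothetical per-source features, purely illustrative:
# [wikipedia_page_length, special_chars_in_url, subdirectory_depth]
X = [
    [3400, 0, 1], [2800, 1, 1], [1900, 2, 2],   # reliable-looking sources
    [150, 6, 4], [90, 8, 5], [60, 7, 6],        # unreliable-looking sources
]
y = ["high", "high", "high", "low", "low", "low"]

# Fit a linear SVM on the toy data.
clf = SVC(kernel="linear")
clf.fit(X, y)

# Score a new, unseen source.
print(clf.predict([[3000, 0, 1]])[0])  # → high
```

In practice the classifier would be trained on the Media Bias/Fact Check labels and evaluated on held-out outlets, which is where the 65% and 70% figures quoted above would come from.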

The system also found correlations with an outlet's Wikipedia page, assuming that the longer the page, the more legitimate the source, and it examined the structure of a source's URL: addresses with many special characters and complicated subdirectories were considered less reliable.
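The URL side of that can be sketched with the standard library alone. The definition of a "special" character and the thresholds are assumptions for illustration, not taken from the paper:

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Illustrative features of the kind the article describes."""
    path = urlparse(url).path
    # For this sketch, anything beyond letters, digits, slashes,
    # dots and hyphens counts as a "special" character.
    special_chars = len(re.findall(r"[^A-Za-z0-9/.\-]", path))
    # Depth of the subdirectory structure.
    depth = len([segment for segment in path.split("/") if segment])
    return {"special_chars": special_chars, "subdirectory_depth": depth}

print(url_features("https://news.example.com/a/b/c/article_1%20final"))
# → {'special_chars': 2, 'subdirectory_depth': 4}
```

A downstream classifier could then treat high counts in either feature as weak evidence that a source is less reliable, alongside the Wikipedia-length signal.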

Preslav Nakov, chief scientist at QCRI and co-author of the paper, said of its potential: "If the outlets report differently on a particular topic, a site like PolitiFact could instantly look at our 'fake news' scores for these outlets to determine how much validity to give to different perspectives."
