Facebook publishes tools to detect harmful content on GitHub




Currently, when Facebook finds harmful photos and videos, it removes them and its algorithms assign each one a hash, or digital fingerprint. Its technology can then use these hashes to determine whether two files are identical or similar, even without the original image or video. Thus, when multiple copies of, for example, a terrorist video appear online, Facebook has a better chance of locating them.

These algorithms, called PDQ and TMK+PDQF, will now be available to Facebook's industry partners, smaller developers and non-profit organizations. The first, PDQ, is a photo-matching tool inspired by pHash but built from scratch. The second, TMK+PDQF, is its video equivalent, developed by Facebook's artificial intelligence research team together with the University of Modena and Reggio Emilia in Italy. For those already using content-matching technology, Facebook says that PDQ and TMK+PDQF can offer another layer of defense and allow different hash-sharing systems to communicate with each other.
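To illustrate the matching idea described above: PDQ produces a 256-bit perceptual hash, usually serialized as 64 hexadecimal characters, and two images are treated as a likely match when the Hamming distance between their hashes falls below a threshold. The sketch below is a minimal illustration of that comparison step, not Facebook's released implementation; the hash values and the threshold of 31 bits are illustrative assumptions.

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex-encoded hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def is_match(hash_a: str, hash_b: str, threshold: int = 31) -> bool:
    """Treat hashes within `threshold` bits of each other as the
    same or a similar image (threshold chosen for illustration)."""
    return hamming_distance(hash_a, hash_b) <= threshold

# Stand-in 256-bit hashes (64 hex characters), not real PDQ output.
original = "f" * 64
near_copy = "f" * 63 + "e"   # differs by a single bit

print(is_match(original, near_copy))   # a near-duplicate matches
print(is_match(original, "0" * 64))    # an unrelated hash does not
```

Because the distance is computed on the hashes alone, a platform can check uploads against a shared list of fingerprints without ever holding the original harmful image or video, which is what makes cross-industry hash sharing practical.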

Facebook announced the open-source tools as part of its child safety hackathon and hopes the technology will help protect children. It could be used in conjunction with Microsoft's cloud-based PhotoDNA tool and Google's Content Safety API, both released to help protect children. After the discovery of an alleged child exploitation ring on YouTube earlier this year, such tools could be more important than ever.
