Deceptively or dishonestly altered videos will now be prohibited under Facebook's user policies, the world's largest social network announced in a statement on Tuesday (January 7th). Whether they involve simple edits or deepfakes, fake videos generated by software that can be highly elaborate, such images will now be banned unless they are parodies.
The policy particularly targets videos that dishonestly attribute actions or words to a person, Facebook says, when they are misleading. The social network's moderators will also take into account whether the author of the video knowingly tried to pass it off as authentic.
Faced with the risks posed by the spread of disinformation and the circulation of fake videos, Facebook has recently invested in research on images generated by artificial intelligence, in particular with the aim of detecting them automatically. In September, for example, the company joined other large digital companies and universities to launch the Deepfake Detection Challenge, an initiative intended to encourage advances in tools capable of quickly detecting modified video.
Facebook already uses such detection tools to spot fake user profile photos. In December, the social network announced that it had deleted numerous fake accounts, all created by the same network, which generated profiles using computer-designed images.