Twitter announced on Tuesday, February 4, that it would begin cracking down on "falsified" photos and videos, following other social networks that have been urged to take responsibility, particularly during the campaign for the American presidential election.
The platform intends to focus on manipulated content (video or audio montages, edited images) that aims to deceive the public or risks harming people, for example by inciting violence or infringing on their freedom of expression.
Tweets falling into these categories will be removed or tagged with a warning, starting in March. The network may also reduce the visibility of messages or add context to them.
Most major social networks have implemented measures combining artificial intelligence and human moderation to fight disinformation, from fake news to "deepfakes" (hyperrealistic fake photos or videos).
They are reacting in particular to pressure from European and American authorities, after manipulation campaigns, carried out notably on Facebook in 2016, sought to influence opinion during major votes such as the presidential election in the United States and the Brexit referendum in the UK.
"This new rule is in addition to the many other existing rules" governing Twitter, said Yoel Roth, the platform's head of site integrity, at a press conference. "For example, we have for years been preventing the spread of fabricated or falsified sexual images and videos, which are very widespread on the internet."
Twitter is targeting manipulated media, including audio and video, but is not directly tackling misleading written messages; the network has also banned political advertising.
How to handle satire?
A video montage like the one that attributed racist remarks to candidate Joe Biden earlier this year should no longer circulate on Twitter. One tweet sharing that video had been viewed over a million times.
"Whether you use advanced machine learning tools or an inexpensive app to slow down videos, our rules will apply to the content, not to how it was made," said Yoel Roth.
To identify potentially problematic content, Twitter teams around the world rely in particular on user reports, even if "we want to reduce the burden on users," said Yoel Roth. He admitted that satirical images and videos could give the teams a hard time. "If we are wrong, there will be an appeal procedure," the network said.
Google's video platform YouTube has announced similar measures against "manipulated or falsified content" intended "to deceive users" and posing "a flagrant risk of harm." Facebook, for its part, still allows political ads, and even exempts them from its fact-checking system.