"It seems to me that we should have more responsibility than the operators have, but on the other hand, we cannot be treated like media either. More than 100 million messages are posted daily on Facebook; the media model would be inapplicable." While in Brussels on Monday, February 17, where he was to meet several European commissioners, the founder of Facebook, Mark Zuckerberg, briefly laid out his vision of a new framework for companies like his, and of their responsibility for the content they host. Criticized on all sides, alternately for moderation deemed too lax or too severe, too permissive or detrimental to freedom of expression, the social network is calling for a "new model."
Few regulators will contradict Mr. Zuckerberg on this point: for several years, the size reached by a few companies like Facebook and Google has upset the classic distinction between "hosts" of content, with limited responsibilities, and "publishers" of content, responsible for everything they publish. At the European level, the subject is at the heart of discussions on the future e-commerce directive. But while there is near-unanimity in Europe that Facebook or Google deserve a hybrid status, the contours of that status remain to be defined.
Obligation of means
That, implicitly, is the objective of a "white paper" (discussion paper) made public by Facebook on February 17. Over ten pages, the text details the main questions that, according to Facebook, arise in defining the responsibility of these major players in a number of cases. On the balance to be struck between effective moderation and the protection of freedom of expression, Facebook pleads for an obligation of means: "What we are proposing is a new model, with a systems approach," Mr. Zuckerberg explained on Monday.
In practice, Facebook advocates that "Internet companies have an obligation to set up certain procedures and systems," the document specifies. Under this approach, the web giants would have to "achieve certain quantified goals in the fight against content that does not respect their rules." Facebook also wants clear definitions of the "specific forms of speech which, while not illegal, may be prohibited online." And it recalls in the preamble that the application of moderation rules "can never be perfect," given the complexities of human language.
Automation of moderation
This approach closely reflects the one Facebook has implemented internally for several years. The social network has invested heavily in automated tools, with the goal of removing illegal content before it has even been seen by human users. The company now claims to remove almost all of the Islamic State's propaganda messages in this way. For other types of content, however, such as hate speech or incitement to harassment, where context matters greatly, the figures are much lower, though constantly increasing, says Facebook, which also employs tens of thousands of human moderators.
Generalizing this approach would, however, raise several difficulties. On the one hand, the figures presented by Facebook, and by other social networks with the same vision, are not audited by any outside source, making Facebook its own reviewer. In a column published in the Financial Times on Monday, Mr. Zuckerberg acknowledged that Facebook may need "more supervision." Furthermore, obligations of means could be seen by some regulators as an obstacle to free competition: it took Facebook a great deal of time and investment to develop the moderation technology the company currently uses, and a new competitor would find it difficult to replicate quickly.
“Start of a discussion”
Above all, the recommended obligation of means does not really resolve the thorny question of what constitutes, for example, incitement to hatred or violence. "We are a private company; it is not up to us to decide what, for example, candidates in an election have the right to say," Mr. Zuckerberg said on Monday in Brussels, repeating an argument he has made many times in recent months. It is up to legislators to create the rules, Facebook believes, while recommending in its "white paper" to "create rules that are applicable, including on a large scale, even when the context is limited."
The document poses many good questions but ultimately provides few answers, as it acknowledges in its conclusion, entitled "This is the start of a discussion." The discussion, however, is not entirely new, and some lawmakers have already decided, on certain points, in a direction different from what Facebook advocates. In France, the Avia law on online hate messages notably imposes an obligation on large platforms to delete illegal messages within twenty-four hours. Facebook, for its part, believes that the speed of moderation is a poor criterion, and that it is above all the virality of a message that makes it dangerous. That argument did not convince French parliamentarians, despite unprecedented collaborative work between the social network and deputies during the preparation of the text.