
Facebook will warn you if you have reacted to posts deemed dangerous to health

A preview of the new notification Facebook will show to people who interacted with a Covid-19-related post that was later deleted as dangerous. Facebook

Many Facebook users may see a new notification in the coming weeks. If they liked, shared, or commented on a Facebook post related to the Covid-19 pandemic, and that post was later deleted by Facebook's moderation teams because of its dangerousness, they will receive, within the following two weeks, a message encouraging them to learn more about the coronavirus crisis. This notification will contain a link redirecting them to the official websites of health authorities (such as those of the WHO or health ministries).

This change in Facebook's handling of coronavirus-related content, announced in a press release on Thursday, April 16, was also mentioned in a post by Mark Zuckerberg, the social network's founder and chief executive. He hailed the success of the “Coronavirus (Covid-19) Information Center” portal and of the prevention and information messages Facebook highlights in its users' news feeds. In all, more than 2 billion people on Facebook and Instagram have accessed official information from health authorities, according to the company.

Facebook's management is also pleased that “hundreds of thousands of pieces of false content related to Covid-19” have been removed from the social network in recent weeks. Since January, in the context of the pandemic, Facebook has applied with particular zeal its policy of deleting posts, photos, or videos that may endanger the health or lives of its users: in early March, Mark Zuckerberg had already indicated that the company would focus its efforts against “false information and conspiracy theories identified by major health organizations”.

Among the posts Facebook deletes directly when they are detected are “false claims about remedies or treatments” for the coronavirus and “claims that social distancing does not help fight the spread of the coronavirus”, as stated in a press release published on March 26. In France, this may in particular have concerned posts misleadingly claiming that hydroxychloroquine treatment is effective against the coronavirus.


Deletion policy

This deletion policy, applied directly by Facebook's moderators, operates in parallel with the system the company has developed with partners in recent years to fight false information. The social network relies on verification work carried out by nearly 60 newsrooms and organizations around the world; in France, Le Monde and its fact-checking section, Les Décodeurs, are among them.

When these partners find that a Facebook post is potentially misleading, they attach a specific label to it, which reduces the post's visibility across Facebook and displays a warning to users who come across it anyway.

Facebook said Thursday that it had placed such warnings on “more than 40 million posts” published by users around the world during the month of March. They were generated “based on around 4,000 reviews” by its fact-checking partners. “When users see these warning labels, in more than 95% of cases they go no further and do not view the content”, adds the social network, which sees this as a sign of effectiveness.

Study denounces ineffective moderation

All of this remains insufficient, according to the NGO Avaaz, whose new report, also published on Thursday, is partly behind the new notification system announced by Facebook. This study, carried out by an organization specializing in online activism that has already examined disinformation on Facebook in other contexts (for example, the “yellow vests” movement in France), shows that false or misleading information about the pandemic is still seen far too widely on Facebook.

To arrive at this finding, the NGO isolated 104 pieces of false information about Covid-19, circulated in five languages (English, French, Spanish, Italian, and Arabic). According to Avaaz, these posts were shared 1.7 million times and generated approximately 117 million views, an estimate the NGO considers conservative.

These 104 pieces of false information include, for example, fanciful claims about things that would supposedly protect a person from the coronavirus (being black, using oregano oil or garlic decoctions…). Yet several dozen of these posts carried no warning, even though, in some cases, the claim in question had been specifically debunked by at least one of Facebook's fact-checking partners.

In addition, 25 posts identified by Avaaz could be considered a direct risk to Facebook users. Two were quickly removed. After being alerted by the NGO, Facebook deleted seventeen more and added labels indicating that the information was false or misleading on two others. Facebook left four of these posts untouched.

“Not representative”

These findings should be taken with a grain of salt, as they are based on a tiny sample of misleading posts compared with the volume of content that has been deleted. Facebook, in a statement sent to the press on Thursday, objects that the posts studied by Avaaz are “not representative of what is happening on Facebook”: “Their findings do not reflect the work we have done on the subject”, the company responded.

The NGO, for which the 104 posts studied are only “the tip of the iceberg”, counters that if it had had “a month or two more to analyze more content”, it would have “found very similar results”, according to a spokesperson. Avaaz maintains that its work reveals several phenomena:

  • False information relating to Covid-19 remains very present on the platform, despite executives' efforts and promises;

  • Facebook's reaction time is too long: according to the NGO, several days pass between the moment a piece of information appears, the moment it is verified, and the moment the social network finally attaches the warning intended for its users;

  • Coordination between the social network and its fact-checking partners is poor, allowing content already contradicted by those partners to remain on Facebook.
