As the virus evolves, the company seeks the Oversight Board’s opinion on the need to formulate a new content moderation strategy.
By Ayush Aditya
Facebook has been flooded with misinformation, unscientific claims and rumours about COVID-19, forcing the global technology giant to remove 25 million pieces of violating content under its content moderation policy. Now, as the virus evolves, the company is reconsidering whether it still needs to remove such content outright or can address it through other means.
Meta, Facebook’s parent company, has approached its Oversight Board, seeking advice on whether the current policy is still appropriate.
“Now the COVID-19 situation has evolved, we’re seeking the Oversight Board’s opinion on whether we should change the way we address this type of misinformation through other means, like labeling or demoting it,” Meta wrote in a blog post.
Meta has established an independent Oversight Board comprising around 20 outside experts and civic leaders, which reviews content decisions and issues recommendations on content regulation. In this case, its mandate is to re-evaluate the appropriateness of Meta’s misinformation policy and advise whether the company needs to change its approach; that is, whether Meta should stop removing such content and instead label or demote it through its third-party fact-checking program.
The board has teeth: its decisions on individual pieces of content are binding on Meta, which has seven days to implement a ruling once it is announced. Policy advisory opinions, like the one sought here, take the form of recommendations.
Over the last two years, COVID-19-related misinformation has posed a unique risk to public health and safety. In the early stages of the outbreak in January 2020, Meta broadened the ambit of its misinformation policy to protect users’ safety without inhibiting their right to express themselves on this crucial issue. Before this, Meta removed false content only in consultation with local agencies with expertise in how a specific piece of content (like a particular post) might disrupt public order.
The new pandemic-era policy for the first time allowed for the global removal of entire categories of false claims. Meta removed false claims about masking, social distancing and the transmissibility of the virus, as well as claims promoting vaccine hesitancy. Currently, Meta’s policy provides for the removal of 80 distinct categories of false claims about COVID-19 and vaccines.
According to Meta, it remains committed to dispelling myths around COVID-19 and supplying verified facts. That is why it has asked the Oversight Board for feedback on its strategies for combating COVID-19 misinformation, and to reassess the effectiveness and continued relevance of the policies put in place at the beginning of the pandemic.
Meta believes that its COVID-19 Information Centre has made advice from public health authorities easier to access: over two billion users across 189 countries have turned to it for reliable information on COVID-19. Even though the pandemic is receding, its impact still varies widely from country to country, prompting Meta to revisit its approach and reflect those differences in a new policy on health misinformation.