Instagram removed 80 PER CENT less graphic content about suicide during the first three months of lockdown after ‘most of its moderators were sent home due to Covid rules’
- Plunge in removals for April to June revealed in social media company’s figures
- Instagram owner Facebook said it prioritised removal of most harmful content
- Head of child safety online policy at NSPCC said Facebook had not done enough
Instagram removed 80 per cent less graphic content about suicide during the first three months of lockdown after most of its moderators were sent home due to Covid restrictions.
The plunge in removals for April to June this year was revealed in the social media company’s figures. The number rose back to pre-pandemic levels after restrictions were lifted.
Instagram owner Facebook said it had prioritised the removal of the most harmful content.
Molly Russell, who took her own life in 2017 after viewing harmful images on Instagram
Andy Burrows, head of child safety online policy at the NSPCC, said Facebook still had not done enough to protect young people in particular.
‘Facebook’s takedown performance may be returning to pre-pandemic levels but young people continue to be exposed to unacceptable levels of harm due to years of refusal to design their sites with the safety of children in mind,’ he said.
‘The damage incurred from the steep reduction in taking down harmful content during the pandemic, particularly suicide and self-harm posts on Instagram, will undoubtedly have lasting impacts on vulnerable young people who were recommended this content by its algorithms.
‘The Government has a chance to fix this by delivering a comprehensive Online Harms Bill that gives a regulator the powers it needs to hold big tech companies to account.’
The latest figures show Facebook removed more than 12 million pieces of misinformation content related to Covid-19 between March and October this year.
The firm’s new Community Standards Enforcement Report showed that the millions of posts were taken down because they included misleading claims, such as fake preventative measures and exaggerated cures, which could lead to imminent physical harm.
During the same time period, Facebook said it added warning labels to around 167 million pieces of Covid-19 related content, linking to articles from third-party fact-checkers which debunked the claims made.
And while the pandemic continued to disrupt its content review workforce, Facebook said some enforcement metrics were returning to levels seen before the coronavirus outbreak.
This was put down to improvements in the artificial intelligence used to detect potentially harmful posts and the expansion of detection technologies into more languages.
For the period between July and September, Facebook said it took action on 19.2 million pieces of violent and graphic content, up by more than four million on the previous quarter.
Facebook owner Mark Zuckerberg testifying at the US Congress
In addition, the site took action on 12.4 million pieces of content relating to child nudity and sexual exploitation, a rise of around three million on the previous reporting period.
Some 3.5 million pieces of bullying or harassment content were also removed during this time, up from 2.4 million.
On Instagram, more than four million pieces of violent and graphic content had action taken against them, as well as one million pieces of child nudity and sexual exploitation content and 2.6 million posts linked to bullying and harassment, an increase in each area.
The report added that Instagram had taken action on 1.3 million pieces of content linked to suicide and self-injury, up from 277,400 in the previous quarter.
It also showed Facebook had carried out enforcement against 22.1 million posts judged to be hate speech, with 95 per cent of those proactively identified by Facebook and its technologies.
Guy Rosen, vice president of integrity at the social network, said: ‘While the Covid-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels.
‘Our proactive detection rates for violating content are up from Q2 across most policies, due to improvements in AI and expanding our detection technologies to more languages.
‘Even with a reduced review capacity, we still prioritise the most sensitive content for people to review, which includes areas like suicide and self-injury and child nudity.’
Facebook and other social media firms have faced ongoing scrutiny over their monitoring and removal of both misinformation and harmful content, particularly this year during the pandemic and in the run-up to the US presidential election.
In the UK, online safety groups, campaigners and politicians are urging the Government to bring its Online Harms Bill, currently delayed until next year, before Parliament.
The Bill proposes stricter regulation for social media platforms with harsh financial penalties and potentially even criminal liability for executives if sites fail to protect users from harmful content.
Facebook has previously said it would welcome more regulation within the sector.
Mr Rosen said Facebook would ‘continue improving our technology and enforcement efforts to remove harmful content from our platform and keep people safe while using our apps’.