Instagram still hosts self-harm images after Molly Russell inquest verdict
Instagram is breaking its promise to remove posts that glorify self-harm and suicide, years after the death of the schoolgirl Molly Russell, an Observer analysis has found.
The photo-sharing app has long claimed it doesn’t allow content that “promotes or glorifies self-harm or suicide” and says it removes such content.
In February 2019, following the death of 14-year-old Molly in 2017, the platform said it would no longer allow any graphic self-harm content, such as images of cutting. Eight months later, it extended that ban to cover fictional depictions such as drawings, cartoons and memes, after an appeal from Molly’s father, Ian.
But simple keyword searches on the platform over the past week showed that many such images remain online. The posts include photos uploaded since the September inquest into Molly’s death, which concluded she died “from an act of self-harm while suffering from depression and the negative effects of online content”. Other images were posted several years ago but were never detected or removed by review teams or the platform’s technology.
Many of the posts appeared under hashtags that were slight misspellings of banned or restricted search terms. Users who searched for banned terms with the vowels removed, for example, saw a stream of posts that appeared to breach Instagram’s community guidelines.
Several keywords related to self-harm and suicide triggered a warning saying, “Can we help you? Get help.” But directly below it, users could tap through to see the posts anyway. Top results for a common misspelling of a banned term included a photo of blood spattered on a tiled floor and a photo of a coffin bearing the words: “Now everyone loves me.”
Another top result showed a cartoon depicting someone who had died by suicide, a clear violation of Instagram’s rules. It was posted in April 2018, five months after Molly’s death, and remained online until this weekend, when it was deleted after the Observer flagged it to Instagram.
In other cases, users searching for banned terms were shown a list of alternative hashtags they could try instead. Some of the suggestions were for terms associated with recovery and self-harm prevention, but others were not. Under one suggested hashtag, the top posts, which have since been deleted, included images that appeared to normalise or glorify self-harm, such as a meme describing fresh cuts as “perfect” and a picture of a lighter with the words: “We’re all addicted to something that takes the pain away.”
Instagram says it allows some content related to self-harm and suicide on the advice of experts because it doesn’t want to exclude people in need of help or prevent them from expressing how they feel. It says it shows pop-up support messages, hides posts that may be sensitive, and promotes recovery and prevention content.
But the findings suggest the platform, which allows users as young as 13, is failing to effectively enforce its own policies banning material that promotes or glorifies self-harm, raising concerns about the effect on those who see it.
Anna Edmundson, head of policy at the NSPCC, said: “It is absolutely shocking that this appalling content can still be found on Instagram and that tech companies continue to get away with playing by their own rules. It is an insult to Molly Russell’s family, and to other young people and families who have suffered the most horrific harm from this content, that it is still available.”
The inquest into Molly’s death, at North London coroner’s court last month, heard she had saved, liked or shared 2,100 pieces of content relating to suicide, self-harm and depression in the six months before her death, including drawings, memes and graphic pictures. She last used her phone to access Instagram at 12:45am on the day she died, the court heard. Two minutes earlier, she had saved an image carrying a depression-related slogan.
On Friday, Molly’s father Ian Russell called on the government to take urgent action to regulate online platforms, telling the BBC there was “no time to waste”. A long-awaited Online Safety Bill is expected to return to Parliament in the near future.
Critics have claimed that the bill in its current form focuses too much on controlling types of content rather than on the “algorithmic systems and design features” that underpin the biggest platforms. Dr Susie Alegre, a human rights lawyer, said it “not only threatens freedom of expression… but also does not do enough to address the real drivers of online harm, such as social media companies actively recommending posts about self-harm”.
Meta, Instagram’s parent company, said it had removed the rule-breaking posts reported by the Observer. A spokesperson said: “Our thoughts are with the Russell family and all those affected by this tragic death. We are committed to ensuring Instagram is a positive experience for everyone, especially teens, and we will carefully review the coroner’s full report when it is provided.”
The spokesperson added that the platform blocks hashtags that inherently break its policies. They said people would always try to get around the rules by using deliberate misspellings, adding that the company’s work in this area was “never done”.
In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or by email at [email protected]. In the UK and Ireland, Samaritans can be contacted on 116 123 or by email at [email protected] or [email protected]. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the Lifeline crisis helpline is 13 11 14. Other international helplines are available at www.befrienders.org. The mental health charity Mind can be contacted on 0300 123 3393 or at mind.org.uk.