An internet ban cannot protect your child from the endless harmful content of social media




This week, the Online Safety Bill returns to parliament – a vital piece of legislation that has suffered numerous delays and setbacks.

While all legislation should be subject to rigorous scrutiny, the free speech narrative will undoubtedly dominate much of today’s discussion of online safety. Will we stifle free speech if we police online content more closely?

As one of the many bereaved parents who have lost a child to the harmful effects of online content, I would argue that no, we are not. We are striving to protect our children and young people from aggressive algorithms that relentlessly serve up a vicious cycle of negative content – content we know causes great distress and harm, and that tragically contributed to the death of my daughter Molly.

Free speech is often invoked to justify the existence of harmful content. But as a supporter of freedom of speech, I believe it should not be confused with the freedom to serve any content to everyone.

The notion that we are free to say whatever we want in the offline world is a misconception to begin with. Slander and libel laws are well established, and for good reason: they exist to protect people from unfounded claims, precisely because the absence of such laws would cause harm.

Yet I find it interesting that we readily accept the rules on slander and libel, but when it comes to content that puts lives and wellbeing at risk, there are people who try to obstruct this legislation by branding it an attempt to censor free speech.

Elon Musk’s talk of free speech, and his reinstatement of previously banned accounts on Twitter, shows the dangers of this naïve and idealistic approach. Free speech is not black and white; it is more nuanced than that.

A recent Samaritans report found that more than three-quarters of people surveyed had first seen self-harm content online by the age of 14, some when they were 10 or younger. So it is clear that something needs to be done to protect our children.

But what is this harmful content? Well, it is not necessarily content posted with the intent to cause harm. Sometimes images of self-harm or suicide are posted by users looking for help and support. Of course, that is not true of all such content, and many images are posted specifically to cause harm.

Regardless of its primary purpose, disturbing content should not be accessible to everyone. Some platforms argue that if someone has posted self-harming content as a cry for help, then they shouldn’t remove it. But to help one person be safe, you might be making 100,000 people less safe, and that can’t be a responsible approach.

The technology exists to flag this kind of content, so why not remove it and replace it with, for example, signposting to helplines? That would create an immediate avenue of support for the person who is struggling, while mitigating the harm the content might otherwise cause when amplified by social media shares and algorithms.

So it’s not really about freedom of speech – it’s about the freedom to live. A child psychiatrist who gave evidence at the inquest into my daughter Molly’s death said she couldn’t sleep well for weeks after seeing the social media content Molly had been viewing before she took her own life. My daughter was only 14 years old.

Molly was trapped by algorithms that served up disturbing images. The coroner concluded that her death was caused by self-harm, depression and the “adverse effects of online content”. Crucially, this negative content did not have to be sought out.

The way the algorithms work is something of a mystery. They are numerous and complex, and we know from the Samaritans’ report that 83 percent of people who saw harmful content had not sought it out – it was suggested to them through features like Instagram’s “Explore” and TikTok’s “For You” pages. The report also found that 76 percent of those who viewed self-harm content online went on to harm themselves more severely because of it.

So we need more accountability and more transparency from social media companies. All this talk of “town squares” sounds great, but there’s a dark side to social media technology that platforms don’t want to discuss.

In fact, not long after Molly’s death, a whistleblower leaked an internal Facebook study that revealed that 13 percent of British teenagers who had reported suicidal thoughts traced their suicidal ideation back to Instagram.

So, while there is a broader problem of harm associated with social media content (filters, unrealistic beauty expectations and so on), there is one very specific and particularly harmful problem that could be addressed quickly if companies had the will – or indeed a legal obligation – to do so: self-harm imagery and suicide-related content.

Around 200 school-aged children take their own lives in England every year. Even one is one too many. So while we debate digital freedom as a concept, young people will go on viewing this content, suffering and physically harming themselves.

In the meantime, as parents, carers or teachers, we can only do our best – and that’s what I’ll be discussing as part of the free Now and Beyond Festival on 8 February. We should not panic and reach for drastic measures. I’m sure there are times when removing a young person’s access to the internet is the right thing to do, but more often that approach only serves to further isolate our children.

Above all, we must remember that our children are not to blame. Chances are they didn’t seek out the content in the first place, and if they did, that points to a vulnerability that needs to be addressed. Our kids need to know that we won’t judge them and that they can come to us if anything online troubles them. And we all need to know how to flag such content to encourage its prompt removal.

We may not be technology experts, but we can try to keep the lines of communication open between ourselves and our children or students. Until effective legislation is enacted, I hope social media companies realize they already have blood on their hands. The important concept of free speech must not be hijacked or distorted in a way that allows online harm to find new victims.

Ian Russell is the father of Molly Russell, who died in 2017. He is also a campaigner and the founder of The Molly Rose Foundation. As part of the Now and Beyond Festival, Ian will host a free online session on digital addiction for teachers and parents/carers, alongside Carrie Langton (founder of Mumsnet), Manjit Sareen (co-founder of Natterhub), Ukaoma Uche (of the suicide prevention charity Papyrus) and 16-year-old Kai Leighton (Beyond youth board member). To reserve your free place, click here. Any school or college wishing to book the wider Now and Beyond Festival can register for free here.

If you are feeling distressed or struggling to cope, you can speak to Samaritans on 116 123 (UK & ROI), email jo@samaritans.org or visit the Samaritans website to find details of your nearest branch.

If you live in the United States and you or someone you know needs mental health help right now, call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). This is a free, confidential crisis hotline available to anyone 24 hours a day, seven days a week.

If you are in another country, you can visit www.befrienders.org to find a helpline near you.


