
UK joins Germany in officially regulating social media as Ofcom gets new powers

Ofcom has been given the power to force social media companies to act over harmful and offensive content, ending years of self-regulation. The move comes after widespread calls for better regulation of online content, particularly following the death of Molly Russell, who took her own life after viewing graphic content on Instagram.

The new rules will apply to firms hosting user-generated content, including comments, forums and video-sharing, and will cover Facebook, Snapchat, Twitter, YouTube and TikTok.

The government says that it will be “setting the direction of the policy”, but that Ofcom has the freedom to draw up and adapt the details. By doing this, the watchdog should have the ability to tackle new online threats as they emerge without the need for further legislation.

Ofcom will have the power to make tech firms responsible for protecting people from harmful content such as violence, terrorism, cyber-bullying and child abuse – and platforms will need to ensure that content is removed quickly. They will also be expected to “minimise the risks” of it appearing at all.

What sanctions firms will face if they fail to comply has not yet been revealed.

The regulator has also announced the appointment of a new chief executive, Dame Melanie Dawes, who will take up the role in March.

“There are many platforms who ideally would not have wanted regulation, but I think that’s changing,” Digital Secretary Baroness Nicky Morgan told the BBC. “I think they understand now that actually regulation is coming.”

Julian Knight, chair-elect of the Digital, Culture, Media and Sport Committee, which scrutinises social media companies, called for “a muscular approach” to regulation.

“That means more than a hefty fine; it means having the clout to disrupt the activities of businesses that fail to comply, and ultimately, the threat of a prison sentence for breaking the law,” he says.

In a statement, Facebook said it had “long called” for new regulation, and said it was “looking forward to carrying on the discussion” with the government and wider industry.

However, Tim Ensor, Head of Artificial Intelligence at Cambridge Consultants – which produced a report for Ofcom on the role of AI in content moderation – warns that complex technical challenges must be overcome to stem the flow of harmful user-generated content on social media platforms.

“It’s excellent that the UK, through Ofcom, is taking the lead on this global issue. Current systems to protect us online simply don’t scale and are at breaking point,” he says. “Moderation is mostly handled by humans who understand context, but it’s impossible to keep up with the sheer mass of content. What’s needed is a new kind of human/AI collaboration – a key finding of our Ofcom-commissioned report into the subject. This is the technical challenge facing Ofcom and social media platforms that don’t have the resources of the biggest tech giants like Facebook. Only a combination of human moderation and some very advanced AI systems can hope to protect the public, and the moderators themselves, from harm.”

In many countries, social media platforms are permitted to regulate themselves, as long as they adhere to local laws on illegal material.

Germany introduced the NetzDG Law in 2018, which states that social media platforms with more than two million registered German users have to review and remove illegal content within 24 hours of being posted or face fines of up to €50m (£42m).

Australia passed the Sharing of Abhorrent Violent Material Act in April 2019, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties of up to 10% of a company’s global turnover.

China blocks many western tech giants including Twitter, Google and Facebook, and the state monitors Chinese social apps for politically sensitive content.
