Britain May Toughen Internet Safety Laws After Riots and Musk's Comments

By Chethana Janith, Jadetimes News

LONDON, Prime Minister Keir Starmer’s Labour government is considering ways to toughen up internet safety regulations in the U.K. after misinformation sparked a spate of anti-immigration protests and X owner Elon Musk made incendiary comments in posts that were viewed by millions of people.


Two industry sources with knowledge of the matter indicated that following the events of the past two weeks, Labour is considering a review of the Online Safety Act, legislation that requires tech giants to prevent the spread of illegal and harmful content on their platforms.


These sources were not authorized to speak publicly about the proposed changes, as the conversations surrounding revamped online safety laws are ongoing.


Top officials have made comments in recent days saying that the government may review the Online Safety Act to make it tougher on disinformation, hate speech and incitement to violence.


“There are obviously aspects of the Online Safety Act that haven’t come into effect yet. We stand ready to make changes if necessary,” Nick Thomas-Symonds, minister for the Cabinet Office, said.


The media and telecommunications regulator Ofcom has been unable to act against social media platforms for allowing hate speech and other content that would violate the law, because the legislation hasn’t fully come into force yet.


What is the Online Safety Act, exactly? And what could it mean for tech firms like Elon Musk’s X? Jadetimes runs through all you need to know.


What is the Online Safety Act?


The Online Safety Act is a landmark piece of legislation in the U.K. that seeks to force social networks and video streaming media companies to rid their platforms of illegal content.


The regulation contains new duties which would require tech companies to actively identify, mitigate and manage the risks of harm from such material appearing on their platforms.


Several categories of content could make a company liable for criminal sanctions if it fails to address them. These include child sexual abuse, fraud, racially or religiously aggravated offenses, incitement to violence, and terrorism.


Once the rules take effect, Ofcom would have the power to levy fines of as much as 10% of companies’ global annual revenues for breaches. In cases where repeat breaches occur, individual senior managers could even face jail time.


Ofcom has said the new duties on tech firms won’t fully come into force until 2025, once it’s finished consulting on codes of conduct for the companies.


Why are there calls for the law to change?


Two weeks ago, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport in Merseyside. Three girls were killed in the attack.


Shortly after the attack, social media users were quick to falsely identify the perpetrator as an asylum seeker who arrived in the U.K. by boat in 2023.


Posts on X sharing the fake name of the perpetrator were actively shared and were viewed by millions. That in turn helped spark far-right, anti-immigration protests, which subsequently descended into violence, with shops and mosques being attacked and bricks and petrol bombs being hurled.


As the riots raged on, Musk, who owns X, began making comments about the situation in the U.K. He suggested the riots could end up resulting in a civil war, saying in an X post: “Civil war is inevitable.” His comments have been condemned by the U.K. government.


When questioned during a press briefing about Musk’s remarks, the official spokesperson for Prime Minister Keir Starmer said that there was “no justification” for such statements.


Musk also shared an image of a fake headline that was made to look like it had come from “The Telegraph” newspaper’s website, falsely claiming the U.K. was building “detainment camps” on the Falkland Islands for rioters. He has since deleted it.


These events have sparked calls for the government to revisit the Online Safety Act to ensure it is implemented faster and strengthened to prevent such events from happening in the future.


How could the law change?


It is not yet clear how, or even when, the Online Safety Act will be revisited. One industry source said the government is "trying to work out what has happened over the last few days and is focused on the response."


“I don’t think much policy thinking has been done yet here,” the source added.


New measures on disinformation are likely to be examined, among other options, but the government has not yet come to any “concrete views” on how the legislation should change.


A second industry source suggested that the government is likely to review the legislation only once it is in force, likely in spring 2025. "I think this is a way of sounding tough but putting off a difficult decision," they said. "It's by no means an easy fix. It's incredibly hard to do."


The Department for Science, Innovation and Technology, responsible for overseeing online safety regulations, was not immediately available for comment when contacted on Wednesday.


It’s also worth noting that Labour had already committed to toughening the Online Safety Act in its election manifesto. Proponents of a review say the act needs to be stricter on social media platforms to ensure they mount a robust response to misinformation, hate speech, and incitement to violence.


“I think what the government should do very quickly is check if it is fit for purpose. I think it’s not fit for purpose,” Mayor of London Sadiq Khan told the Guardian newspaper last week.


Joe Ondrak, research and tech lead for the U.K. at tech company Logically, said that while the Online Safety Act contains provisions addressing disinformation, they are far from perfect.


While the law "does have some very specific provisions about certain types of disinformation in it," including disinformation spread by foreign state actors, it "doesn't cover really comprehensively domestic disinformation," Ondrak said.
