By Alexander J Martin, technology reporter
A new online regulator will fine web companies that fail to protect users, and possibly even block offending websites from being accessed within the UK, according to new government plans.
Companies that run social media platforms, file hosting sites, discussion forums, messaging services and search engines will become responsible for any harmful material which they allow their users to share or discover.
This harmful material includes content with a "clear definition", such as child sexual abuse and terrorist material, as well as content without a clear definition, such as cyberbullying and disinformation.
Although the lack of a strict definition of "harmful material" has prompted civil liberties campaigners to express concerns about the plan, Culture Secretary Jeremy Wright said this would give the new regulator enough flexibility to tackle new harms.
Technology companies "had their chance to put their own house in order" but failed to do so, said Home Secretary Sajid Javid, launching a consultation on the plans.
"For too long they have failed to go far enough and fast enough to help keep our children safe. They have failed to do the right thing – for their users, for our families, and for the whole of society.
"And they have failed to show the moral leadership we expect of those trusted with the right of self-regulation."
Mr Wright agreed: "The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough."
Teenager Molly Russell took her own life six days before her 15th birthday in 2017 after viewing self-harm and suicide material on Instagram.
Her father Ian said the images "helped kill her" and although the family's subsequent campaign forced the social media company to ban "graphic self-harm or graphic suicide related content on Instagram", a Sky News investigation found such material remained on the platform.
Sky News also found YouTube videos celebrating the New Zealand mosque shootings were easily evading the platform's moderation efforts, despite a general clampdown across social media platforms.
Speaking to Sky News' home editor, Jason Farrell, Mr Javid said he was shocked when Facebook told him that it wasn't able to prevent the live-streaming of the terror attack on Facebook Live.
He said he had recently pressed the technology companies on tackling the live-streaming of child sexual abuse, and when they admitted they had not considered how to prevent the streaming of a terror attack, he realised they were not capable of self-regulation.
The new proposals follow the government's pledge to make the UK "one of the safest places in the world to be online" after a number of scandals in which harmful content on social media was blamed for causing damage offline.
The proposals on online harms, drawn up by the Home Office and Department for Digital, Culture, Media and Sport, say a regulator will be appointed to ensure companies meet their responsibilities.
The regulator would be there to monitor a new duty of care, which will give companies a legal responsibility to ensure the safety of their users.
If a company is found to have fallen short of these standards, the regulator could impose a "substantial" fine, block its sites from being accessed within the UK, or even make individual members of senior management legally liable.
The costs for the new regulator are unclear, although the government said it hopes that its funding would come from the technology sector itself and not the public purse.
"To recoup the set-up costs and ongoing running costs, the government is considering fees, charges or a levy on companies whose services are in scope," stated the white paper.
"This could fRead More – Source
[contf] [contfnew]
Sky News
[contfnewc] [contfnewc]