LONDON, Nov 28 (Reuters) - Britain will not force tech giants to remove content that is "legal but harmful" from their platforms after campaigners and lawmakers raised concerns that the move could curtail free speech, the government said on Monday.
Online safety laws would instead focus on the protection of children and on ensuring companies removed content that was illegal or prohibited in their terms of service, it said, adding that it would not specify what legal content should be censored.
Platform owners, such as Facebook-owner Meta and Twitter, would be banned from removing or restricting user-generated content, or suspending or banning users, where there is no breach of their terms of service or the law, it said.
The government had previously said social media companies could be fined up to 10% of turnover or 18 million pounds ($22 million) if they failed to stamp out harmful content such as abuse even if it fell below the criminal threshold, while senior managers could also face criminal action.
The proposed legislation, which had already been beset by delays and rows before the latest version, would remove state influence on how private companies managed legal speech, the government said.
It would also avoid the risk of platforms taking down legitimate posts to avoid sanctions.
Digital Secretary Michelle Donelan said she aimed to stop unregulated social media platforms damaging children.
"I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people," she said. "It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views."
Britain, like the European Union and other countries, has been grappling with the problem of legislating to protect users, and in particular children, from harmful user-generated content on social media platforms without damaging free speech.
The revised Online Safety Bill, which returns to parliament next month, puts the onus on tech companies to take down material in breach of their own terms of service and to enforce their user age limits to stop children circumventing authentication methods, the government said.
If users were likely to encounter controversial content such as the glorification of eating disorders, racism, anti-Semitism or misogyny not meeting the criminal threshold, the platform would have to offer tools to help adult users avoid it, it said.
Only if platforms failed to uphold their own rules or remove criminal content could a fine of up to 10% of annual turnover apply.
Britain said late on Saturday that a new criminal offence of assisting or encouraging self-harm online would be included in the bill.
($1 = 0.8317 pounds)
Reporting by Paul Sandle; Editing by Alex Richardson