CEO Melanie Dawes highlights her intention to strongly enforce the Online Safety Act, which comes into force in 2025. Not everyone is convinced.
In an interview with the Financial Times [subscription needed], the head of the UK’s telecoms and media regulator has said she is ready to take “strong action” if Big Tech doesn’t comply with a law that comes into force next year. Melanie Dawes, CEO of Ofcom, said in the interview that the Online Safety Act addresses many of the issues that contributed to rioting across the country in August.
Violence spread after the murder of three little girls in Southport, in the north-west of England. It was falsely stated on X that the murders had been carried out by a named Muslim asylum seeker and that the authorities were hiding the truth. Government officials have also claimed that X did not cooperate with requests to take down inflammatory and false material.
The owner of X, Elon Musk, has criticised the UK for obstructing freedom of speech and got into a spat with the UK’s Prime Minister, Sir Keir Starmer.
Dawes told the FT that the law is the first of its kind anywhere in the world and will require websites to set and enforce clear content moderation policies and remove illegal content. Ofcom will have the power to fine companies that violate the Act and even close them down, although a number of commentators have voiced doubts about a government agency having the wherewithal to take on some of the richest and most powerful individuals and companies in the world.
Not comprehensive enough?
In addition, the Act is woolly. Under the Online Safety Act, anyone who sends messages they know to be false with the intention of causing “non-trivial psychological or physical harm” can be prosecuted, and in the wake of the riots a number of people were prosecuted for this offence.
Confusingly, on the other hand, the Act does not give the authorities the power to deal with “legal but harmful” content. Dawes said it was up to Parliament to decide whether it should be illegal to spread lies. She also said it could sometimes be difficult to decide whether someone had made a mistake rather than deliberately promoting something they knew to be untrue. Well, quite. But isn’t that why we have courts: to decide such matters?
Also, surely the whole point is to stop people posting material without knowing whether or not it is true, and to make them aware that there will be consequences for doing so?
Yesterday Abdul Hai, who was acquitted of murdering the teenager Richard Everitt in 1994, told The Guardian that he is considering legal action against X after Tommy Robinson, a far-right agitator, posted that Hai had been convicted of the crime. It will be interesting to see what, if anything, comes of that, and to ask why it should be down to an individual to take on X for promoting an apparently straightforward lie.