Speech Police Are Coming To Social Media

The debate about what one can say online is once again heating up globally. Twitter, the favorite network of politicians and the press, is under mercurial new management from Mr Musk, a self-proclaimed free-speech absolutist who has restored the accounts of previously banned users such as Mr Trump. Meta, a larger rival, is reportedly preparing its own text-based network for launch this summer. Social-media platforms face a test over the next 18 months as the US presidential election, one of the world's great festivals of online bile and misinformation, approaches.

Politicians and judges are stepping into the fray with regulatory proposals. With Congress deadlocked, America's state legislatures and its courts are drawing new lines around the limits of speech. In Europe, legislatures have gone further. These moves are prompting governments in less democratic parts of the world to write their own new rules. What can be said and heard online is being scrutinised as never before.

Monitoring the online public square is a difficult task. Things have calmed down a bit since Mr Trump left office and the wave of covid-19 misinformation subsided. Even so, last year three of the world's largest social-media platforms—Facebook and Instagram, owned by Meta, and YouTube, owned by Google—removed or blocked 11.4bn posts, videos and user comments. Automated filters catch most of this, but Meta and Google also employ more than 40,000 content reviewers between them.

(Graphic: The Economist)


Much of this housekeeping isn't controversial: 90% of the posts Facebook, the largest network, removed last year were simply spam. But many of the remaining moderation decisions are tough (see chart). In the latest quarter, Facebook removed or blocked 10.7m posts it deemed hate speech and 6.9m it deemed bullying, both concepts that leave room for disagreement. In one recent quandary, Meta ordered a review into whether it has been overzealous in policing an Arabic word usually translated as "martyr", whose meaning can shift with context. Platforms have mostly been left to figure out these kinds of problems on their own.

Now, though, politicians are intervening. In the US, Democrats and Republicans agree that social networks are doing a poor job of moderation, and that it is time to reform Section 230 of the Communications Decency Act, which shields online platforms from liability for content posted by their users (with exceptions, such as content related to sex trafficking). But they disagree completely about what to do instead.

Democrats, who accuse tech billionaires of fomenting anger and misinformation for clicks, want platforms to remove more content. Republicans, who think Californian busybodies are muzzling conservatives, want them to remove less. (According to the Pew Research Center, a US think-tank, voters who suspect tech firms of favouring liberal views over conservative ones outnumber those who suspect the opposite by roughly three to one.) The result is a congressional impasse.

The Supreme Court had a chance to tinker with Section 230. But on May 18, in judgments on two similar cases involving YouTube and Twitter, which had hosted content uploaded by terrorists, it declined to change the status quo, rejecting the idea that online platforms are responsible for crimes committed by their users. NetChoice, a tech lobby group, described the decision as a "huge victory for free speech on the Internet". Section 230 is safe for now.

With no luck at the federal level, reformers on the left and right are focusing on the states. Last year California passed a law forcing tech companies to collect less data from children, among other things. Several states have passed or proposed laws requiring under 18s to obtain their parents’ permission before using social media. On May 17, Montana banned TikTok outright over its Chinese ownership (TikTok is suing and hopes to win).

Most controversially, in 2021 Florida and Texas, both Republican-controlled, passed laws restricting social networks' ability to moderate political speech. Courts have upheld the Texas law and struck down the Florida law, setting the stage for a return to the Supreme Court, which is expected to take up the cases later this year. "If the court opens the door to regulation in this space, many [states] would jump at the opportunity," says Evelyne Douek of Stanford University.

They will have two models across the Atlantic to follow. The EU's Digital Services Act (DSA), passed in July 2022, comes into force next year. The UK's Online Safety Bill, four years in the making, is expected to be enacted later this year. Both take a different approach from America's. Rather than changing who is liable for online content (the question at the heart of the Section 230 debate), they force platforms to carry out a kind of due diligence to keep bad content to a minimum.

The DSA requires online platforms to set up complaint-handling procedures and demands that they tell users how their recommendation algorithms work, allowing users to change what they are shown. Smaller platforms, defined as those with fewer than 45m users in the EU, are exempt from some of these obligations, to stop them drowning in red tape (some larger rivals have warned this could make them havens for harmful content). For platforms large enough to qualify for full inspection, the DSA represents "a significant financial burden," says Florian Reiling of Clifford Chance, a law firm. Of those big enough to face the deepest checks, only one—Zalando, a German e-commerce site—is European.

Twitter, which has cut its staff by about 80% since Mr Musk took over in October, could be among those that struggle to meet the DSA's requirements. Mr Musk appears to have taken over from Meta's boss, Mark Zuckerberg, as social media's biggest villain in the eyes of some. On 26 May Thierry Breton, a European commissioner, tweeted that Twitter had abandoned the EU's voluntary code of conduct against misinformation. "You can run but you can't hide," he warned.

The UK’s parallel legislative effort is shaping up to be more far-reaching. The online safety bill was conceived in 2019 following the suicide of a 14-year-old boy who had consumed algorithmically recommended depressive material. After four prime ministers, the text of the bill has almost doubled in length. One US tech firm dubbed it “one of the most complex bills we have faced anywhere in the world”.

It goes further than the EU in its loosely worded requirement that platforms proactively screen content. Large social networks already screen uploads for known child-abuse material. But subtler crimes, such as incitement to violence, are hard to detect automatically. The scale of some platforms—users upload 500 hours of video to YouTube every minute—means that strict requirements to pre-screen content could choke the flow of new uploads.

As in the US, British conservatives are concerned about the over-removal of right-wing views. The bill therefore imposes a duty to ensure that moderation is applied consistently across a wide diversity of political opinion. Similarly, the EU has promised protections for media outlets against the unwarranted removal of their online content as part of its upcoming European Media Freedom Act, a response to crackdowns on the press in member states such as Hungary and Poland.

The most controversial part of the UK bill, a requirement that platforms identify "legal but harmful" content (for example, material that encourages eating disorders), has been dropped where adults are concerned. But a duty to limit such content's availability to children remains, which in turn implies the need for extensive age screening. Tech firms say they can guess users' ages from things like their search history and mouse movements, but a strict duty to verify ages would threaten anonymity.

Some suspect that the real objection is the price. "I don't think 'it costs money and it's hard' is an excuse," says Keily Blair, chief operating officer of OnlyFans, a porn-focused platform that checks the ages of its users and does not see why others cannot do the same. Yet some platforms are adamant: the Wikimedia Foundation, which runs Wikipedia, says it has no intention of verifying users' ages.

The stricter the regulations in jurisdictions such as the UK or the EU, the more likely tech companies are to offer different services there, rather than applying uniform rules worldwide. Some may leave altogether. WhatsApp, a Meta-owned messaging app, says it is unwilling to break its end-to-end encryption to meet a requirement in the UK bill that companies scan private messages for child-abuse material. It may not have to: the bill would only let Ofcom, the regulator, demand such scanning in cases where the measure was judged proportionate. Nevertheless, threats of quitting are becoming more common. On 24 May Sam Altman, head of OpenAI, said he would consider leaving the EU if regulation of artificial intelligence went too far (he later retracted the comment).

Network effects

Whether or not Britain or the EU makes full use of its new powers, the two set a precedent for countries that will wield them less scrupulously. The UK bill, which proposes prison for executives of companies that break the rules, is "a blueprint for repression around the world", according to the Electronic Frontier Foundation, a civil-liberties group. Such laws are "pioneering" only in the sense that they give non-democracies a template for passing repressive laws of their own.

All they need is a little encouragement. Turkey ordered Twitter to censor information during the recent election; Mr Musk, a free speech enthusiast, complied. Brazil has proposed a “fake news law” that would punish social networks for failing to identify misinformation. Based on European legislation, it has been dubbed the “DSA of the Tropics”. India is to publish an internet-regulation bill in June that will reportedly make platforms liable for users’ content if they do not agree to identify and trace those users when so directed.

The international impact of the British and EU proposals is informing the debate in the US. "No matter how much you think social networks are corrupting American politics," says Matthew Prince, head of Cloudflare, an American networking firm, "they are incredibly destabilizing to other regimes that are inimical to the interests of the United States."

© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com


Updated: May 30, 2023, 12:21 PM IST