Thursday, October 15, 2020

Twitter Drops the Big One

I honestly did not want to write another post about social media moderation/counter-disinfo for a while, especially about Twitter moderation. But, as Jeff Goldblum’s Ian Malcolm once said, life finds a way. So here we are. Within barely a day, Twitter has crossed a Rubicon in one of the most (characteristically) incoherent and self-defeating ways possible. There are certainly precedents for what it has just done, but none of them reflect well on the decision – a decision with far-reaching consequences, and one the company is already trying to do damage control over. So what happened? Why should you care? I plead ahead of time for readers’ patience and understanding, because making sense of this bizarre and convoluted fiasco involves the impossible task of rendering it more coherent on the page than it actually is.

Let’s make this part as short as possible. The New York Post published a flimsy story about Hunter Biden, which Twitter promptly URL-blocked.

The Post story made unconfirmed allegations of influence peddling by Hunter Biden, the son of Democratic presidential candidate Joe Biden, with a Ukrainian energy company. The story was based on purported emails from Hunter Biden that were retrieved from a laptop said to have been abandoned at a repair facility and then reportedly supplied to President Donald Trump’s attorney, Rudy Giuliani, who gave them to the Post. Skeptics noted that the Ukrainian energy company in question, Burisma, had been previously hacked by the Russian military, and raised concerns that the emails the Post article was based on could have been the result of a Russian disinformation effort… Earlier in the day, Twitter said the Post report violated its policy on hacked material. Twitter users who attempted to paste the article’s URL into a tweet instead saw a message reading: “We can’t complete this request because this link has been identified by Twitter or our partners as being potentially harmful.”

Twitter did not let you share it publicly. It did not even let you share it privately via direct message. And, to boot, Twitter locked the New York Post’s account, as well as the accounts of political figures, journalists, and various prominent people and entities that tweeted links to the story or screenshots from it. When I tried to share the Post story just to test whether it was blocked, I received a message with language usually reserved for malware-infested sites or illegal pornography: “we can’t complete this request because this link has been identified by Twitter or our partners as being potentially harmful.” Other users, such as MIT professor Dean Eckles, received similarly false explanations when trying to send the article via direct message: “this request looks like it might be automated. To protect our users from spam and other malicious activity, we can’t complete this action right now.” Eckles observed that Twitter was literally using misinformation to fight misinformation: “it is absurd that Twitter’s way of dealing with the NYPost story involves them literally sending misinformation to anyone who tries to send it. This notice, when trying to send the article as a direct message, is a lie.”

This morning, I tried to share this article (released today) to test the block and got the same response as yesterday. The House Judiciary Committee Republicans are upping the ante by posting the full text of a Post article on a government website. And as various GOP-affiliated entities tested Twitter en masse, many of them were either blocked outright or served with content warnings. In a follow-up comment, Twitter declared that the story contained personally identifying information and violated its rules against spreading hacked material. Following intense and predictably partisan-coded outrage, Twitter CEO Jack Dorsey backpedaled and undercut his own company’s moderation team by declaring that URL-blocking was not “acceptable” without proper communication and context. But what context could there be? The problem is not poorly communicating a sensible standard; the issue is whether the standard is sensible to begin with. Twitter needs a damned good reason to stop people from even sharing links to a major news story in private messages. Does it have one?

The New York Post story itself, as of the writing of this post, looks flimsy and irresponsible at best and complicit in a targeted disinformation operation at worst. The story is about materials found on a laptop hard drive that the Post claims was “dropped off at a repair shop in Biden’s home state of Delaware in April 2019.” Fishy? Almost certainly. Security blogger Krypt3ia’s comments sum the whole thing up quite well.

This story is riddled with holes and innuendo but, may have some kernels of truth. But all a good disinformation warrior needs to carry out a disinformation campaign, is that Russian formula of 80/20 disinformation to real information, so this story certainly fits that model. …So, yeah, a laptop of uncertain provenance, in the hands of an anonymous computer repair guy, say’s he found incriminating data on the hard drive, and it was subsequently taken by the FBI. Of course the laptop, who brought it in, and who it belonged to are all quite unknown as the anonymous computer guy fails to give any details such as he should have, ya know, like a receipt or a write up of who it belonged to and at least the number he tried to call right? …Lastly, let me just say this, all of this story screams no chain of custody, and a large probability of tampering, hacking, disinformation creation and propagation by forces yet to be seen.

This poses some problems, to put it mildly, for Twitter’s case for its actions. If we narrowly interpret this to mean only “distribution of hacked materials,” as per the wording of the 2018 Twitter elections policy, the New York Post could potentially benefit from an exception Twitter carves out for “reporting on a hack, or sharing press coverage of hacking.” Twitter chose not to apply this exception because of the presence of private information inside the materials (which the Post did not redact). But was the material, in fact, actually hacked? Twitter does not even know if the material is genuine to begin with! Twitter is implicitly claiming that the information obtained by the Post was “hacked” despite – as Krypt3ia notes – the data’s provenance and validity being unknown. Does Twitter know something we don’t in using anti-hacking stipulations to justify blocking the URL?

What does Twitter know about how the materials on the laptop were obtained, or about their validity? Is it blocking the material pre-emptively because it could have been hacked? If so, Twitter has not bothered to say anything about it. For all we know, the material could have been illegally obtained somehow. But this is hard to know when none of it has, as of this writing, been independently verified by anyone other than the (journalistic) hacks at the Post and their confederates. Twitter knows absolutely nothing about the nature and origin of the materials found on the laptop, which it is URL-blocking because, supposedly, they are hacked. Again, if the material need only potentially be hacked to justify throttling links, Twitter might justifiably invoke its moderation protocols, but it has offered no such justification. As of this writing, significant uncertainty remains about the nature of the material on the hard drive and how it was obtained.

So Twitter’s decision can potentially be read as a de facto prohibition on news stories that use unauthorized private materials as primary sources. Or, in this case, anything that pre-emptively appears to be unauthorized private materials, even if it could very well turn out to be forged. Twitter is risking the collapse of a basic distinction: between breaking into the Watergate to extract dirt and finding dirt in misplaced files from the Watergate. But that is not the only issue. Twitter justified URL-blocking the Post story because it contained unauthorized private information. But if the standard is “distributing content obtained without authorization,” especially “illegally obtained materials” containing private information, many other stories would have been caught in the dragnet had the standard been applied retroactively: all of Wikileaks’ dumps, as well as the Panama Papers, the Snowden leaks, the 2016 DNC leaks, and even things stretching as far back as the 1971 Pentagon Papers. And that is just a tiny sampling.

Leaked material, much of which has at best ambiguous provenance, is the bread and butter of mainstream journalism. And Twitter just potentially made all of it subject to software sanction. Note that the New York Post excerpted the leaked material rather than releasing it in full, which potentially exposes far more prestigious journalistic outlets (such as the New York Times, Washington Post, and Guardian) to the same restriction. Is this a standard that can be applied even remotely uniformly in an age of news stories driven by hacked and leaked materials? I, for one, see the Snowden and Wikileaks leaks as illegitimate in the same way many observers see the DNC hacks/leaks as illegitimate. Yet there is almost certainly far more support for the former than the latter. Of course, we are not in a universe in which we can test retroactive application of a policy that – like everything in social media moderation – is largely being improvised on the fly.

To be sure, it is possible to see this as a significant overcorrection to Twitter’s ongoing problem of shielding high-profile public/institutional accounts from moderation. High-profile politicians like, say, President Donald J. Trump are often given flexibility in cases where ordinary users would be harshly punished. So one could see the New York Post fiasco as simply a heavily flawed attempt to rectify this problem. If this were all that was at stake, I would have much greater sympathy for Twitter and tolerance for errors and inconsistencies. Because moderation of this sort is unusually hard!

Consider the following scenario to see why. Two college students – an 18-year-old woman from Taiwan and a 20-year-old man from mainland China – are dating. The relationship is not going well. The man threatens to “wipe her out” if she leaves him. The threat is posted on social media. The case looks relatively straightforward. Now make the 18-year-old Taiwanese woman the 64-year-old Taiwanese President Tsai Ing-wen, and make the Chinese counterpart the government mouthpiece Global Times. Last month, the Global Times posted the following:

Voice:Taiwan leader Tsai Ing-wen, who pledged deeper ties with the US at a dinner for a visiting senior State Department official,is clearly playing with fire. If any act of her provocation violates the Anti-Secession Law of China, a war will be set off and Tsai will be wiped out.

The tweet, in case readers thought it too subtle, contained a photograph of Tsai alongside the bellicose text. Given that any war between Taiwan and mainland China would likely begin with an attempt to kill Tsai Ing-wen and other senior leadership, “wiped out” should not be taken as a mistranslation or a literary term of art. It means what you think it means. And Tsai Ing-wen has a Twitter account. The tweet is, as of this writing, still up. Why? Because Twitter has a significant problem applying rules and interaction norms originally developed for interpersonal conflicts between users to institutional actors (such as loosely state-controlled press entities like the Global Times) and to individuals representing institutions (like President Tsai Ing-wen). A threat from one user (acting as a proxy of one government) against another individual user (the president of another government) is something Twitter is ill-equipped to handle.

Like it or not, conditional threats of physical harm are a part of international politics. During the Gulf War, for one, the United States threatened to respond with nuclear strikes and to “eliminate” the Iraqi leadership if Baghdad used chemical weapons. And if Tsai had some means of physically threatening the personal safety of Chinese leader Xi Jinping to deter a Chinese invasion of Taiwan, she would absolutely be remiss in not using all of the means at her disposal – Twitter included – to publicly communicate that he would be harmed if Beijing dared to put a single rifleman on Taiwanese beaches. Twitter is banned in China, but Chinese diplomats have accounts. And if Xi operated an account and Tsai made a vague but nonetheless firm threat to physically harm him should war break out, should she be penalized?

Understandably, Twitter has a severe problem navigating these public/private boundaries. The service, like many others, cannot decide whether individual users are the most important stakeholders or merely the proverbial grass that gets trampled when elephants fight. Twitter can give itself wiggle room by carving out exceptions for the public interest or for national/international politics, but the problem never goes away. And regardless, Twitter’s very design probably prevents any decision from acquiring true legitimacy among a diverse userbase. This makes the already intractable problem of content moderation even harder. So I ordinarily would have some sympathy. Yes, Twitter makes mistakes, but error is inevitable. We have to look at the distribution of error rather than error itself.

Instead, what is actually most risible about the story is the combination of a vague and uneven explanation with the subversion of anti-abuse/anti-spam tooling. As Jon Stokes noted earlier:

Ultimately, network theory is leading people astray, I think. Modern-day 2020 information control – including outright deleting unpopular ideas from megaplatforms – is not about DNS or hyperlinks or any kind of graph. None of the info control uses that apparatus because that stuff is outmoded if you’re trying to control what people can & cannot see. Instead, every major platform has a sophisticated anti-malware/SPAM system – a kind of platform immune system – and THAT is what’s being brought to bear on lawful speech to remove it. Free speech vs. censorship in 2020 does not look like “it was deleted over here, so I republished over there & avoided a chokepoint”. No, it looks like “I attempted to publish it, & then the mod tools + anti-SPAM/malware tools nuked it instantly, & I can’t even link it.”

Put differently, conversation about moderation and content filtering has gotten bogged down in the miasma of “freedom of speech vs. freedom of reach.” The debate is misleadingly framed as a question of whether speech deserves viral attention. But, as Stokes points out, this framing ignores the underlying issue: platforms have sophisticated anti-abuse tools that can instantly and pervasively block content from being shared at all. In the span of a few hours, Twitter escalated its use of those tools to block news stories from a major American daily newspaper. In a world where a few companies are becoming de facto content cartels, the creep of anti-abuse/anti-spam toolsets into the primary means of exercising moderation controls should concern anyone who cares about speech online. Eckles’ experience earlier – trying to perform a seemingly innocuous action and then being blatantly lied to about why the service did not allow it – defined the overall user experience during the initiation of the URL blocking.
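To make the mechanism Stokes describes concrete, here is a minimal sketch of how this kind of link gating plausibly works. To be clear, this is not Twitter’s actual code: the names (URL_BLOCKLIST, check_outgoing_message) and the blocklist entries are hypothetical, and a production system would canonicalize URLs, resolve shorteners, and query reputation services rather than consult a static set. The point it illustrates is Stokes’: the same chokepoint that screens for malware and spam can block an arbitrary link, and the sender sees only a generic error no matter why the domain was listed.

```python
import re
from dataclasses import dataclass

# Hypothetical blocklist. In a real anti-abuse system this would be a
# reputation service fed by malware and phishing feeds -- and, per the
# episode above, by manual moderation decisions riding the same rail.
URL_BLOCKLIST = {
    "evil-phish.example",  # a classic anti-abuse entry
    "nypost.com",          # an editorial decision reusing the pipeline
}

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)


@dataclass
class Verdict:
    allowed: bool
    user_message: str = ""


def check_outgoing_message(text: str) -> Verdict:
    """Gate a tweet or DM before it is published.

    Every domain in the message is checked against the blocklist. The
    sender gets one generic error regardless of *why* a domain was
    listed: malware, spam, or an editorial call all look the same.
    """
    for match in URL_PATTERN.finditer(text):
        domain = match.group(1).lower().removeprefix("www.")
        if domain in URL_BLOCKLIST:
            return Verdict(
                allowed=False,
                user_message=(
                    "We can't complete this request because this link "
                    "has been identified as potentially harmful."
                ),
            )
    return Verdict(allowed=True)


if __name__ == "__main__":
    print(check_outgoing_message("read this https://nypost.com/some-story"))
    print(check_outgoing_message("lunch at noon?"))
```

The design choice worth noticing is the single opaque error string: it is cheap to implement and resistant to probing by actual spammers, but it is also exactly what produced the misleading notices Eckles documented once the same machinery was turned on a news link.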

And as a result, the actual function of the anti-abuse/anti-spam tooling is now degraded. I wish I could say I was surprised. I was shocked and angered, but not surprised. In truth, all of this follows a precedent Twitter set with the now-infamous “blue check” that denotes a verified account. Blue checks were originally intended as a security measure. But because Twitter mostly awarded them to celebrities and other people of “public interest,” their utility as a verification measure was always limited. Then Twitter revoked blue checks after submitting to public outrage over atrocious user behavior, turning the blue check into a de facto status symbol. Today, whatever Twitter’s intentions, having a blue check does not really serve the purpose of authenticity and verification.

Allowing critical platform functions to lose their original purpose and justification via (primarily reactive) mismanagement is a Twitter hallmark. So if Twitter ever needs users to take seriously a warning that a given link goes to a genuinely “harmful” website, it now has to work twice as hard. This is the cost of, well, abusing anti-abuse features! To be sure, I doubt this individual moderation case will even make next week’s news cycle. We are in the middle of a pandemic (part of a generalized omni-crisis) and about to enter what could well be a disputed election. There are plenty of scandals and outrages, past and present, to get worked up about, and it is doubtful that this story, or Twitter’s stifling of it, will have much if any impact on the race. But Twitter’s irresponsible decision confirms its critics’ worst fears and inflicts a dramatic, entirely self-inflicted injury. Twitter has escalated to an extremely powerful moderation tool without a shred of coherent justification.

Very little of this figures into how pundits are discussing the story as it develops. For Steve Coll, for example, the press has “passed” the test it “failed” four years ago, and he cited the blocking of the New York Post story as part of that success. But Twitter – and other social media platforms – will desperately need the trust of users as they are barraged with rumors, misinformation, and deliberate info ops. And this is trust that, sadly, Twitter is squandering very rapidly with blunders like this. Trust among whom, though? People already invested in the presidential campaign of former Vice President Joseph Biden, and who are generally center-left/left-leaning, will likely see little or nothing wrong with what Twitter has just done. Growing pains and unfortunate mistakes, yes, but nothing more than that, right? These are not the people the platform really needs to convince.

Instead, the people from whom Twitter most needs legitimacy are those who are hostile, skeptical, ambivalent, or indifferent – not just toward Biden or liberal/progressive values, but quite frankly toward the entire moderation/counter-disinfo enterprise altogether. Twitter has not given them even scraps. And they are not the only ones who should be justifiably upset. Bruno Maçães summed the problem up well: “[Jack Dorsey] admits twitter cannot explain its decisions. But on issues of fundamental importance for democracy making a decision and explaining it are exactly the same. You cannot decide if you can’t explain.” The inability to explain is a function of the inability to make justifiable decisions. In my own personal opinion (I am, for readers who cannot already tell, virulently anti-Trump), what Twitter has just done makes any subsequent major moderation decision deserving of default distrust.

This is not the first time Twitter has abused low-level anti-abuse/anti-spam tooling in the service of a fundamentally dubious decision it has failed to properly explain. But in censoring a link from a legitimate news publication (albeit a tabloid-ish one that lacks the prestige of its peers), it has crossed a serious line, in ways that make comparisons to Chinese filtering inevitable even if strained. It is hard not to see almost all of the pathologies of contemporary moderation/counter-disinfo in this one case. When mis/disinformation spreads from a top-down source (such as a newspaper) rather than a bottom-up source (such as a Macedonian teenager), moderation becomes incoherent. And when the pressure to do something becomes overwhelming, platforms increasingly revert to rule-by-law rather than rule-of-law. Unfortunately, in due time something else will likely come along that makes this fiasco look like a tempest in a teapot.

Unlike some of my friends in the “fringe weirdo” set, I believe there is a legitimate purpose for counter-disinfo. But either the object of contestation (platforms) needs to change or the methods of contestation do, because this is simply unsustainable. If I had to pick the single thing in this fiasco that angered me the most, it is the abuse of anti-abuse/anti-spam tooling. Twitter directly and indirectly deceived users – Eckles being an exemplar – about why they were being restricted from sharing content. This is redolent of classic Bad Old Microsoft dark-patterns behavior. The problem is not just that Twitter cannot explain its decision-making. It is that, four years after the 2016 hack-and-leak operations, it has no way to deal with these situations other than deceiving users and treating them like children, weakening critical anti-abuse/anti-spam features in the process. That is shameful and inexcusable.


