Tuesday, January 7, 2020

Singapore must look beyond online falsehood laws as elections loom


There is a new standing joke amongst my friends. Whenever any of us attempts to criticise a government decision or policy, words of caution are quickly dished out: "Oooh, be careful or you'll be POFMA-ed."

It's perhaps indicative of how a piece of legislation initially established as a way to safeguard national security has, so far, been tapped for lesser reasons, when it really should be tested against bigger threats. This is especially urgent as Singapore prepares for its first elections since the emergence of new technologies, such as deepfakes, and amid evidence of increased online interference.

POFMA, short for the Protection from Online Falsehoods and Manipulation Act, quickly became a buzzword after it was invoked in quick succession soon after it came into effect in October. The Bill was passed after a brief public debate and amidst strong criticism that it gave the government far-reaching powers over online communication and would be used to stifle free speech as well as quell political opponents.

The government had publicly assured that the Bill was not intended to stifle free speech, debate, and criticism, but observers noted that the legislation itself provided no such assurances limiting how it could be used, and contained "broadly worded clauses" in defining what was deemed a false statement and what constituted public interest.

Within a month of taking effect, the legislation was used to issue five correction directions--four of which targeted politicians or political platforms that opposed the government. The fifth was issued to Facebook after the author of the alleged falsehood refused to comply with the original correction direction.

During a parliamentary sitting on Monday, when asked about how POFMA appeared to be applied, Communications and Information Minister S. Iswaran said: "That is a convergence. Some might say unfortunate convergence or coincidence. It may also indicate a certain pattern of communication that exists out there."

The minister also rejected suggestions that the government was focused only on online falsehoods that were political in nature, adding that the POFMA Office, which administers the legislation, was monitoring for false statements but had limited resources.

Just enough resources, it seems, to dish out five correction directions in four weeks.

A lot has already been said about the government's use of POFMA thus far. Some commentators note that this piece of legislation has become a way for the government to force its right of reply and ensure its side of the story is told--all without the tedious task of bringing a publication to court.

Others have pointed out that the statements alleged to be false were never actually proven to be so by the government, which instead countered them with its own point of view, without providing all the data or facts to support its rebuttals.

Most importantly, for me, the government seems to be missing the point in its current use of POFMA.

Lest anyone forget, the legislation was mooted as a way to "protect society" against online false news created by "malicious actors". In explaining the need for the legislation, the Law Ministry had argued that false news could be used to divide society, spread hate, and weaken democratic institutions.

So far, none of the parties that were issued correction directions fit this description. 

It should be noted, too, that many, including the government's political opponents, support the need for POFMA, especially amid the rise of state actors working to influence elections worldwide.

Real threat lies online, beyond local opponents

A new batch of documents from Cambridge Analytica is expected to be leaked in coming months, revealing a global operation spanning 68 countries that worked to manipulate voters on "an industrial scale". This included links to material on elections in countries such as Malaysia and Brazil. 

At the GovWare conference in October, speakers warned that cyber attacks would continue to target elections to shape political agendas and sway public opinion. Moving forward, these attacks would be increasingly sophisticated and organised.

The Cognitive Security Institute's executive director Tim Hwang noted that the interference tactics used during the 2016 US elections had no clear strategy and did not use advanced algorithms. State actors at the time used online ads to spread propaganda and "classic" cyber attacks to discredit politicians and organisations. They also tapped bots to spread their false messaging.

These tactics would not remain the same, Hwang warned, noting that numerous state actors had invested in new disinformation techniques. He urged governments to recognise this new landscape of offensive capabilities.

Specifically, he highlighted several key trends that were emerging. First, the rise of synthetic media, such as deepfakes and bots, to create seemingly believable videos and engage citizens in conversations that appeared legitimate. As such technologies advanced, he said, election interference campaigns would become more believable, and state actors would look to automate such tools to expand their reach.

They also would transition to smaller and more private platforms, moving conversations from Facebook to fringe social media networks such as Gab, and from Twitter to WhatsApp, he said. This made it tougher to detect bad actors. In addition, some of these smaller platforms were hotbeds for already radicalised users, making it easier to launch disinformation campaigns.

With the goal then to prevent the spread of such campaigns, Hwang noted that some governments might choose to exert more control, such as requiring a greater level of review before information could be shared online or limiting the presence of public domains that failed to meet certain standards.

However, such heavy-handed controls were difficult to implement in democratic societies that depended on public discourse. In fact, the state's interference could erode public trust, he said. 

In other words, overreliance on laws such as POFMA could lead to fatigue, and citizens might end up distrusting and ignoring correction directions, especially if these appeared next to any article that was critical of their government.

Hwang acknowledged that getting the balance right was tricky, noting that the ultimate goal was to guide the public to recognise falsehoods themselves, so such campaigns could be foiled and prevented from spreading without state interference. This approach also could prove more powerful, since it would not require the government to keep pace with changing tactics.

He further recommended creating public detection systems to alert the public when disinformation campaigns were playing out, much like sirens that warn of an impending storm. This would ensure citizens were aware of such activities and could play a role in uncovering these operations.

Countries also should carry out research and measure their resilience against disinformation, so gaps could be plugged.

And with the growth of such campaigns, vendors such as Blackbird.AI have popped up peddling tools powered by artificial intelligence (AI) to identify patterns of disinformation and help governments and citizens detect false content online, including memes and videos. 
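To make the idea concrete, here is a minimal sketch, in Python, of what pattern-based scoring of false content can look like in its simplest form. It is purely illustrative: the toy corpus, labels, and falsehood_score function are all my own assumptions, and none of it reflects how Blackbird.AI or any other vendor actually builds its products, which rely on far larger datasets and far more sophisticated models.

# Toy illustration only: a bag-of-words classifier trained on a tiny,
# hand-labelled corpus. The examples, labels, and scoring function are
# hypothetical and not how any commercial disinformation tool works.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-labelled toy corpus: 1 = resembles a known falsehood, 0 = benign.
texts = [
    "Secret document proves the vote was rigged, share before it is deleted",
    "Leaked memo shows officials hiding the truth, forward this now",
    "Banks will freeze all accounts at midnight, warn your family",
    "Parliament sat on Monday to debate the transport budget",
    "The ministry published its annual report on education spending",
    "Researchers released a study on urban traffic congestion",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and bigram frequencies serve as crude "pattern" features.
vectoriser = TfidfVectorizer(ngram_range=(1, 2))
features = vectoriser.fit_transform(texts)

model = LogisticRegression()
model.fit(features, labels)

def falsehood_score(text: str) -> float:
    """Return a 0-1 score for how closely the text matches learnt patterns."""
    return model.predict_proba(vectoriser.transform([text]))[0][1]

for post in [
    "Share this leaked document before officials delete it",
    "New bus routes were announced by the transport ministry today",
]:
    print(f"{falsehood_score(post):.2f}  {post}")

The gulf between this sketch and a production system is, of course, vast: real tools work across languages, images, and coordinated networks of accounts, not six lines of toy text. But the underlying premise is the same--machines flag likely falsehoods at scale, rather than officials reviewing them one by one.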

Holding laws to a higher standard

The Singapore government is right. There are malicious actors at play and we need to guard against attempts to spread online falsehoods to divide society, spread hate, and weaken democratic institutions. 

POFMA was created for this very reason, and it should be used to safeguard citizens only when there is a real threat to their society and national security. And there are bigger cyberthreats out there than political opponents--threats with access to more advanced tools and the potential to do serious, real damage to social cohesion.

As the Singapore Democratic Party suggests: "POFMA, like this government, should be held to higher standards."

And as Hwang points out, legislation alone isn't enough and may not always be effective in thwarting disinformation. Citizens need to be discerning about the statements they read online, and they should be given some guidance to do so. When they can judge for themselves what is a falsehood and what is not, laws such as POFMA won't need to be invoked at all, and government resources can be diverted to where they can yield more value for citizens.
