Imagine one day you're going about your business, dropping your children off at school, going to work, and returning home with groceries for dinner. The next day, you're deemed "illegal" in the country you were born in and put on a list of "foreigners" scheduled to be deported unless you can prove your citizenship.
What's worse, your community is targeted relentlessly on Facebook with vicious abuse and exhortations that aim to finish you, your family, and your community off. Meanwhile, the social media giant is unable or unwilling to properly intervene and put a stop to the hate speech.
This isn't a dystopian exercise in fiction but the actual on-the-ground situation faced by the Bengali Muslim community in the northeastern Indian state of Assam, thanks to a breathtaking act of cruelty orchestrated by Indian Prime Minister Narendra Modi and his lieutenant Amit Shah, who have designed a program to strip close to 2 million people of their citizenship.
They and their party, the rightwing, nationalist BJP, enacted an election pledge via the National Register of Citizens (NRC) to throw out any Bengali Muslim in Assam who cannot prove their family was in the country prior to 1971, when Bangladesh, then known as East Pakistan, was formed. For background, India, Pakistan, and Bangladesh used to be one country prior to the British colonial endgame in the subcontinent that led to Partition. Assam has a population of 30 million, at least 10 million of whom are Muslim.
This in itself is a monumental tragedy for the Bengali Muslims. What's worse is that Facebook is allowing its platform to be used to foment a potential genocide by permitting hate speech that targets the Bengali minority, according to TechCrunch. Avaaz, a non-profit organisation that works on human rights issues, sifted through 800 posts on the platform related to Assam and the NRC for its report titled Megaphone for Hate: Disinformation and Hate Speech on Facebook During Assam's Citizenship Count, and found at least 100,000 comments that qualified as hate speech. These were viewed at least 5.4 million times and shared 99,650 times. The comments were a fountain of hate speech, ranging from the minority Muslim population being labelled as "criminals", "dogs", "pigs", "rapists", and "terrorists" to calls for their extermination. Other comments asked for people to "poison" the Muslim minority's daughters and for the legalisation of female foeticide. Avaaz senior campaigner Alaphia Zoyab said that Facebook is essentially being used as a "megaphone for hate" against the vulnerable minorities in Assam.
Facebook responded to the accusations by saying that it has always been committed to safeguarding the rights of minorities and marginalised communities around the world, including in India, and that content violating its policies would be removed as soon as Facebook was made aware of it. Clearly, these efforts have been utterly ineffectual based on the statistics unearthed by Avaaz.
"When we flagged the hate speech in Assam online using Facebook's online reporting tools, Facebook sent us back automated messages saying that this does not breach their community standards," Zoyab told Time magazine. "Facebook keeps saying it has a zero-tolerance policy toward hate speech, but Assam seems to prove that it's a one hundred percent failure."
If Avaaz's warnings seem like hyperbole, you only have to look back at Facebook's role in the extermination campaign against the Rohingya people by extremist Buddhists in Myanmar, who slaughtered and raped tens of thousands and drove close to 700,000 people to flee the country into Bangladesh. Facebook itself admitted that it had failed the Rohingya people. The Guardian notes that Facebook's product policy manager admitted in a blog post that the company had done very little to prevent this kind of abuse.
The problem is that Facebook relies on a relatively minuscule number of people to police its content. According to the TechCrunch report, Facebook, in a generalised answer to queries on the situation in Assam, said it has Assamese reviewers who review content in the language. But when you consider that the company employs around 35,000 people globally for trust and safety, with fewer than half responsible solely for content review across a user base of over 2 billion people, that's rather like using a watering can to fight a forest fire.
So what about AI, tech's new and luminous cure-all, in all of this? It simply isn't working, says Zoyab. "Our research shows that reliance is based on a false premise, because it assumes people are flagging hate speech, which then teaches its artificial intelligence systems. That's not happening," she told Time magazine.
Zuckerberg himself admitted to US lawmakers that it would probably take five to 10 years to develop AI tools that could negotiate basic linguistic nuances in different types of content before flagging it to Facebook content personnel. "Today we're just not there on that," he said.
So, what we have is ineffective AI and a tiny number of content reviewers to stymie some of the most dangerous avalanches of hate speech residing on the company's own platform. This is the kind of policing that, by the company's own admission, has already contributed to one genocide.
Avaaz's Zoyab says, "When you become stateless, you essentially lose your right to have rights."
"Overall we just find Facebook is asleep at the wheel here in protecting the world's most vulnerable people."