Tuesday, June 28, 2022

Is Big Tech getting too judgmental?

Decisions, Decisions

A few days ago I was at a meetup in Santa Clara, my first since the start of the pandemic. While concern about COVID still lingers, it was a great way to get back out into society, and one conversation in particular sparked an interesting debate.

At the meetup, a woman and I struck up a conversation about privacy. I believe she works for WhatsApp; she never explicitly confirmed it, but it was fairly easy to guess from our conversation.

The crux of the matter was presented as follows: would you still insist on your privacy if we were working on an algorithm that detects scammers? The technology needed to do so requires data analysis, and the company would need to read your conversations in order to build proper data sets.
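To see why she framed it that way, consider a minimal sketch of such a classifier, written in Python with scikit-learn. Everything here is hypothetical, the toy messages especially; the point is the dependency it exposes: before a model can flag anything, humans have to collect and label real conversations.

# Minimal sketch of a scam-message classifier. The training data is invented;
# in a real system it would have to come from users' actual conversations,
# which is exactly the privacy tradeoff under discussion.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "You have won a prize, send a small fee to claim it",
    "Your account is locked, verify your password at this link",
    "Are we still on for lunch tomorrow?",
    "Happy birthday! Hope you have a great day",
]
labels = ["scam", "scam", "legit", "legit"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["Send a fee to claim your prize"]))  # likely ['scam']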

At first, I thought she presented a valid point. Who would actually want to get scammed? Certainly not me.

However, after giving it more thought, a counterargument came to me: who are they to determine who is a scammer? Who gave them the authority to label someone as such, and what are the parameters for that decision? And if someone is wrongly labeled a scam artist, who shoulders responsibility for the fallout?

Not long ago, Donald Trump’s Twitter account was banned. We won’t get into the nitty-gritty of politics here; he wasn’t the first user banned, nor will he be the last. But the question stands: what authority does a social media site like Twitter have to pass judgment before a court of law can? Our legal system, not a private platform, is responsible for judging shameful acts like the attack on the temple of democracy, is it not? Why, then, should the onus fall on social media companies to do that legwork? One could also argue that targeted advertising is a slippery slope toward disinformation, which invites a broader discussion of what responsibilities ultimately fall on social media sites when it comes to regulating user access.

Responsibilities

Given how much we now share on social media, one person’s bad day can turn into a lifelong ban from a service, while repeat offenders continue to do what they want, when they want. Isn’t that a failure on the part of a company that claims to be doing the right thing?

Remember Logan Paul? For the uninitiated, he is a social media personality with millions of YouTube subscribers. He has repeatedly come under fire for problematic content, yet his popularity and channel remain intact, and he continues to make millions in YouTube revenue with no impact on his overall business. On the other end of the YouTube spectrum, a woman who narrates sleep stories for adults in her own soothing voice was told that her work was not original enough and therefore could not be monetized (see, for example, https://www.youtube.com/watch?v=qdDVHyJDaFs). Her videos were demonetized out of the blue, without any prior warning.

The content these two creators produce could hardly be more different, yet it was the latter who had the rug pulled out from under her. What parameters did YouTube apply to reach that determination? In the case of the woman who creates and narrates sleep stories, the reasoning seemed vague, if not outright illogical.

Exercising Power

Users are banned from social media sites all the time, generally without much fanfare. But Donald Trump’s removal from Twitter raised a common question: how far is too far when companies that hold power exercise it to shape what their users see?

Returning to the question of privacy and scammers: companies would need to decide what constitutes a scam, which signals to look for when hunting scammers, and how to impose limits on users without invading their privacy.
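To illustrate how many judgment calls that involves, here is a deliberately crude, purely hypothetical rule-based filter in Python. Every keyword, weight, and threshold below is an invented parameter that someone at the company would have to choose and defend.

# Illustrative sketch only: every value here is an arbitrary judgment call,
# which is the point; a human at the company has to pick each one.
SCAM_KEYWORDS = {"wire transfer", "gift card", "verify your account", "prize"}
LINK_WEIGHT = 2.0      # how suspicious is a link?
KEYWORD_WEIGHT = 1.5   # how suspicious is each flagged phrase?
FLAG_THRESHOLD = 4.0   # at what score does an account get flagged?

def scam_score(message: str) -> float:
    """Score one message; higher means 'more scam-like'."""
    text = message.lower()
    score = sum(KEYWORD_WEIGHT for kw in SCAM_KEYWORDS if kw in text)
    if "http://" in text or "https://" in text:
        score += LINK_WEIGHT
    return score

def should_flag(messages: list[str]) -> bool:
    """Flag a user if any single message crosses the threshold."""
    return any(scam_score(m) >= FLAG_THRESHOLD for m in messages)

print(should_flag(["Claim your prize via wire transfer: http://example.com"]))  # True

Note that even this toy version has to read every message to compute a score, which is exactly where the tension with privacy lives.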

Each of those steps raises the question of how companies could legally make such assessments, and any implementation would need enough oversight to avoid compromising user privacy.

The banning of certain high-profile figures has been discussed at length. For general users and public perception, companies would need to be far more transparent about how they identify online scammers. Yet the nuanced question remains: who has the authority to classify someone as a scammer? A different, but adjacent, question: how much should social media sites regulate misinformation and disinformation?

Alternatively, social media sites could work in tandem with government entities to protect people online while still retaining best privacy practices. Unfortunately, scam artists are extremely prevalent online; through social media they can harvest credit card data and personal information, leading to financial losses and even identity theft.

Transparency is Key

The question, however, isn’t how much harm a scammer can do; it’s well known that scam artists can access information discreetly and disappear once they have what they need. The question is what happens when a company, operating under the guise of protecting consumers, removes perfectly acceptable accounts, as happened to the woman simply creating sleep stories for people who have trouble falling asleep.

Platforms should focus on transparency in order to build consumer trust, and they should be willing to protect consumers without compromising their privacy. It’s a tricky balance to strike, and one that requires continual discussion if we are to hold companies accountable for the power they wield in the digital space.



