Thursday, November 2, 2017

Facebook's plan to throw humans at its security and manipulation problems won't work, and amounts to an indictment of its AI progress

Facebook has a fake news and manipulation problem, and you can add artificial intelligence to the list too, since the company has to double its security-related headcount just to maintain trust and avoid being regulated.

The social network's third quarter earnings report was another stellar one as marketers flock to the platform. Marketers love Facebook. And why not? You can tweak the hell out of sentiment, micro-target groups and pitch propaganda as well as products. Just ask the Russians.

Facebook executives have been hauled before Congress along with bigwigs from Google and Twitter to give details on how the Russian government aimed to sow dissent ahead of the 2016 election. CEO Mark Zuckerberg addressed Facebook's issues on a conference call with analysts.


After noting Facebook's first-ever quarter with $10 billion in revenue, he said:

But none of that matters if our services are used in a way that doesn't bring people closer together or the foundation of our society is undermined by foreign interference. I've expressed how upset I am that the Russians tried to use our tools to sow mistrust. We built these tools to help people connect and to bring us closer together, and they used them to try to undermine our values. What they did is wrong, and we are not going to stand for it.

Now, for those of you who have followed Facebook, you know that when we set our minds to something, we're going to do it. It may be harder than we realized up front and may take longer, and we won't be perfect, but we will get it done. We're bringing the same intensity to these security issues that we've brought to any adversary or challenge that we have faced.

Zuckerberg went on to promise more transparency and a higher standard for ad disclosure.

For Facebook, the crisis isn't the Russians tinkering with election sentiment. The crisis is trust. You are the product. If you don't trust Facebook's information, you may not engage as much, and Facebook needs you to pass along information. The fact that there is shock--shock, I tell you--over how Facebook can be used to manipulate the masses is almost comical. After all, those tools are the same reason marketers are fueling Facebook's financial dominance over the ad industry (along with Google, of course).

But this rant isn't an indictment of social media lemmings, or of Facebook's controls or approach to ads. The Facebook conference call--and Zuckerberg's solution to double headcount on security and throw humans at the fake news and trust issue--is really an indictment of the company's AI prowess. Facebook simply doesn't have the tools or the AI engineering to automate its way out of this mess.


Edison Investment Research analyst Richard Windsor estimated that 50 percent to 60 percent of Facebook employees will be working in non-revenue-generating positions. Rivals won't have to add humans because they have better AI, said Windsor.

"There will be no need for a corresponding increase of headcount at Google, Baidu, Yandex, Microsoft, Amazon or Apple to deal with these problems as these companies are much better positioned to create a solution using AI," said Windsor. "It seems that whenever Facebook attempts to automate anything, it inevitably goes awry resulting in the need for more humans to fix the problem. Furthermore, humans are ill suited to solve these kinds of problems as it takes far too long to find and remove the relevant content. This process has to be automated to be effective and as a result, Facebook's costs are going to rise and the problem is unlikely to be solved."

Zuckerberg said:

This is part of a much bigger focus on protecting the security and integrity of our platform and the safety of our community. It goes beyond elections, and it means strengthening all of our systems to prevent abuse and harmful content.

We're doing a lot here with investments both in people and technology. Some of this is focused on finding bad actors and bad behavior. Some is focused on removing false news, hate speech, bullying and other problematic content that we don't want in our community. We already have about 10,000 people working on safety and security, and we're planning to double that to 20,000 in the next year to better enforce our community standards and review ads.

In many places, we're doubling or more our engineering efforts focused on security. And we're also building new AI to detect bad content and bad actors just like we've done with terrorist propaganda. I am dead serious about this. And the reason I'm talking about this on our earnings call is that I've directed our teams to invest so much in security on top of the other investments we're making that it will significantly impact our profitability going forward. And I wanted our investors to hear that directly from me. I believe this will make our society stronger and, in doing so, will be good for all of us over the long term. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profit.

Research and development spending will rise to combat fraud too, said Zuckerberg.

Oracle CTO Larry Ellison recently described security in Terminator terms: it's a battle of machine learning, automation and AI. It's your computing system vs. their computing system.

In other words, throwing humans--some Facebook employees and some at partners--at the security issue may not do much. The natural question is why Facebook didn't have more resources devoted to security before.

Sure, Facebook's systems aren't going to improve overnight. Facebook also revealed that about 10 percent of its global monthly active users are duplicate accounts, up from a previous estimate of 6 percent. That metric sounds like bad news, but it shows that Facebook's detection tools have improved. Inauthentic accounts--used mostly for spam--are 2 percent to 3 percent of worldwide monthly active users.
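Catching those accounts is, again, an automation problem. As a toy illustration--not Facebook's method--a detector might score a handful of behavioral signals and queue high scorers for review. Every signal and threshold below is invented:

# A toy sketch of fake-account scoring (not Facebook's method).
# All signals and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    friends: int
    posts_per_day: float
    profile_photo: bool
    account_age_days: int

def suspicion_score(acct):
    """Crude weighted score; higher means more likely inauthentic."""
    score = 0.0
    if acct.posts_per_day > 50:        # spam-like posting volume
        score += 0.4
    if not acct.profile_photo:         # missing basic profile data
        score += 0.2
    if acct.account_age_days < 7:      # very new account
        score += 0.2
    if acct.friends < 5:               # sparse social graph
        score += 0.2
    return score

burner = Account(friends=2, posts_per_day=120, profile_photo=False, account_age_days=3)
print(suspicion_score(burner))  # 1.0 -> queue for human review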

Zuckerberg gets that if Facebook doesn't improve security and content, engagement will fall. Today, engagement and ad revenue are fine. Anecdotally, you can spin in a circle and hit someone who will tell you they've curtailed their Facebook usage over the last year. Fortunately, Facebook bought Instagram, which is essentially the new Facebook for many folks.

Zuckerberg said:

Let me be clear on this that people do not want false news or hate speech or bullying or any of the bad content that we're talking about. So to the extent that we can eradicate that from the platform, that will create a better product, which will also create a stronger long-term community and better business as well.

So the reason why we haven't been able to get these things to the level that we want today is not because we somehow want them on the platform; it's that it's a really hard problem. And we're going to invest both in people and technology because we think that both are really important parts of the solution here to go after all different parts of these problems. And that was what I tried to stress earlier on. We're going from 10,000 people working on safety and security to more than doubling that to 20,000. We're building -- we're doubling, in some cases, more our engineering teams focused on security. We're building AI to go after more different areas of harmful content and finding fake accounts and other bad actors in the system. And I expect that all of these things will make our product better over the long term, but we will incur the expenses a lot sooner as we ramp up these efforts.


