Tuesday, October 31, 2017

Google: Russian groups did use our ads and YouTube to influence 2016 elections


Google general counsel Kent Walker: "There is no amount of interference that is acceptable."

Image: Bundesministerium für Wirtschaft und Energie/Google

Along with Facebook and Twitter, Google has now revealed details of Kremlin-linked groups buying ads and using YouTube to spread disinformation in the lead-up to the 2016 US election.

Google has shared its report ahead of this week's series of congressional hearings, where lawyers from Facebook, Google and Twitter will be asked how non-US groups used major web platforms to influence voters.

Google says two accounts associated with the Russian 'troll farm' known as the Internet Research Agency spent $4,700 on Google ads during the election cycle.

Google said the ads were not targeted at specific states and did not target users with specific political leanings. This was the first election where Google offered to target ads based on users' "inferred political preferences", such as "left-leaning" or "right-leaning".

Google also found 1,108 YouTube videos, totaling 43 hours of content, that were probably linked to this campaign and generated 309,000 US views during the run-up to the election. It said only three percent of the videos generated more than 5,000 views.

The YouTube videos were shared through 18 channels with English content that "appeared to be political" but also contained non-political videos. Google says it has suspended the channels.

"While we have found only limited activity on our services, we will continue to work to prevent all of it, because there is no amount of interference that is acceptable," said Kent Walker, a senior vice president and Google's general counsel in a blog.

Facebook previously reported that the Internet Research Agency spent $100,000 on Facebook ads that reached 10 million users.

However, as noted by The Washington Post, Facebook's legal counsel Colin Stretch plans to tell the Senate Judiciary Committee this week that the Internet Research Agency created about 80,000 inflammatory posts between 2015 and 2017 that 29 million people in the US may have seen in their news feeds.

After factoring in likes and shares, Facebook estimates the posts were seen by as many as 126 million US users.

Similarly, Twitter has previously said it found 201 accounts used by Russian agents, but at a hearing this week its acting general counsel, Sean Edgett, will testify that nearly 37,000 accounts, mostly bots, generated 1.4 million election-related tweets which gained 288 million views.

Google also published an explanation of what it plans to do to combat the misuse of its platform to spread disinformation in the future, along with another brief documenting new protections against phishing, such as the new Advanced Protection program for Gmail users who face a high risk of being targeted.

Future initiatives include publishing a transparency report for elections, and the creation of a publicly accessible database of election ads purchased on AdWords and YouTube. The database will include information about who bought each ad.

Also, to comply with US laws that restrict groups outside the US from running election ads, it will introduce new checks to "proactively identify" who is buying a political ad and where they are based before the ads run.

Google said it had processes in place for the 2016 election that only allowed US advertisers it already did business with to target users of a particular political persuasion.



from Latest Topic for ZDNet in... http://ift.tt/2z5uzgO
