Social media and misinformation: It's a game of whack-a-mole

By Barbara Ortutay, Associated Press

NEW YORK (AP) — It's a high-stakes game of whack-a-mole with no end in sight.

Social media companies are fighting an expensive and increasingly complex battle against Russian trolls who are using catchy memes, bots and fake accounts to influence elections and sow discord in the U.S. and beyond.

This week, two reports released by the Senate Intelligence Committee gave strong evidence that Moscow's sweeping online disinformation campaign was more far-reaching than originally thought, with agents working to divide Americans by race, religion and ideology and erode trust in U.S. institutions.

It is also clear that the culprits are learning from one another and quickly adapting to sophisticated countermeasures taken against them.

Here are some questions and answers about the efforts to combat misinformation.

WHAT ARE THE TROLLS DOING?

When it comes to election meddling, much of the focus for the past two years has been on the biggest internet platforms, especially Facebook, where agents in Russia (as well as Iran and elsewhere) have used phony accounts to spread fake news and divisive messages.

But the latest reports offer more proof that the Russians went beyond the social media giant, taking advantage of smaller services like Pinterest, Reddit, music apps and even the mobile game Pokemon Go. Instagram, Facebook's photo-sharing app, was also found to have played a far bigger role than previously understood.

In many ways, the Russian operation works like a corporate branding campaign, except in this case, the goal is not to sell running shoes but to sway elections. On Facebook, agents might post links to fake news articles, or slogans pitting immigrants against veterans or liberals against conservatives.

One image showed a ragged, bearded man in a U.S. Navy cap. It urged people to like and share "if you think our veterans must get benefits before refugees." Another post had a photograph of the Rev. Martin Luther King Jr. and the words "Enough dreaming, wake your ass up."

On Instagram, the post with the most interactions was a photo showing a row of women's bare legs, ranging from pale white to dark brown, with the caption "All the tones are nude! Get over it!" The image had over a quarter-million likes.

Many of the posts and memes were not incendiary and contained nothing that would get them promptly banned for violating platform standards against, say, hate speech or nudity. Instead, they looked like the ordinary sorts of things regular people share on Facebook, Twitter or Instagram.

WHAT ARE THE COMPANIES DOING?

Caught off guard by Russian meddling in the 2016 U.S. elections, giants like Facebook, Google and Twitter have thrown millions of dollars, tens of thousands of people and what they say are their best technical efforts into fighting fake news, propaganda and hate.

They are using artificial intelligence to root out fake accounts and to identify bots that post divisive content. A human posts at irregular moments and needs sleep; a bot may give itself away by tweeting around the clock at fixed times, such as on the hour.
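
To make that timing heuristic concrete, here is a minimal sketch in Python. It is not any platform's actual detection code, and the threshold and minimum-history values are illustrative assumptions; real systems combine many signals. The idea is simply to flag accounts whose gaps between posts are suspiciously uniform:

```python
from statistics import mean, pstdev

def looks_like_fixed_interval_bot(post_times, cv_threshold=0.1, min_posts=10):
    """Flag an account whose posts arrive at suspiciously regular intervals.

    post_times: UNIX timestamps in seconds, oldest first.
    cv_threshold and min_posts are illustrative values, not real platform settings.
    """
    if len(post_times) < min_posts:
        return False  # too little history to judge
    gaps = [later - earlier for earlier, later in zip(post_times, post_times[1:])]
    # Coefficient of variation of the gaps: near 0 means clockwork posting.
    # A human's gaps (sleep, work, whims) vary far more.
    cv = pstdev(gaps) / mean(gaps)
    return cv < cv_threshold

# An account tweeting exactly on the hour gets flagged; an irregular one does not.
hourly_bot = [hour * 3600 for hour in range(24)]
human = [0, 500, 9200, 9900, 41000, 43000, 50000, 88000, 90100, 99000]
print(looks_like_fixed_interval_bot(hourly_bot))  # True
print(looks_like_fixed_interval_bot(human))       # False
```

A single statistic like this is easy to evade, which is exactly the cat-and-mouse dynamic described next.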

Of course, malicious actors are learning to sidestep these countermeasures. Bots are being redesigned to act more like humans, no longer tweeting at fixed intervals, while operators of fake accounts rapidly change identities and delete old tweets to cover their tracks.

Some companies have made progress. Facebook's efforts, for example, appear to have reduced trafficking in fake news on its platform since the 2016 election.

But some of these efforts go against these companies' business interests, at least in the short term. In July, for example, Facebook announced that heavy spending on security and content control, coupled with other business shifts, would hold down growth and profits. Investors reacted by knocking $119 billion off the company's market value.

Smaller platforms have fewer resources to throw at the problem, and that is one reason the trolls have moved on to them.

WHY AREN'T THE COMPANIES DOING MORE?

Created to sign up as many users as possible and have them posting, liking and commenting as often as possible, social networks are, by design, easy to flood with information. And bad information, if it's catchy, can spread faster than a boring but true news story.

Companies like Facebook and its competitors have also built their business on letting advertisers target users based on their interests, where they live and a multitude of other categories. Trolls sponsored by malicious governments can do the same thing, buying ads that automatically target people according to their political leanings, ethnicity or whether they live in a swing state, for example.

Some companies have taken countermeasures against that. But critics say that unless companies like Facebook change their ad-supported business models, the exploitation is not going to stop.

Filippo Menczer, a professor of informatics and computer science at Indiana University, said the problem is a very difficult one to solve.

Facebook, for example, has focused a lot of its efforts on working with outside fact-checkers to root out fake news and suppress the spread of information that has been deemed false. But those items are only a part of the problem. Fact-checking doesn't necessarily screen out memes and other more subtle means of shaping people's opinions.

"A lot of the stuff is not necessarily false, but misleading or opinion," Menczer said.

WHEN WILL THIS END?

According to one of the Senate-released reports, prepared by researchers at Oxford University, 2016 and 2017 saw "significant efforts" to disrupt elections around the world.

"We cannot wait for national courts to address the technicalities of infractions after running an election or referendum," the Oxford researchers warned. "Protecting our democracies now means setting the rules of fair play before voting day, not after."

There are also newer threats, already seen in countries such as Myanmar and Sri Lanka, where messaging apps like Facebook's WhatsApp have been instrumental in spreading misinformation that has led to violence. Because messages on these apps are private, even the platforms themselves cannot read them as they try to combat those spreading havoc, Menczer said.

Menczer said the cost of getting into the misinformation game is low. The entire campaign by Russia, he said, might have involved a few dozen employees and an advertising budget in the tens of thousands of dollars.

"Clearly, they will continue," he said. "There is no reason why they wouldn't."

Copyright © The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
