As the 2020 presidential campaign ramps up, so does the threat of disinformation on the internet. A recent report from the University of Oxford called the creation of disinformation and manipulated media a “common communications strategy” by cyber troops worldwide.

As I did last year, I’ve turned to a media literacy expert for advice on handling news and information in today’s environment. This year, I interviewed Julie Smith, the author of “Master the Media: How Teaching Media Literacy Can Save Our Plugged-In World.” Julie frequently presents media literacy workshops and teaches media literacy at Webster University.

SABLEMAN: Julie, what is the biggest problem involving misinformation today? Is it overtly false news about political candidates, false news about issues or something else?

SMITH: I think the greatest danger is that we are losing our idea of what it means to be informed. We are more interested in what we believe than in what is true, which means we are living in a post-truth world. Of course, election seasons amplify the volume of information and inflame the passion with which it spreads.

SABLEMAN: Who is inflating these passions? U.S. partisans? Foreign influencers?

SMITH: It’s both, and it’s commercial actors as well. We know from commercial click farms that there is a financial motivation behind some disinformation. Young people in Macedonia made thousands of dollars per month via Google AdSense by creating clickbait headlines about the 2016 presidential candidates that drove up website views.

SABLEMAN: Why did this problem come upon us all of a sudden? Who’s to blame?

SMITH: I think technology is advancing faster than our ability to evaluate it. In the last year, we have seen a big increase in deep fakes and shallow fakes being used to create false impressions. And there has been a lot of discussion about the technology of creating fake images and videos. Yet in my workshops, fewer than 20% of the attendees admit to even knowing about these techniques.

Also, we’ve seen the democratization of content or “news” production, lowered trust in traditional media, and more and more time spent within the echo chambers of our social media platforms. This has created a “perfect storm” in which dis- and misinformation can travel at the speed of light without being verified, especially messages that affirm our own biases.

So rather than place blame, I’m more concerned with our ability to identify, evaluate and discern information than with its source or intent. All misinformation is problematic, regardless of source.

SABLEMAN: Are you concerned about a Russian disinformation campaign in 2020, and if so why?

SMITH: I’m concerned about all disinformation, whether it comes from Russia, Macedonia or an elderly relative who doesn’t know any better. But there is certainly a heightened concern about Russian misinformation during the election campaign.

SABLEMAN: Is an ordinary person likely to encounter Russian disinformation?

SMITH: My instinct is to say “yes,” although I have no particular data to prove it. There’s an interesting website called “Hamilton 68” which analyzes over 600 Twitter accounts that have been traced to Russia. The accounts tweet and retweet each other frequently, so the volume they produce makes it likely that an ordinary citizen would be exposed.

Interestingly, these accounts push a lot of content that isn’t overtly political. One Facebook page that was traced to Ukraine and recently taken down featured images and memes celebrating the “American dream.” The Facebook page “Blacktivist,” which was traced to Russia, actually had more engagements than the official “Black Lives Matter” page.

SABLEMAN: What can internet publishers, including social media companies, do to combat disinformation campaigns?

SMITH: In an ideal world, these platforms would verify identities and not allow anonymous postings. They would care more about authenticity than about clicks, time spent on the platform and advertising revenue. But the economic structure of the current system makes that unlikely. The volume of information involved also makes any supervision or monitoring nearly impossible.

Facebook pages now have a link where a user can click and find out the “source” of the page. The platform installed that feature in an attempt to make the sources of pages more transparent. In practice, however, this feature won’t stop misinformation from being posted, and only relatively sophisticated persons are likely to click through or understand what it means.

Ultimately the onus falls on us, the readers. It is up to us to evaluate our own biases, not share online misinformation, call it out when we see it, and share ways to evaluate information.

SABLEMAN: If the onus is on us, then each of us needs to become more media literate. Explain your discipline, media literacy, and how it can help.

SMITH: Media literacy encourages media consumers to ask questions:

  • Who is the sender of the message?
  • What is their motive or intent?
  • What information is left out?
  • How is the message constructed?
  • What lifestyle or point of view does the message reflect?

These are just a few of the questions that critical consumers should ask about media messages. So if we find a message with a suspicious source, we should ask ourselves what the sender’s motive or intent might be.

“News” no longer comes only from trained journalists and editors. There are no gatekeepers anymore; the information world is like the Wild West. So asking these questions is a 21st-century survival skill for readers.

SABLEMAN: What are the best media literacy tools that ordinary people can use to protect themselves from Russian — and other — misinformation and disinformation?

SMITH: There are several steps, although there are no foolproof answers.

The first step is to acknowledge our own biases. We have nearly unlimited sources of information now, so it’s very easy to choose the messages we like most rather than the ones we need to hear. We need to be aware of which sources we choose, and why.

We also need to be suspicious of any message that generates a strong emotional response. That’s usually a clue that the message should be investigated for authenticity. Remember, the more we click and the longer we spend on a website, the more money the site makes. So anything that gets us riled up, clicking and sharing, is suspect.

On the practical side, there are tools that one can use to verify accounts, images and information. Google Reverse Image Search is a great tool for discovering the original source of online photos. I use Botometer to determine whether my new Twitter follower might be a bot. Don’t forget the classics like Snopes, HoaxSlayer, Emergent or Verification Junkie.
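For readers who want to automate that last check, Botometer offers a public API with an official Python client, botometer-python. The sketch below is only an illustration, not part of Julie’s workflow: the credentials are placeholders, the handle @suspicious_account is hypothetical, and the exact fields in the result depend on the API version.

    # Minimal sketch: scoring a Twitter account with Botometer's official
    # Python client (pip install botometer). All credentials below are
    # placeholders; @suspicious_account is a hypothetical handle.
    import botometer

    rapidapi_key = "YOUR_RAPIDAPI_KEY"
    twitter_app_auth = {
        "consumer_key": "YOUR_CONSUMER_KEY",
        "consumer_secret": "YOUR_CONSUMER_SECRET",
    }

    bom = botometer.Botometer(
        wait_on_ratelimit=True,
        rapidapi_key=rapidapi_key,
        **twitter_app_auth,
    )

    # check_account accepts a screen name or a numeric user ID and returns
    # a dict of bot-likelihood scores (higher means more bot-like).
    result = bom.check_account("@suspicious_account")
    print(result)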

Most of all we need to be careful that we don’t slide into constant cynicism. My class had a discussion last semester about which was worse: believing everything, or believing nothing. My students, sadly, believe nothing.

I suggest the line Ronald Reagan used when dealing with Mikhail Gorbachev: “Trust, but verify.”
