What is misinformation (and why take a hot second)?


I recently chatted with my friend Lindsay about a new study on how to recognize and stop the spread of misinformation via social media. To listen in, check out the podcast episode below. For a more detailed look, keep reading.

You’ve heard about epidemics and pandemics, but there’s a new word that’s been making the rounds over the last few months: infodemic. It refers to the current epidemic of information. What is misinformation? How can you recognize it? And what one simple thing can we all do to stop it in its tracks? (Based on science, of course!)

 

Misinformation is . . .

 

Right now there’s an epidemic of information (infodemic)

 

The word infodemic was popularized by WHO Director-General Dr. Tedros back in February 2020. An infodemic is like an epidemic (or even a bigger, global pandemic) of news.

An infodemic is an overabundance of information—some accurate and some not—that occurs during an epidemic. (WHO)

In 2018, researchers from MIT (Massachusetts Institute of Technology) did a study on how fake news is spread. You won’t be surprised to learn it spreads faster, farther, deeper, and more broadly than the truth. That’s because of the qualities of fake news. It’s novel and elicits fear, disgust, and surprise. It takes advantage of our natural human tendency to warn others of an upcoming threat.

So, as with all science, researchers conducted a study to figure out how to stop the spread of misinformation online. The study we’re going to talk about today focuses specifically on stopping the spread of COVID-19 misinformation via social media shares.


Study: Fighting COVID-19 misinformation on social media

 

Let’s talk about a new study that was published on June 30, 2020 in which the researchers say,

“We present evidence that people share false claims about COVID-19, partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share.” 

The study is called “Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention.” It was a collaborative effort between Canada’s University of Regina and the US-based MIT.

This is the study we’ll talk about in this blog post.

Let’s start with an example of what misinformation is NOT.

 

Misinformation is not a hypothesis that gets tested

 

When it comes to COVID-19, there is so much information out there that everybody’s confused. Researchers are online critiquing each other’s studies for the world to see. Media are hyping non-peer-reviewed studies (preprints) as though they hold as much weight as peer-reviewed ones. I’m trying to follow credible people. We have information about masks that’s been changing over the last few months.

This is legit confusion.

Every time I go online, it’s like, “We should be doing this,” “But you said we shouldn’t do this,” “What about this,” “We’re finding this.” Oh, my gosh, I get it!

An example of what misinformation is NOT is when a hypothesis gets tested. A hypothesis is essentially a “best guess.” This means that when we have little research on a new subject (e.g., Do masks help stop the spread of the novel SARS-CoV-2 virus?), scientists make a hypothesis based on the best info they have at the time.

Unfortunately, fast-spreading deadly viruses that are in virtually every country demand quick decision making in the absence of mounds of reliable evidence. This means they take related information and come up with a “best guess” hypothesis so that policymakers, agencies, and governments can make informed decisions.

Then, that hypothesis (e.g., masks won’t help and they may make things worse) will be put to the test to see how correct and effective it is in real life (IRL).

NOTE: We now have many studies that show masks, in fact, do help to stop the spread of SARS-CoV-2. Read on to find out how this happened and what makes this changing recommendation different from misinformation. . . .

Example: Why changing mask recommendations were not misinformation

Let’s use mask recommendations as an example. Back in February/March, masks were discouraged based on limited data and assumptions, because that was simply the best info we had at the time.

At that time, we had some info (but not a ton) on:

  • How did viral transmission differ between countries that masked and those that didn’t?
  • How does the virus spread in certain places, like hospitals (that use masks) or airplanes (that don’t)?
  • Did masks work to stop the spread of the related SARS virus several years ago (SARS-CoV-1), and how were they used and by whom?
  • What type of fabric or how many layers work best?
  • How do people actually use masks IRL (human behaviour)? Do they forget them sometimes? Touch them a lot? Wear them properly (covering their nose and mouth)? Wash them regularly?
  • Do masks exacerbate other issues like asthma, skin conditions, anxiety, etc.?
  • Is regularly wearing masks IRL associated with fewer cases and clusters of COVID-19?
  • Does wearing a mask make people stop distancing and handwashing?

There were so many questions to answer before changing the initial recommendation to not wear masks. (You wouldn’t want recommendations changed without data, right?)

Here’s a two-minute video on how we can look at answers to these (and other) related questions to understand whether masks help and, if so, by how much and whether it’s worth it. (One study = one puzzle piece.)

Now there is a growing body of evidence that simple 2-3 layer cloth masks help reduce the spread. Are they 100 percent effective? No (even N95s are designed to filter at least 95 percent of airborne particles, not 100). Are they a low-cost, easy-to-do action that, along with distancing, handwashing, and ventilation, will reduce the spread? Yes.

The changing mask recommendations were not misinformation. This is because they were based on the best evidence at the time. This is how science unravels mysteries and answers questions. By testing hypotheses and updating our knowledge as we learn more and see the bigger picture.

In fact, this is science working. It’s working to understand how this particular virus spreads, and it’s all happening over time and in public view. Science is messy, and we need to make sense of it. Some studies are stronger than others (and I would represent those studies with bigger puzzle pieces). Some hypotheses are found to be incomplete or incorrect. And we need to make studies public so they can be critiqued by fellow scientists, which keeps improving methods, building on what we know, and moving our knowledge forward.

And when you know better, you can do better.

Of course, as with most things in biology, there isn’t a black and white, a yes and no, a hundred and a zero. Understanding things is about taking this growing number of grey, uncertain puzzle pieces and putting them together.

Example: Grade 10 science experiments don’t represent the complexities of clinical research

When we learn science and do simple grade 10 experiments, we can see the pH colour change of a solution right away. This instant chemical change makes it look like everything in science is clear. Like there is a “100 percent proved right” or “100 percent proved wrong.”

It makes science look simple, when really, we’re just doing the very simple experiments in high school. Things start getting really interesting when you finish your science degree and start doing your own research in grad school. Plus, clinical research that looks at people gets really complicated, takes time, and the answers are rarely ever black and white. They’re almost always some sort of percent improvement or risk. Sometimes 10 percent or 32 percent or 64 percent; very rarely is it 100 percent.

That’s why when we recommend things that are not 100 percent effective, it’s because they’ve been shown to help. That’s why masks aren’t a substitute for physical distancing, enjoying the great outdoors, or washing your hands and high-touch items. They’re all layers of protection that work together to prevent the spread.

 

What is misinformation?

 

Misinformation is simply information that’s demonstrably false (or fake). It’s often driven by an agenda, whether that agenda is to make you laugh, surprise you, convince you to buy or do something, or erode your trust in experts, authorities, organizations, etc.

There are several types of misinformation. On one hand, you have satire, parody, mistakes, and misunderstandings. There’s no intention to deceive. There’s no malintent. It’s a misunderstanding or it’s meant to be a joke. This describes one end of the scale of misinformation.

On the other hand, there is information that is literally manipulated and fabricated with the full intention of dividing people and pushing emotional hot buttons (called disinformation).

Between these two extremes is misinformation in the form of false content, misleading content, and imposter content.

So, misinformation is simply false information. It could be a mistake or misunderstood. It could be misrepresented (on purpose or not). Or it could be pure “science fiction” presented as though it’s true. 

According to the Lancet, examples of COVID-19 misinformation include:

  • Trivialising the risks of COVID-19
  • Equating COVID-19 with seasonal influenza
  • Questioning the effectiveness of mitigation and control measures (eg, the use of masks)
  • Promoting unproven treatments (eg, hydroxychloroquine)
  • Contradicting public health experts (even those from their own administrations)
  • Politicising the vaccine development that is essential to the ultimate control of the pandemic

This kind of misinformation can literally lead to lives lost: if enough people don’t take a communicable disease seriously, it spreads and can seriously harm thousands of people.


Misinformation tries to take advantage of you

 

Money. Some people come up with amazingly creative ways to get you to part with yours. They are playing on legitimate concerns that a dangerous virus is spreading and we all want to protect our families. #Unethical

There are some very *interesting* products for sale that claim to ward off COVID-19. And, if these aren’t based in good research, they’re misinformation. For example, I’ve seen a little battery-operated device that you can hang around your neck on a string that claims to protect you and the air you breathe from the virus.

Newsflash: There are several ways to test whether something like this works IRL. At the very least (and this would only be a weak study or small puzzle piece), they would need to put a device in a room with a known, measured amount of virus and take several measurements over time to see if the viral load is decreasing. This would need to be done several times in several different environments and be compared with the viral load in those same rooms/environments with no device (this would be the “control”). This type of study is called a mechanistic (“lab”) test because it’s not being tested in real people in real life. Therefore, you absolutely cannot base clinical efficacy (e.g., it protects you from the virus) on this. (Even if the viral load decreased more with the device than without it, that doesn’t mean it can prevent the virus from spreading from one person to another in that room/environment.)
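To make the logic of such a mechanistic test concrete, here’s a minimal sketch in Python. Every number below is invented for illustration; the sketch only shows the comparison you’d need (device rooms versus control rooms), not real data or real units.

```python
# Hypothetical sketch of analyzing a mechanistic ("lab") test:
# compare viral load over time in rooms with the device vs. control rooms.
# All numbers are made up for illustration only.
from statistics import mean

# Measured viral load (arbitrary units) at 0, 30, 60, 90 minutes,
# repeated across several rooms/environments.
device_rooms = [
    [100, 80, 65, 50],
    [100, 85, 70, 55],
]
control_rooms = [
    [100, 82, 68, 54],
    [100, 84, 69, 56],
]

def pct_reduction(series):
    """Percent drop from the first to the last measurement."""
    return 100 * (series[0] - series[-1]) / series[0]

device_avg = mean(pct_reduction(r) for r in device_rooms)
control_avg = mean(pct_reduction(r) for r in control_rooms)

print(f"Device rooms:  {device_avg:.1f}% average reduction")
print(f"Control rooms: {control_avg:.1f}% average reduction")
# Even if device rooms show a bigger drop, this says nothing about
# clinical efficacy (actual person-to-person transmission).
```

Note that even a tidy result here is just one small puzzle piece; it can never substitute for testing the device in real people in real life.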

This is a tiny peek into why products that claim to prevent, treat, or cure diseases need to be backed by several high-quality studies and then be reviewed/approved by regulatory bodies like Health Canada or the FDA.

Is this a perfect system? Hellz, no! But it most certainly is the best one we have so far and we should be constantly looking at how to improve product review/approval and prevent false health claims.

When it comes to spotting misinformation the rule still applies: If it sounds too good to be true, it probably is!

But you don’t need to be a scientist or fact-check everything that you see online. This study gives us some insight into how to figure it out for ourselves. And it all comes down to taking that “hot second”!


Now that we know what misinformation is, how do we recognize it?

 

What if we simply consider whether something is true or not before sharing it online?

For this study, the researchers gathered 15 Facebook posts about COVID-19 that were true (information) and another 15 that were false (misinformation). Then, they ran two studies that each recruited over 850 people.

Study 1 (accuracy versus sharing intention) – 853 people randomly split into two groups

FUN FACT: What makes this a randomized controlled trial (RCT) is that people were randomly assigned into one of two groups: the “control” group (the “accuracy” group) or the “experimental” group (the “sharing intention” group).
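To illustrate what random assignment looks like in practice, here’s a minimal Python sketch. The participant IDs, the seed, and the simple shuffle-and-split procedure are my own assumptions for illustration; the study’s actual assignment mechanics may differ.

```python
import random

# 853 participants, as in Study 1 (IDs are invented for illustration)
participants = [f"P{i:03d}" for i in range(1, 854)]

rng = random.Random(42)  # fixed seed only so this sketch is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: neither the researchers nor the
# participants choose who lands in which group.
half = len(shuffled) // 2
accuracy_group = shuffled[:half]   # "control": asked about accuracy
sharing_group = shuffled[half:]    # "experimental": asked about sharing intention

print(len(accuracy_group), len(sharing_group))  # 426 427
```

The point of the shuffle is that any difference between the groups afterward can be attributed to the question they were asked, not to who ended up in which group.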

They showed people in the “accuracy” (control) group those 15 fake and 15 true social media posts relating to COVID-19 in random order and asked participants the simple question,

“To the best of your knowledge, is the claim in the above headline correct? Yes or no.”

And with that simple question and a “hot second,” many people figured out which were fake and which were true.

People are smart, and most people don’t want to share misinformation. They just need a small prompt to nudge them to think about accuracy.

Then in the other group, the “sharing intention” group, they wanted to know whether participants intended to share that post or not. So they asked them,

“Would you consider sharing this story online, for example, through Facebook or Twitter? Yes or no.”

The researchers found that people asked the accuracy question were correct more often than people who weren’t asked about accuracy (just whether they’d share or not). In other words, people are not taking the step to think about accuracy before deciding whether to share a post. Sharing intention wasn’t driven by whether the headline was accurate or not.

Study 1 established that people do not seem to readily consider accuracy when deciding what to share on social media. In Study 2, we tested an intervention in which participants were subtly induced to consider accuracy when making sharing decisions.

Study 2 (accuracy before sharing intention) – 856 *new* people randomly split into two groups

This second study asked its control group the same sharing-intention question as study 1:

“Would you consider sharing this story online, for example, through Facebook or Twitter? Yes or no.”

They took the other group one step further by first asking participants to rate the accuracy of one politically neutral, non-COVID-19 headline. (So, they were being nudged to consider accuracy.) Then, they saw the same social media posts as in study 1 and were asked:

“If you were to see the above on social media, how likely would you be to share it?” which they answered on a 6-point scale from 1 (extremely unlikely) to 6 (extremely likely).

And here’s what’s even more interesting. Participants had different sharing intentions when they were 1) first nudged to think about accuracy and 2) asked to rate the likelihood of sharing instead of giving a yes/no answer.

Participants were more likely to share true headlines relative to false headlines after they rated the accuracy of a single non-COVID-related headline.

And best of all,

Sharing discernment (the difference in sharing likelihood of true relative to false headlines) was 2.8 times higher in the treatment condition compared with the control condition.

This minimal, content-neutral intervention nearly tripled participants’ level of discernment between sharing true and sharing false headlines.
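To see how that 2.8x figure works mechanically, here’s a minimal Python sketch. The ratings below are invented for illustration; only the definition of discernment (mean sharing likelihood of true headlines minus false ones) follows the paper.

```python
# Sketch of computing "sharing discernment" from 1-6 sharing-likelihood
# ratings, as in Study 2. The average ratings are hypothetical numbers
# chosen only to illustrate the calculation.
control = {"true": 3.0, "false": 2.8}     # no accuracy nudge
treatment = {"true": 3.1, "false": 2.54}  # after rating one headline's accuracy

# Discernment = mean sharing likelihood of true minus false headlines
control_disc = control["true"] - control["false"]
treatment_disc = treatment["true"] - treatment["false"]

print(f"Control discernment:   {control_disc:.2f}")
print(f"Treatment discernment: {treatment_disc:.2f}")
print(f"Ratio: {treatment_disc / control_disc:.1f}x")  # the paper reports 2.8x
```

Notice that the nudge doesn’t have to make anyone a perfect fact-checker; it just has to widen the gap between how readily people share true versus false headlines.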

 


How to stop the spread of misinformation

 

What does this study contribute to stopping the spread of misinformation?

Our interpretation of the treatment effect is that the accuracy nudge makes participants more likely to consider accuracy when deciding whether to share.

In other words, simply nudging people to think about accuracy makes them more likely to share true information rather than misinformation.

This suggests that the accuracy nudge, as we hypothesized, increased people’s attention to whether the headlines seem true or not when they decided what to share.

Why does this happen? Why are people more likely to share true information if they’re nudged to think about accuracy first? The researchers believe that,

inattention plays an important role in the sharing of misinformation online.

So this is great news! People are smart. They don’t want to share false information. They just need a simple and subtle reminder to consider accuracy before hitting that “share” button.

It’s a simple way to curb the spread of misinformation.

Is it the one thing that will solve the problem? Nope! But, it looks like it can be part of the solution. (Just like how masks aren’t 100 percent effective, but there’s evidence they are part of the solution.)

FUN FACT ABOUT RECRUITING PARTICIPANTS FOR A STUDY: One thing to note is that the researchers didn’t recruit exactly 853 + 856 people. They recruited over eleven hundred people for each study, and of all those recruited, 853 and 856 ended up participating. This is another complexity of doing scientific research: you need to recruit more people than you need, because some won’t be the right people (they meet criteria that exclude them), some will drop out, etc.

One of the big implications of a study like this is that social media platforms could try nudging people to think about accuracy before they hit the share button. Maybe a little “Do you think this is accurate?” prompt that makes them think first. This could possibly be one piece of the puzzle to help reduce misinformation. Of course, this hypothesis should be tested, but platforms could apply it and see how it works.

Again, it’s not black or white. This one thing isn’t going to end all of the misinformation, but it is one piece of the puzzle to get people to simply consider accuracy before sharing.

 

How strong was this study on misinformation?

 

This study was an actual experiment—they asked people to do something and measured it. They didn’t just send out surveys, which would be observational. They randomly split hundreds of people into groups and had each group do something (one group was the “control” and the other was the “treatment”). That’s an experiment.

So when it comes to looking at the strength of this research, we can rate the methodology on a scale from one to seven: how strong is this type of study? And this one, because it was a randomized controlled trial, actually rates six out of seven.

Randomized = They recruited people and randomly put them into groups. Neither the researchers nor the participants chose who was in which group.

Controlled = There was a control group for both study 1 and study 2. In study 1, the control group was asked, “Do you think this is accurate?” and the experimental group was asked whether they’d share the post (yes/no). For study 2, the control group was asked, “Would you consider sharing this story online, for example, through Facebook or Twitter?” (yes/no), and the experimental group was first asked to rate the accuracy of a politically neutral, non-COVID-19 post and then to rate the likelihood (from 1-6) that they would share each post.

Funding bias = In terms of funding, there were no conflicts of interest declared by any of the researchers. The study was funded by several US foundations, as well as the Canadian Institutes of Health Research and the Social Sciences and Humanities Research Council of Canada.

 


What can you do?

 

I think the takeaway is really just recognizing that social media doesn’t have a “Do you think this is accurate?” button before you can impulsively click the share button. If we just spend a hot second to ask ourselves, “Is this accurate?” before sharing, we can actually increase the accuracy of the information on the Internet.

How cool is that?

Just take that hot second and we’ll all be better off. 

This research is already being used by MediaSmarts for their Check First, Share After campaign.

Don’t get me wrong, fact-checking is great! It’s even better to fact-check with credible information before sharing. But if you don’t have time to do that and you want to share something, first ask yourself whether it seems accurate (is it clickbaity or sensational?).

Take that hot second before deciding to share a post. 

 

Let’s play some games!

 

Why not make spotting this stuff online fun? Here are a couple of games I came across to practise spotting trolls and false headlines.

Spot the troll

Trolls and bots are getting pretty savvy! They’re learning from all of the engagement and psychological profiles of their audience to become less obviously trollish and more humanistically believable.

Get Bad News

In this online game, you step into the shoes of a deliberate spreader of misinformation. Your goal is to get people to react to and engage with your content and to grow your social media following. There’s even a junior version for 8-11 year olds.

Conclusion

 

Misinformation is false information, whether it’s a joke or deliberately fabricated. In the past few years, it’s impacted elections and people’s health. We can all do our part to recognize and stop the spread so that we can turn the tide of the pandemic as quickly as possible.

 

Signing off and toasting: To recognizing and stopping the spread of misinformation one social media share at a time!

 

Over to you

 

Have you seen some creative pieces of science fiction on social media posing as the truth? How did you recognize it? What do you think about these games to identify trolls and misinformation? Who else do you think needs to know this stuff?

I’d love to know in the comments below!

 

References

 

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention. Psychological Science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7366427/


 


I'm Leesa Klich, MSc., R.H.N.
Health writer – Blogging expert – Research nerd.

I help health and wellness professionals build their authority with scientific health content. They want to stand out in the crowded, often unqualified, market of entrepreneurs. I help them establish trust with their audiences, add credibility to their services, and save them a ton of time so they don’t have to do the research or writing themselves. To work with me, click here.
