Coronavirus misinformation is beating truth to the punch – hitting the public more quickly and directly. This “infodemic” anchors readers to its core messages and makes dislodging its falsehoods all the more difficult.
The consequences of misinformation are also heightened in a pandemic, when accurate information is critical to securing cooperation with public health measures and acting on falsehoods can rapidly endanger countless lives. Governments trying to compete have two (non-exclusive) options: (1) getting accurate information out first, and (2) dislodging misinformation that has already filled the gaps.
Back in 2019, many of us had not even heard the word “coronavirus.” Now we’re being taught to make facemasks out of pillowcases, discovering that the percentage of alcohol in hand sanitizer actually matters, and having high school algebra flashbacks as we learn about flattening the curve.
What does all of this new information mean in terms of the infodemic? Anchoring and first-impression bias are cognitive biases that are most pronounced when people encounter new concepts. They cause us to overvalue the first piece of information we receive and to adjust insufficiently when additional information arrives later.
Because information about coronavirus is new to most of us, these biases indicate that the first things we hear will likely have an outsized impact. This gives viral, quick-spreading content – regardless of its truth – a serious advantage.
When many crucial questions remain unanswered, misinformation such as the now infamous “hold your breath and you can tell if you’ve got coronavirus” myth steps into the informational gap. Confirmation bias subsequently causes us to favor information supporting our initial impression (even if false) and discount anything that challenges it.
Governments should therefore want to ensure their information reaches people first. However, the requirement to provide accurate information has a hobbling effect: such information takes time to gather, verify, and produce. Misinformation, by contrast, can be created instantly and disseminated through platforms like WhatsApp, text chains, and Facebook. These false messages and posts are often self-perpetuating: they encourage readers to share, either explicitly or simply by offering juicy, algorithmically attractive content, which amplifies and accelerates the spread of misinformation.
Governments may not be able to significantly increase the rate at which they produce information. However, they can spread the information they have faster.
For example, while the Wireless Emergency Alert (WEA) system gives U.S. officials the capacity to send geographically targeted texts, it has not yet been used widely to share pandemic-related information. Using WEA could help disseminate accurate information more swiftly and directly.
Government agencies can also recruit both the public and public figures to help spread important messages, something NY Governor Andrew Cuomo has done well. For example, Cuomo recently engaged Jennifer Lopez and A-Rod on Twitter to help spread a “stay at home” message to the public, reaching a far broader audience than he would have otherwise (compare Lopez’s 45.2 million followers with the Governor’s 2 million).
These strategies might also help with the goal of dislodging misinformation that has already entered public consciousness. Having more people spread a government’s message capitalizes on the fact that repeated exposure to information leads to increased acceptance. Further, studies have shown that people’s trust in a given assessment of risk is based more on how much they believe the source has their best interests at heart than on the source’s perceived expertise. Hearing accurate information from an old teacher, favorite cousin, or spiritual leader can help people to accept that information over duplicitous claims.
Nancy Fairbank is a 2L at Harvard Law School.