Twitter, prompted by the rapid spread of alternative narratives, started actively warning users about the spread of COVID-19 misinformation. This form of soft moderation comes in two variants: an interstitial cover shown before the Tweet is displayed to the user, or a contextual tag displayed below the Tweet. We conducted a 319-participant study with both verified and misleading Tweets covered or tagged with COVID-19 misinformation warnings to investigate how Twitter users perceive the accuracy of COVID-19 vaccine content on the platform. The results suggest that interstitial covers, but not contextual tags, reduce the perceived accuracy of COVID-19 misinformation.

Soft moderation is known to create so-called "belief echoes," where the warnings echo back, rather than dispel, preexisting beliefs about morally charged topics. We found that such belief echoes do exist among Twitter users with respect to the perceived safety and efficacy of the COVID-19 vaccine, as well as vaccination hesitancy for themselves and their children. These belief echoes manifested as skepticism of adequate COVID-19 immunization, particularly among Republicans, Independents, and female Twitter users. Surprisingly, we found that the belief echoes are strong enough to deter adult Twitter users from receiving the COVID-19 vaccine regardless of their education level.

In 2016, when "fake news" gained enormous popularity, Facebook started adding "disputed" tags to stories that were debunked by fact-checkers (Mosseri, 2016). About a year later, Facebook started adding fact-checks under potentially misleading stories (Smith, 2017). The goal of these initiatives was presumably to minimize the probability that readers would believe the fake information. Twitter did not begin similar initiatives until late March 2020, when the platform began issuing warnings on Tweets deemed to spread misinformation related to the COVID-19 pandemic (Roth and Pickles, 2020). According to Twitter, the platform relies on its team and internal systems to monitor COVID-19 content for false or misleading information that is not corroborated by public health authorities or subject matter experts. The supposed aim of these warnings is to reduce exposure to misleading or harmful information that could "incite calls to action and cause widespread panic, social unrest or disorder" (Roth and Pickles, 2020). However, there is no evidence that these warnings are effective; in fact, an early investigation suggests that exposure to such warnings creates "belief echoes," i.e., it convinces people to believe the discredited misinformation even more, not less, as long as it aligns with their preexisting beliefs on morally charged topics (Clayton et al., 2019; Thorson, 2016).

Misinformation warnings usually come in two main forms: (i) interstitial covers, which obscure the misleading content and require users to click through to see the information, and (ii) contextual tags, which appear under the content and do not interrupt the user or compel action (Kaiser et al., 2021). Interstitial warnings, but not contextual tags, have been found effective in countering misinformation when applied to statistically incorrect information and false interpretations of local news events (Kaiser et al., 2021). However, neither variant has been tested in the context of social media, nor with respect to a massive and developing misinformation theme such as the COVID-19 pandemic. Therefore, we conducted a study to test the effectiveness of both interstitial covers and contextual tags with users of the Twitter social media platform, using COVID-19 vaccination content. In total, 319 users responded to our survey on Amazon Mechanical Turk and were randomly assigned into one of six groups for exposure to: (1) a misleading Tweet with a contextual tag; (2) a misleading Tweet without a contextual tag; (3) a misleading Tweet with an interstitial cover; (4) a verified Tweet; (5) a verified Tweet with a contextual tag; and (6) a verified Tweet with an interstitial cover. The participants were asked about their perceived accuracy of the Tweets, their personal beliefs and subjective attitudes about the COVID-19 immunization effort, and basic demographic information (the survey was anonymous, and participants were compensated at the standard rate for participation).
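A balanced random assignment into the six experimental conditions described above can be sketched as follows. This is a minimal illustration only: the condition labels, participant IDs, and round-robin balancing strategy are assumptions for the sketch, not the authors' actual study materials or procedure.

```python
import random

# Illustrative labels for the six conditions; names are hypothetical.
CONDITIONS = [
    "misleading_with_tag",
    "misleading_no_tag",
    "misleading_with_cover",
    "verified_plain",
    "verified_with_tag",
    "verified_with_cover",
]

def assign_groups(participant_ids, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    keeping group sizes as balanced as possible (they differ by at most 1)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

# 319 respondents, as in the study; the seed is arbitrary.
assignment = assign_groups(range(319), CONDITIONS, seed=42)
```

With 319 participants and six conditions, such a scheme yields five groups of 53 and one group of 54, while the shuffle ensures which participants land in which group is random.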