Lim, Zhang, Kim, and Lee: Support for censorship and corrective action against social media misinformation about COVID-19 vaccines: comparisons across age, gender, and racial/ethnic groups

Abstract

This study investigated how individuals' presumptions about the influence of COVID-19 vaccine misinformation on others affect their intentions to support censorship and to take corrective action against social media misinformation. A U.S. national survey was conducted in March 2021 with a random sample of 1,030 respondents drawn from the U.S. national panel of Qualtrics. The results supported the serial mediation hypothesis in which perceived exposure of others to misinformation led to corrective actions through presumed influence on others and support for censorship. The results also showed that men perceived a stronger influence resulting from others' exposure to misinformation and reported greater intentions to take corrective actions than women. Furthermore, White respondents exhibited a stronger inclination to support censorship of misinformation due to the presumed influence of such information on others.

Introduction

During the COVID-19 pandemic, the rampant spread of misinformation on social media became a major concern for health authorities and healthcare providers. Government health agencies and social media platforms have stepped up their efforts to fight this problem. In the United States, lawmakers have proposed legislation to hold social media companies responsible for spreading false vaccine information (McCabe, 2021). At the same time, major social media platforms have taken voluntary measures to counter false claims about COVID-19 vaccines, such as removing misinformation and banning the accounts of known anti-vaccine activists.
However, concerns have arisen about social media platforms’ ability to effectively moderate the constant flow of misinformation within users’ decentralized networks (Koo et al., 2021). Researchers increasingly emphasize the importance of involving regular social media users in combating misinformation (Barnidge & Rojas, 2014; Koo et al., 2021). Two approaches to countering misinformation by social media users have garnered significant attention.
The first approach involves supporting censorship by governments and social media platforms. Because it touches on constitutionally protected freedom of speech, opinions on this regulatory approach vary among individuals of diverse demographic and socioeconomic backgrounds (Lazer et al., 2018). The second approach involves taking corrective actions against misinformation on social media, such as reporting it to the platform and leaving comments to refute false information (Luo & Cheng, 2021; Wang & Kim, 2020; Sun et al., 2022). Corrective actions by social media users are effective in combating misinformation because they are visible to others within users' interpersonal connections on social networking sites. This visibility increases the likelihood of raising awareness through engagement behaviors such as liking and sharing, prompting others to take similar corrective actions.
Grounded in the influence of presumed influence (IPI) theory (Gunther & Storey, 2003), this study investigates how individuals' perceptions of others' exposure to social media misinformation shape their behavioral intentions to support censorship and to engage in corrective actions against health misinformation. Despite extensive research on IPI, only a limited number of studies have explored its impact on intentions to counter the spread of health misinformation. Moreover, much of the existing work has examined IPI only through its theoretical components, with limited attention to how its effects vary across individual characteristics. This study therefore emphasizes the potential moderating role of key demographic variables, such as gender, age, and race/ethnicity, in predicting perceived influence on others and engagement in restrictive and corrective behaviors. We believe these findings will offer valuable insights for policy development and implementation by both governmental bodies and social media platforms.

Theoretical Background

The Influence of Presumed Influence

As Tewksbury et al. (2004, p. 140) state, "how one responds to a message depends largely on what the message is thought to do to others." The Influence of Presumed Influence (IPI) (Gunther & Storey, 2003) is a mass communication theory that explains the indirect effects of persuasive media messages on people's attitudes and behaviors. Expanding on the third-person effect (TPE), IPI provides a theoretical basis for how individuals' presumptions about others' exposure to persuasive media messages affect their own attitudes and behaviors through the influence they presume those messages have on others. The IPI theory assumes that media content can have a dual impact on individuals' perceptions and behaviors: persuasive messages directly shape one's presumption about others' exposure to the message, based on one's own exposure, and indirectly influence one's own behavior as people align their actions with the influence they presume the message has on others (Cho et al., 2021).

Perceived Exposure and Presumed Influence of Misinformation on Others

IPI posits that perceptions of others' exposure play a crucial role in shaping individuals' views regarding media's impact on others. This relationship is rooted in the persuasive press inference hypothesis and the persuasive reach presumption (Gunther, 1998). These hypotheses state that individuals form opinions about public views by interpreting media content and believe that others consume the same media (Cho et al., 2021). Simply put, when individuals engage with media content and form impressions of it, they frequently assume that this content mirrors what the broader public is exposed to and that it has significant reach. Consequently, they develop expectations concerning others' exposure to the content and the extent to which others are influenced by it (Gunther & Storey, 2003). The presumed exposure of others functions as an intermediary between an individual's own exposure and the influence they presume on others (Cho et al., 2021; Shen & Huggins, 2013).
Several studies (e.g., Eveland et al., 1999; Hong, 2023; Paek et al., 2011; Shen & Huggins, 2013) have examined the correlation between perceived exposure of others and presumed influence on others in both mass media and social media contexts. Eveland et al. (1999) demonstrated that the perceived likelihood of exposure to socially undesirable content, such as violent and misogynic rap music, strongly predicted presumed influence on others. They suggested that this causal relationship might be linked to the naive, powerful-media-effects model, the belief that other people will be strongly influenced by negative media content. In testing potential question-order effects in IPI studies, Shen and Huggins (2013) showed that when self-related questions are asked before other-related questions, a positive correlation emerges between perceived exposure of others and the presumed influence on them. They confirmed that IPI works in a causal chain: self-exposure → other-exposure → presumed influence on others → behavior.
The relationship between self-exposure and other-exposure has received mixed findings in the realm of social media. A study by Cho et al. (2021) on e-cigarette commercials on YouTube indicated a strong positive association between presumed exposure of others and perceived influence on others. However, Lim and Golan (2011) did not find support for their hypothesis that participants would perceive a greater influence of a political parody video on others when the video was assumed to have a higher likelihood of exposure. This divergence might be attributed to their treatment of the perceived likelihood of exposure as a dichotomous variable and their comparison of two exposure conditions, high versus low. Based on IPI's theoretical premise and related research, we propose the following hypothesis about the relationship between perceived others' exposure and presumed influence on others.
  • H1. Perceived exposure by others will be positively correlated with presumed influence on others.

Behavioral Outcomes: Support for Censorship and Corrective Actions

Another notable difference between IPI and TPE is IPI's potential to expand beyond the theory's traditional behavioral outcomes. While TPE has typically focused on two types of behavioral responses, restrictive and corrective actions (Sun et al., 2008), IPI can broaden the scope of behavioral outcomes to include other persuasive effects. For example, Gunther and Storey (2003) found that a radio-based health campaign in Nepal indirectly influenced listeners' attitudes toward and interactions with health workers through the influence listeners presumed the campaign had on others. Similarly, Hong (2023) examined the indirect impact of exposure to PrEP (pre-exposure prophylaxis) messages on intentions to seek HIV risk information and promote PrEP education, mediated by presumed influence on others and presumed exposure of others. Within the context of negative media messages, however, previous IPI research has typically focused on restrictive and corrective actions.
Previous IPI research consistently shows a positive correlation between perceived media influence on others and support for censorship (Baek et al., 2019; Wang & Kim, 2020; Cohen & Weimann, 2008; Dohle et al., 2017; Tal-Or et al., 2010; Gunther, 1995; Hoffner et al., 1999; Riedl et al., 2022). For instance, Riedl et al. (2022) surveyed U.S. internet users and found that the perceived effect of social media content on others strongly predicted support for content moderation. Chung and Moon (2016) also found that presumed influence on others is a more robust predictor of attitudes toward media censorship than the self-other perceptual gap. Chung’s (2023) study, based on four national surveys in the U.S., UK, South Korea, and Mexico, found a positive association between the presumed influence of misinformation on others and support for content moderation, with the exception of the Mexican sample. Likewise, Luo and Cheng (2021), in their research involving American and Chinese populations, found that presumed influence of COVID-19 misinformation on others significantly increased public endorsement of social media censorship.
Based on this theoretical background, we propose the following hypothesis:
  • H2. Presumed influence on others positively correlates with support for censorship.

Recognizing the limitations of restrictive action in implementation, TPE and IPI researchers have paid special attention to the potential of corrective action. In particular, Rojas (2010) proposed the corrective action hypothesis, drawing on an anecdotal observation in Davison's seminal work: "some counteraction would have to be taken" (Davison, 1983, p. 2). The corrective action hypothesis has gained particular importance in the age of social media, where the audience of a negative persuasive message is no longer a passive recipient but actively reacts to the message to counterbalance the potential negative impact it may have on other people and society. The hypothesis suggests an alternative avenue wherein individuals engage in reactive actions to voice their viewpoints and counterbalance the perceived media effects (Barnidge & Rojas, 2014; Lim & Golan, 2011; Rojas, 2010; Sun et al., 2022; Wang & Kim, 2020).
The corrective action hypothesis is underpinned by three core theoretical assumptions. First, it posits that mass media are perceived as wielding a disproportionately strong influence over other people as well as public opinion. Second, it suggests that certain individuals are predisposed to take corrective measures when they encounter media messages they perceive as biased or hostile toward their in-group members. Third, it asserts that those who perceive the media as biased or hostile are driven by a strong intention to counterbalance the media's influence through active engagement in both online and offline expressive behaviors (Barnidge & Rojas, 2014; Rojas, 2010; Wintterlin et al., 2021). Previous studies have investigated a range of corrective measures against perceived media biases, including engaging in political talk (Barnidge & Rojas, 2014), participating in social media activism (Lim & Golan, 2011), and requesting content moderation (Wang & Kim, 2020; Wintterlin et al., 2021).
The corrective action hypothesis has broadened the range of potential behavioral outcomes (Barnidge & Rojas, 2014; Rojas, 2010). A few studies have tested the hypothesis by applying it to biased, misleading, or hostile social media effects (Golan & Lim, 2016; Wang & Kim, 2020; Wintterlin et al., 2021). However, empirical support for the corrective action hypothesis has not been consistently established across these studies. Barnidge and Rojas (2014) found that the presumed influence of biased media led people to engage more frequently in political talk in order to address those perceived biases. Golan and Lim (2016) examined people's intentions to counterbalance the potential social influence of ISIS recruitment messages by engaging in social media activism. Wang and Kim (2020) examined people's intentions to request a platform's moderation of uncivil comments. Additionally, Wintterlin and colleagues (2021) examined social media users' intentions to correct misinformation or fight harmful content using Facebook's features, such as reporting to the platform or responding directly with comments or dislikes; however, they did not find supporting evidence for the corrective action hypothesis. Naab et al. (2021) investigated intentions to take corrective actions against rude comments on a news story about refugees posted on a mock Facebook page of Germany's Spiegel and failed to link the third-person perception (TPP) of uncivil comments to corrective actions. Caution is warranted when interpreting these results, as the study was based on a single experiment without replication and has limited external validity because it examined reactions to a single rude comment. Sun et al. (2022) found that individuals who perceive a high threat of misinformation influence on others experience emotions such as anticipated guilt, which motivates them to correct the misinformation. We posit the following hypothesis based on previous studies that reported a direct effect of presumed influence on others on corrective actions.
  • H3. Presumed influence on others positively correlates with the intention to engage in corrective actions.

As mentioned earlier, a couple of studies did not find statistically meaningful results for the corrective action hypothesis. Lim (2017), however, argued that the correlation tables of those studies showed positive correlations among TPP, support for censorship, and corrective actions. He further argued that the disappearance of these significant relationships in multivariate analyses implies full mediation through support for censorship. Consistent with this argument, he found support for a mediation hypothesis in which the TPP of misleading online advertising of cosmetic surgery led to corrective actions through support for regulation. Cheng and Luo (2021) reported a significant indirect effect of the TPP of COVID-19 misinformation on corrective actions through support for government regulation. However, a closer look at their measure of corrective actions indicates that it is more related to attitudes toward media literacy than to the corrective actions implied by Davison (1983) and Rojas (2010). Based on this theoretical assumption and supporting evidence from previous research (Lim, 2017), we predict as follows:
  • H4. The effect of presumed influence on others on corrective action will be mediated by support for censorship.

Role of Demographics in Presumed Influence on Others

Examining individuals’ demographic characteristics, such as age, gender, and ethnicity, is essential for understanding potential differences in the impacts of IPI across various groups. Different demographic segments can exhibit differing levels of susceptibility to specific forms of media influence or misinformation. By recognizing these distinctions, policymakers and public health professionals can more precisely tailor their interventions and allocate resources to effectively address the distinct needs and vulnerabilities of each group. Moreover, exploring the role of demographics in IPI could assist public health professionals in strategically allocating resources to combat misinformation. Additionally, social media platforms could enhance the effectiveness of their policies and incentives in addressing misinformation by considering the demographic composition of their user base.
While the impact of demographics on persuasive media effects has been acknowledged by scholars, these factors have mainly served as control variables in analyses (e.g., Tewksbury et al., 2004; Wang & Kim, 2020; Sun, 2022; Wei et al., 2017). Only a few scholars have examined the potential differences resulting from gender, age, and ethnicity (Loomba et al., 2021; Luo & Cheng, 2022; Lo & Wei, 2002; Jang & Kim, 2018; David et al., 2008; Riedl et al., 2022). For example, Lo and Wei (2002) examined gender differences in the perceptual gap related to Internet pornography. Their findings indicated that female students exhibited stronger intentions to support restricting pornography on the Internet. Regarding age, Riedl et al. (2022) found that age was positively related to support for social media content review, suggesting that older individuals tend to endorse restrictive measures on media content. Luo and Cheng (2022) found a distinct pattern wherein age exhibited a positive correlation with censorship support and a negative association with corrective actions. This outcome suggests that young people tend to oppose government intervention or censorship of COVID-19 misinformation and instead lean toward self-directed measures to address misinformation.
Based on the previous studies, we propose the following two research questions.
  • RQ1: How does the effect of perceived exposure on presumed influence on others differ by age, gender, and race/ethnicity?

  • RQ2: How does the effect of presumed influence on others on support for censorship and corrective actions differ by age, gender, and race/ethnicity?

Method

Survey respondents

Survey respondents were randomly selected from the U.S. national panel of Qualtrics, a leading survey management platform. Each respondent received $5 for completing the survey. IRB approval was obtained from a large private university in the United States. The survey was conducted in March 2021, about a year after the start of the COVID-19 pandemic, when J&J's COVID-19 vaccine had just been rolled out and some U.S. adults had received an initial dose of the Pfizer or Moderna vaccines. The final sample included 1,030 respondents: 60.4% were White, 18.7% Hispanic or Latino, 12.6% African American, and 6.3% Asian American. Female respondents made up 49.7% of the sample, and male respondents accounted for 49.2%. The mean age was 33.22 (SD = 10.47). Participants were divided into three age groups by generation: 27.5% were 18-24 years old (Generation Z), 46.0% were 25-40 years old (Millennials), and 26.5% were 41 years and older (Generation X and Boomers). Table 1 summarizes the demographic information of the participants.

Measures

Before measuring the key variables of interest, we presented respondents with nine statements containing COVID-19 vaccine-related misinformation and asked them to indicate the extent to which each claim reflects what they truly believe on a 7-point Likert scale anchored by "Very untrue of what I believe" (1) and "Very true of what I believe" (7). Sample claims include: COVID-19 vaccines cause autism; COVID-19 vaccines change people's DNA; COVID-19 vaccines are designed to control a population for non-public health purposes. These statements contained false and misleading information about the COVID-19 vaccines that was sourced from authoritative public health websites such as the WHO. Importantly, we refrained from informing respondents that these statements were misinformation. This approach was taken to prevent potential measurement errors, including social-desirability bias and the influence of leading questions. Table 2 presents all measures, means, SDs, and reliability for each measure used in the study. The composite score for each measure was calculated by averaging the scores of all items for that measure.
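For readers who wish to reproduce this scoring step, the sketch below shows one way to compute item-averaged composite scores and Cronbach's alpha of the kind reported in Table 2. It is a minimal illustration only; the data frame and column names (e.g., other_inf_1 to other_inf_4) are hypothetical placeholders, not the study's actual variable names or code.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Hypothetical survey data: four OTHER-INF items rated on a 7-point scale.
df = pd.DataFrame({
    "other_inf_1": [5, 6, 4, 7, 3],
    "other_inf_2": [5, 7, 4, 6, 2],
    "other_inf_3": [6, 6, 3, 7, 3],
    "other_inf_4": [5, 7, 4, 6, 2],
})

items = df[["other_inf_1", "other_inf_2", "other_inf_3", "other_inf_4"]]
df["OTHER_INF"] = items.mean(axis=1)                         # composite = mean of the items
print(df["OTHER_INF"].mean(), df["OTHER_INF"].std(ddof=1))   # sample M and SD of the composite
print(cronbach_alpha(items))                                 # scale reliability
```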
Perceived others’ exposure (OTHER-EXP). Respondents were asked, “Please estimate how often you think other people, in general, have encountered the previous claims regarding the COVID-19 vaccines from each of the following sources: Twitter, Facebook, YouTube, Digital Content Linked via other social media.” For each source, participants were asked to estimate the frequency of others’ exposure to the claims they read on a 7-point scale (1: Never, 7: Very often) (M = 4.48, SD = 1.67, α = .89).
Presumed influence on others (OTHER-INF). To measure respondents' presumptions about the influence of COVID-19 vaccine-related misinformation on others, we adapted items from Lim et al.'s (2020) research. Respondents indicated their agreement with four statements on a 7-point Likert scale (1: strongly disagree, 7: strongly agree) (M = 4.78, SD = 1.45, α = .89).
Support for censorship (CENSOR). Respondents were asked to indicate how much they would support social media platforms or lawmakers regulating the claims they read regarding the COVID-19 vaccines. We adapted six items from Guo and Johnson's (2020) study. All items were measured on a 7-point Likert scale (1: strongly disagree, 7: strongly agree) (M = 3.46, SD = 1.06, α = .91).
Corrective actions (CORRECT). Respondents were asked to report their willingness to engage in corrective actions against COVID-19 vaccine-related misinformation on social media. We adopted a five-item, 7-point scale (1: not at all willing, 7: very willing) from Lim et al.'s (2020) research (M = 4.40, SD = 1.71, α = .93).

Results

Testing H1 to H4

The current study predicted that perceived others’ exposure would positively predict presumed influence on others (H1) and that presumed influence on others would positively predict support for censorship (H2) and corrective action (H3). We also proposed that the effect of presumed influence on others on corrective action would be mediated by support for censorship (H4) (Figure 1). To test these hypotheses, we performed a sequential mediation analysis using Mplus 8. Table 3 summarizes the results of the mediation analysis.
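The model itself was estimated in Mplus 8. Purely as an illustration of the serial mediation logic, the sketch below bootstraps the indirect effect along the chain OTHER-EXP → OTHER-INF → CENSOR → CORRECT using ordinary least squares regressions in Python. The simulated data, coefficients, and a generic three-path setup (which includes direct paths the authors' path model may constrain) are assumptions for demonstration, not the study's data or analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulated stand-in data roughly mimicking the hypothesized causal chain.
n = 1030
exp = rng.normal(4.5, 1.7, n)                           # OTHER-EXP
inf = 0.5 * exp + rng.normal(0, 1.2, n)                 # OTHER-INF
cen = 0.45 * inf + rng.normal(0, 0.9, n)                # CENSOR
cor = 0.25 * inf + 0.55 * cen + rng.normal(0, 1.2, n)   # CORRECT


def serial_indirect(exp, inf, cen, cor):
    """a1 * a2 * b: EXP->INF, INF->CENSOR (controlling EXP), CENSOR->CORRECT (controlling EXP, INF)."""
    a1 = sm.OLS(inf, sm.add_constant(exp)).fit().params[1]
    a2 = sm.OLS(cen, sm.add_constant(np.column_stack([exp, inf]))).fit().params[2]
    b = sm.OLS(cor, sm.add_constant(np.column_stack([exp, inf, cen]))).fit().params[3]
    return a1 * a2 * b


# Percentile bootstrap confidence interval for the serial indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(serial_indirect(exp[idx], inf[idx], cen[idx], cor[idx]))

point = serial_indirect(exp, inf, cen, cor)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```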

Answering RQ1 and RQ2

We proposed RQ1 and RQ2 to investigate potential variations in the impact of IPI across different demographic groups. To answer them, the gender variable was recoded into a dichotomous classification: male and female. Due to limited numbers, other gender identifications were treated as missing data. Similarly, the ethnicity groups were recoded as a dichotomous variable: White and non-White. Age was recoded into three groups: Generation Z (18-24 years old), Millennials (25-40 years old), and Generation X and older (41 years old and above). A multiple-group path analysis was employed to examine whether differences in the structural parameters across groups were statistically significant. Cross-group invariance was assessed through a comparison of two nested models using the Chi-square test: the baseline model wherein no constraints were specified and the constrained model where each path was constrained to be invariant for the comparison paths between the two groups. Table 4 displays the results of the nested model comparison.
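The cross-group comparisons rest on a chi-square difference test between the unconstrained baseline model and a model that constrains one path to be equal across groups. The minimal sketch below shows only that comparison step; the fit statistics plugged in are hypothetical examples, not values taken from the Mplus output.

```python
from scipy.stats import chi2


def chi_square_difference(chi2_constrained: float, df_constrained: int,
                          chi2_baseline: float, df_baseline: int):
    """Nested-model test: a significant difference implies the constrained path differs across groups."""
    delta_chi2 = chi2_constrained - chi2_baseline
    delta_df = df_constrained - df_baseline
    p_value = chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p_value


# Hypothetical example: constraining one path to equality across two groups adds 1 df.
print(chi_square_difference(chi2_constrained=152.4, df_constrained=25,
                            chi2_baseline=131.8, df_baseline=24))
```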
The results indicated that several relationships differed by gender. First, the positive effect of perceived exposure of others on presumed influence was significantly stronger (p < .001) for males (β = .629, p < .001) than for females (β = .422, p < .001). Second, the difference in the positive effect of presumed influence on support for censorship between males (β = .564, p < .001) and females (β = .335, p < .001) was also significant (p < .001). Third, the positive effect of presumed influence on corrective action was significantly stronger (p < .001) for males (β = .355, p < .001) than for females (β = .137, p < .001).
Regarding the ethnicity groups, we focused on the comparison between White and non-White respondents. Our findings indicated that the effect of presumed influence on support for censorship was significantly stronger (p = .020) among White respondents (β = .478, p < .001) than among non-White respondents (β = .409, p < .001).
In examining the three age groups (Gen Zers, 18-24; Millennials, 25-40; Xers and older, 41 and above), we found significant differences in the effect of perceived exposure on presumed influence (p < .001). Specifically, the influence of perceived exposure on presumed influence was more pronounced for Millennials (β = .599, p < .001) than for Gen Zers (β = .495, p < .001) and Xers and older (β = .432, p < .001).

Discussion

Grounded in the theoretical premise of IPI, we proposed three hypotheses that assume the pathway OTHER-EXP → OTHER-INF → BEHAVIOR (i.e., CENSOR, CORRECT). Our findings support the positive correlations among these variables suggested by Shen and Huggins (2013). We observed a large effect of OTHER-EXP on OTHER-INF and positive direct effects of OTHER-INF on both CENSOR and CORRECT. The positive impact of OTHER-INF on CENSOR is compatible with the results of other studies (Cohen & Weimann, 2008; Rojas et al., 1996). In the context of misinformation-related censorship, our findings can be summarized as follows: the more people perceive the influence of misinformation on others, the stronger their intention to support platforms' banning of misinformation and legislation to stop its spread.
The current research also found a statistically significant indirect effect of OTHER-EXP on CORRECT through OTHER-INF and CENSOR. This result further confirms the mediating role of support for censorship in corrective action that has been found in previous studies (Cheng & Luo, 2021; Lim, 2017). Identifying the conditions that affect people’s corrective actions is a key to developing an effective communication strategy to encourage their voluntary engagement. The idea of inducing corrective actions through presumed influence on others is often implemented in persuasive messages that employ descriptive norms in campaigns for health prevention (Hong & Kim, 2020) and environmental conservation (Cialdini, 2003).
We also observed an age-related effect on the relationship between OTHER-EXP and OTHER-INF. Specifically, the perception of others' exposure to COVID-19 vaccine misinformation had a stronger impact on Millennials' (25-40) presumed influence of misinformation on others than on Gen Zers' (18-24). This finding aligns with the outcomes observed by Martínez-Costa et al. (2023), which showed that individuals tend to underestimate the ability of others outside their own age group to identify disinformation. In our study, respondents in the middle-aged category exhibited a higher level of this bias. However, presumed influence on others did not demonstrate age-related differential effects on individuals' intentions to support censorship or adopt corrective measures.
Regarding racial/ethnic groups, our study found that the perceived influence of misinformation on others had a stronger impact on White respondents' intention to endorse the censorship of misinformation than on that of people of color. However, this perceived influence did not generate significantly different effects on intentions to take action against misinformation between White respondents and people of color. Furthermore, we did not observe a varying impact of OTHER-EXP on OTHER-INF across racial/ethnic groups. These findings suggest that individuals from different racial/ethnic groups did not differ in how susceptible they perceived others to be to exposure to COVID-19 vaccine misinformation. However, White respondents were more inclined to support censorship of misinformation based on their presumptions about the influence of such information on others.
In summary, our findings demonstrated that the effects induced by presumed influence vary significantly across key demographic variables, contributing to the existing literature. This finding supports Davison's assertion that the hypothesized influence on others is a complex phenomenon shaped by diverse psychological, personal, and situational factors. Prior efforts to explain these effects in terms of individual differences have relied on theoretical frameworks such as self-enhancement bias (see Tal-Or, 2007) and overconfidence (see Martínez-Costa et al., 2023). Individuals often believe that their abilities are better than average or better than they really are (Martínez-Costa et al., 2023). For instance, Martínez-Costa et al. (2023) introduced and investigated the "nobody-fools-me perception," a cognitive bias characterized by overconfidence in one's ability to detect disinformation. Likewise, Wang and Kim (2020) found that individuals perceived themselves as more adept than others at distinguishing true from false COVID-19-related statements. Our study also implied the presence of an overconfidence bias, given the mismatch between individuals' perceptions and their reality. We performed a post-hoc assessment of COVID-19 vaccine-related knowledge based on ten yes-no questions. The results showed that women scored higher (M = 3.4, SD = 1.92) than men (M = 2.96, SD = 1.65; t = 3.91, df = 1017). Although female respondents demonstrated greater knowledge about the COVID-19 vaccine than male respondents, the latter group still exhibited a stronger inclination to believe that exposure to COVID-19 vaccine misinformation significantly affected others and consequently felt more inclined to take action against misinformation. Understanding the moderating roles of these demographic factors in the IPI effect can help public health agencies and social media platforms devise appropriate measures and policies to encourage specific populations to participate in combating misinformation.

Limitations and Suggestions for Future Research

The current research has the merit of using a nationally representative sample. To ensure representativeness, a random sample was drawn from the national panelists registered with Qualtrics. Furthermore, to minimize social-desirability bias, this study avoided the term "misinformation" in the questionnaire.
Despite these efforts, this research still has some limitations. First, the IPI model proposed in this study did not consider other behavioral intentions (e.g., vaccine uptake intentions) that are important to health communication researchers. In addition, we examined only whether gender, age, and race/ethnicity made a difference in the proposed IPI model. Other factors, including prior vaccination experience, vaccine-related knowledge, perceived severity of and susceptibility to disease, and emotional responses, may also shape individuals' presumptions about the influence of COVID-19 vaccine-related messages on others. Therefore, we recommend that future research expand the current IPI model to incorporate more factors and evaluate both vaccine uptake intentions and corrective actions.
Second, this research leaves an open question regarding the presumed influence of misinformation on others, as estimated by expert groups. We argue that understanding how experts perceive this influence is crucial for exploring the censorship-related aspects of the theory and for deriving meaningful policy implications. To address this gap, we recommend that future research investigate experts’ support for the censorship of vaccine misinformation on social media.
Third, we did not consider political or ideological affiliation, although respondents may perceive influence and behave differently toward the presented false claims depending on their political orientation.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Notes

Funding Information

This work was supported by the Syracuse University Collaboration for Unprecedented Success and Excellence (CUSE) grant program.

Figure 1
The Conceptual Model
Table 1
Participant Demographic Information (n = 1,030)
Demographic Characteristics N %
Gender
 Male 507 49.2
 Female 512 49.7
 Other/prefer not to answer 11 1.1
Ethnicity
 White 622 60.4
 Hispanic or Latino 193 18.7
 African American 130 12.6
 Asian American 65 6.3
 Other 20 1.9
Education
 Less than high school 22 2.1
 High school graduate or equivalent 166 16.1
 Some college 152 14.8
 Trade/technical/vocational training 61 5.9
 Associate degree 120 11.7
 Bachelor’s degree 216 21.0
 Graduate degree 293 28.4
Income
 Under $25,000 143 13.9
 $25,000 - $29,999 82 8.0
 $30,000 - $34,999 45 4.4
 $35,000 - $39,999 46 4.5
 $40,000 - $49,999 65 6.3
 $50,000 - $59,999 82 8.0
 $60,000 - $84,999 131 12.7
 $85,000 - $99,999 86 8.3
 Over $100,000 350 34.0
Age
 18 to 24 years old 283 27.5
 25 to 40 years old 474 46.0
 41 years old and older 273 26.5
Mean = 33.22 SD = 10.47
Table 2
Means, Standard Deviations, and Reliability of Measures
Measures Mean SD Reliability α
Perceived Other Exposure (OTHER-EXP) 4.48 1.67 .89
OTHER-EXP1: Twitter
OTHER-EXP2: Facebook
OTHER-EXP3: YouTube
OTHER-EXP4: Digital content linked via other social media

Presumed Other Influence (OTHER-INF) 4.78 1.45 .89
OTHER-INF1: Other people, in general, are influenced by such claims about COVID-19 vaccines.
OTHER-INF2: Other people in my age group are influenced by such claims about COVID-19 vaccines.
OTHER-INF3: Such claims about COVID-19 vaccines have a powerful impact on other people.
OTHER-INF4: Other people are likely to be persuaded by such claims about COVID-19 vaccines.

Support for Censorship (CENSOR): The extent to which you would support social media platforms or lawmakers doing each of the following: 3.46 1.06 .91
CENSOR1: Censor such claims that you read regarding the COVID-19 vaccines
CENSOR2: Support social media platforms setting up more functions to let users block such claims.
CENSOR3: Support social media platforms setting up more functions to let users report such claims.
CENSOR4: Support social media platforms closing the accounts of individuals or groups who post such claims.
CENSOR5: Support the government passing a law to clean up such claims.
CENSOR6: Support the government’s passing a law to punish individual social media users for then spreading such claims.

Corrective Actions (CORRECT): 4.40 1.71 .93
CORRECT1: I would post on my social media to counter such claims regarding COVID-19 vaccines
CORRECT2: I would leave a comment on the social media posts that make such claims regarding COVID-19 vaccines to correct the misinformation
CORRECT3: I would post a blog entry to counter such false or misleading claims regarding COVID-19 vaccines.
CORRECT4: I would join online groups that combat such claims about COVID-19 vaccines.
CORRECT5: I would flag the posts of false claims for verification.
Table 3
Standardized Coefficients for Direct and Indirect Effects of the IPI Model

Direct Paths (Estimate / S.E. / Z)
 OTHER-EXP→OTHER-INF: .54 / .03 / 19.76***
 OTHER-INF→CENSOR: .46 / .03 / 15.96***
 OTHER-INF→CORRECT: .24 / .03 / 8.15***
 CENSOR→CORRECT: .57 / .03 / 18.85***

Mediation Analysis: Indirect Paths (Estimate / S.E. / Z / 95% CI)
 OTHER-INF→CORRECT, indirect effect: .26 / .02 / 12.64*** / [.22, .31]
 OTHER-EXP→CORRECT, total effect: .27 / .02 / 11.30*** / [.23, .32]
 OTHER-EXP→CORRECT, total indirect effect: .27 / .02 / 11.30*** / [.23, .32]
 OTHER-EXP→OTHER-INF→CORRECT: .13 / .02 / 7.22*** / [.10, .17]
 OTHER-EXP→OTHER-INF→CENSOR→CORRECT: .14 / .02 / 9.45*** / [.12, .17]

Note: OTHER-EXP: perceived exposure by others; OTHER-INF: presumed influence on others; CENSOR: support for censorship; CORRECT: corrective actions.
***p < .001, **p < .01, *p < .05

Table 4
Results of Path Comparison across Different Demographic Groups

Gender (Male β / Female β / Δχ2, Male vs. Female)
 OTHER-EXP→OTHER-INF: .629*** / .422*** / 20.58***
 OTHER-INF→CENSOR: .564*** / .335*** / 15.62***
 OTHER-INF→CORRECT: .355*** / .137*** / 20.74***

Ethnicity (White β / Non-White β / Δχ2, White vs. Non-White)
 OTHER-EXP→OTHER-INF: .533*** / .495*** / 1.80
 OTHER-INF→CENSOR: .478*** / .409*** / 7.77**
 OTHER-INF→CORRECT: .212*** / .222*** / 0.09

Age (18-24 β / 25-40 β / 41+ β / Δχ2 across the three groups)
 OTHER-EXP→OTHER-INF: .495*** / .599*** / .432*** / 20.73***
 OTHER-INF→CENSOR: .417*** / .454*** / .469*** / 4.44
 OTHER-INF→CORRECT: .211*** / .246*** / .207*** / 1.23

Note: OTHER-EXP: perceived exposure by others; OTHER-INF: presumed influence on others; CENSOR: support for censorship; CORRECT: corrective actions.
***p < .001, **p < .01, *p < .05

References

Armitage, R. (2021). Online 'anti-vax' campaigns and COVID-19: Censorship is not the solution. Public Health, 190, e29-e30. https://doi.org/10.1016/j.puhe.2020.12.005
Baek, Y. M., Kang, H., & Kim, S. (2019). Fake news should be regulated because it influences both "others" and "me". Mass Communication and Society, 22(3), 301-323. https://doi.org/10.1080/15205436.2018.1562076
Barnidge, M., & Rojas, H. (2014). Hostile media perceptions, presumed media influence, and political talk: Expanding the corrective action hypothesis. International Journal of Public Opinion Research, 26(2), 135-156. https://doi.org/10.1093/ijpor/edt032
Broniatowski, D. A., Dredze, M., & Ayers, J. W. (2021). "First do no harm": Effective communication about COVID-19 vaccines. American Journal of Public Health, 111(6), 1055-1057. https://doi.org/10.2105/ajph.2021.306288
Cheng, Y., & Luo, Y. (2021). The presumed influence of digital misinformation: Examining US public's support for governmental restrictions versus corrective action in the COVID-19 pandemic. Online Information Review, 45(4), 834-852. https://doi.org/10.1108/OIR-08-2020-0386
Chia, S. C. (2006). How peers mediate media influence on adolescents' sexual attitudes and sexual behavior. Journal of Communication, 56(3), 585-606. https://doi.org/10.1111/j.1460-2466.2006.00302.x
Cho, H., Shen, L., & Peng, L. (2021). Examining and extending the influence of presumed influence hypothesis in social media. Media Psychology, 24(3), 413-435. https://doi.org/10.1080/15213269.2020.1729812
Chung, M. (2023). What's in the black box? How algorithmic knowledge promotes corrective and restrictive actions to counter misinformation in the USA, the UK, South Korea and Mexico. Internet Research, 33(5), 1971-1989. https://doi.org/10.1108/INTR-07-2022-0578
Chung, S., & Moon, S. I. (2016). Is the third-person effect real? A critical examination of rationales, testing methods, and previous findings of the third-person effect on censorship attitudes. Human Communication Research, 42, 312-337.
Cialdini, R. B. (2003). Crafting normative messages to protect the environment. Current Directions in Psychological Science, 12(4), 105-109. https://doi.org/10.1111/1467-8721.01242
Cohen, J., & Weimann, G. (2008). Who's afraid of reality shows? Exploring the effects of perceived influence of reality shows and the concern over their social effects on willingness to censor. Communication Research, 35(3), 382-397. https://doi.org/10.1177/0093650208315964
David, P., Morrison, G., Johnson, M. A., & Ross, F. (2002). Body image, race, and fashion models: Social distance and social identification in third-person effects. Communication Research, 29(3), 270-294.
Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47(1), 1-15. https://doi.org/10.1086/268763
Davison, W. P. (1996). The third-person effect revisited. International Journal of Public Opinion Research, 8(2), 113-119. https://doi.org/10.1093/ijpor/8.2.113
Dohle, M., Bernhard, U., & Kelm, O. (2017). Presumed media influences and demands for restrictions: Using panel data to examine the causal direction. Mass Communication and Society, 20(5), 595-613. https://doi.org/10.1080/15205436.2017.1303072
Ecker, U. K. H., Lewandowsky, S., Fenton, O., & Martin, K. (2014). Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Memory & Cognition, 42(2), 292-304. https://doi.org/10.3758/s13421-013-0358-x
Eveland, W. P., Jr., & McLeod, D. M. (1999). The effect of social desirability on perceived media impact: Implications for third-person perceptions. International Journal of Public Opinion Research, 11(4), 315-333. https://doi.org/10.1093/ijpor/11.4.315
Eveland, W. P., Jr., Nathanson, A. I., Detenber, B. H., & McLeod, D. M. (1999). Rethinking the social distance corollary: Perceived likelihood of exposure and the third-person perception. Communication Research, 26(3), 275-302. https://doi.org/10.1177/009365099026003001
Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y., & Babin, B. J. (2016). Common methods variance detection in business research. Journal of Business Research, 69(8), 3192-3198. https://doi.org/10.1016/j.jbusres.2015.12.008
Golan, G. J., & Lim, J. S. (2016). Third-person effect of ISIS's recruitment propaganda: Online political self-efficacy and social media activism. International Journal of Communication, 10(1), 4681-4701. http://ijoc.org/index.php/ijoc/article/view/5551/1792
Gunther, A. C. (1995). Overrating the X-rating: The third-person perception and support for censorship of pornography. Journal of Communication, 45, 27-38.
Gunther, A. C. (1998). The persuasive press inference: Effects of mass media on perceived public opinion. Communication Research, 25, 481-499. https://doi.org/10.1177/009365098025005002
Gunther, A. C., & Storey, J. D. (2003). The influence of presumed influence. Journal of Communication, 53(2), 199-215. https://doi.org/10.1111/j.1460-2466.2003.tb02586.x
Guo, L., & Johnson, B. G. (2020). Third-person effect and hate speech censorship on Facebook. Social Media + Society, 6(2), 2056305120923003. https://doi.org/10.1177/2056305120923003
Harff, D., Bollen, C., & Schmuck, D. (2022). Responses to social media influencers' misinformation about COVID-19: A pre-registered multiple-exposure experiment. Media Psychology, 25(6), 831-850. https://doi.org/10.1080/15213269.2022.2080711
Hoffner, C., Buchanan, M., Anderson, J. D., Hubbs, L. A., Kamigaki, S. K., Kowalczyk, L., Pastorek, A., Plotkin, R. S., & Silberg, K. J. (1999). Support for censorship of television violence: The role of the third-person effect and news exposure. Communication Research, 26, 726-742.
Hong, Y., & Kim, S. (2020). Influence of presumed media influence for health prevention: How mass media indirectly promote health prevention behaviors through descriptive norms. Health Communication, 35(14), 1800-1810. https://doi.org/10.1080/10410236.2019.1663585
Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in Human Behavior, 80, 295-302.
Koo, A. Z.-X., Su, M.-H., Lee, S., Ahn, S.-Y., & Rojas, H. (2021). What motivates people to correct misinformation? Examining the effects of third-person perceptions and perceived norms. Journal of Broadcasting & Electronic Media, 65(1), 111-134. https://doi.org/10.1080/08838151.2021.1903896
Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7), e2210666120.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094-1096. https://doi.org/10.1126/science.aao2998
Lim, J. S. (2017). The third-person effect of online advertising of cosmetic surgery: A path model for predicting restrictive versus corrective actions. Journalism & Mass Communication Quarterly, 94(4), 972-993. https://doi.org/10.1177/1077699016687722
Lim, J. S., Chock, T. M., & Golan, G. J. (2020). Consumer perceptions of online advertising of weight loss products: The role of social norms and perceived deception. Journal of Marketing Communications, 26(2), 145-165. https://doi.org/10.1080/13527266.2018.1469543
Lim, J. S., & Golan, G. J. (2011). Social media activism in response to the influence of political parody videos on YouTube. Communication Research, 38(5), 710-727. https://doi.org/10.1177/0093650211405649
Lo, V. H., & Wei, R. (2002). Third-person effect, gender, and pornography on the Internet. Journal of Broadcasting & Electronic Media, 46(1), 13-33.
Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337-348. https://doi.org/10.1038/s41562-021-01056-1
Luo, Y., & Cheng, Y. (2021). The presumed influence of COVID-19 misinformation on social media: Survey research from two countries in the global health crisis. International Journal of Environmental Research and Public Health, 18(11), 5505. https://doi.org/10.3390/ijerph18115505
Martínez-Costa, M.-P., López-Pan, F., Buslón, N., & Salaverría, R. (2023). Nobody-fools-me perception: Influence of age and education on overconfidence about spotting disinformation. Journalism Practice, 17(10), 2084-2102. https://doi.org/10.1080/17512786.2022.2135128
McCabe, D. (2021, December 2). Lawmakers target big tech 'amplification.' What does that mean? The New York Times, B1.
McLeod, D. M., Eveland, W. P., Jr., & Nathanson, A. I. (1997). Support for censorship of violent and misogynic rap lyrics: An analysis of the third-person effect. Communication Research, 24(2), 153-174. https://doi.org/10.1177/009365097024002003
McNaghten, A. D., Brewer, N. T., Hung, M.-C., Lu, P.-J., Daskalakis, D., Abad, N., Kriss, J., Black, C., Wilhelm, E., Lee, J. T., Gundlapalli, A., Cleveland, J., Elam-Evans, L., Bonner, K., & Singleton, J. (2022). COVID-19 vaccination coverage and vaccine confidence by sexual orientation and gender identity - United States, August 29-October 30, 2021. MMWR. Morbidity and Mortality Weekly Report, 71(5), 171-176. https://doi.org/10.15585/mmwr.mm7105a3
Naab, T. K., Naab, T., & Brandmeier, J. (2021). Uncivil user comments increase users' intention to engage in corrective actions and their support for authoritative restrictive actions. Journalism & Mass Communication Quarterly, 98(2), 566-588. https://doi.org/10.1177/1077699019886586
Paek, H.-J., Gunther, A. C., McLeod, D. M., & Hove, T. (2011). How adolescents' perceived media influence on peers affects smoking decisions. Journal of Consumer Affairs, 45(1), 123-146. https://doi.org/10.1111/j.1745-6606.2010.01195.x
Park, S. Y. (2005). The influence of presumed media influence on women's desire to be thin. Communication Research, 32(5), 594-614. https://doi.org/10.1177/0093650205279350
Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: The role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632-1649. https://doi.org/10.1080/1369118X.2021.1874040
Rojas, H. (2010). "Corrective" actions in the public sphere: How perceptions of media and media effects shape political behaviors. International Journal of Public Opinion Research, 22(3), 343-363. https://doi.org/10.1093/ijpor/edq018
Rojas, H., Shah, D. V., & Faber, R. J. (1996). For the good of others: Censorship and the third-person effect. International Journal of Public Opinion Research, 8(2), 163-186. https://doi.org/10.1093/ijpor/8.2.163
Schraer, R. (2022, January 19). Should bad science be censored on social media? BBC News. https://www.bbc.com/news/technology-60036861
Shen, L., & Huggins, C. (2013). Testing the model of influence of presumed influence in a boundary condition: The impact of question order. Human Communication Research, 39(4), 470-491. https://doi.org/10.1111/hcre.12013
Sun, Y. (2022). Verification upon exposure to COVID-19 misinformation: Predictors, outcomes, and the mediating role of verification. Science Communication, 44(3), 261-291. https://doi.org/10.1177/10755470221088927
Sun, Y., Oktavianus, J., Wang, S., & Lu, F. (2022). The role of influence of presumed influence and anticipated guilt in evoking social correction of COVID-19 misinformation. Health Communication, 37(11), 1368-1377. https://doi.org/10.1080/10410236.2021.1888452
Sun, Y., Shen, L., & Pan, Z. (2008). On the behavioral component of the third-person effect. Communication Research, 35(2), 257-278. https://doi.org/10.1177/0093650207313167
Tal-Or, N. (2007). Age and third-person perception in response to positive product advertisements. Mass Communication and Society, 10(4), 403-422. https://doi.org/10.1080/15205430701580557
Tal-Or, N., Cohen, J., Tsfati, Y., & Gunther, A. C. (2010). Testing causal direction in the influence of presumed media influence. Communication Research, 37(6), 801-824. https://doi.org/10.1177/0093650210362684
Tandoc, E. C., Lim, D., & Ling, R. (2020). Diffusion of disinformation: How social media users respond to fake news and why. Journalism, 21(3), 381-398. https://doi.org/10.1177/1464884919868325
Tewksbury, D. (2004). Preparations for Y2K: Revisiting the behavioral component of the third-person effect. Journal of Communication, 54(1), 138-155. https://doi.org/10.1111/j.1460-2466.2004.tb02618.x
The Royal Society. (2022, January 19). Royal Society cautions against censorship of scientific misinformation online [Press release]. https://royalsociety.org/news/2022/01/scientific-misinformation-report/
Thompson, S. A., & Alba, D. (2022). Heroic tales spread fast, and the facts trail behind. The New York Times, B1.
Tsfati, Y. (2007). Hostile media perceptions, presumed media influence, and minority alienation: The case of Arabs in Israel. Journal of Communication, 57, 632-651. https://doi.org/10.1111/j.1460-2466.2007.00361.x
Tsfati, Y., & Cohen, J. (2005). The influence of presumed media influence on democratic legitimacy: The case of Gaza settlers. Communication Research, 32(6), 794-821. https://doi.org/10.1177/0093650205281057
Vijaykumar, S., Jin, Y., Rogerson, D., Lu, X., Sharma, S., Maughan, A., & Morris, D. (2021). How shades of truth and age affect responses to COVID-19 (mis)information: Randomized survey experiment among WhatsApp users in UK and Brazil. Humanities and Social Sciences Communications, 8(1), 88. https://doi.org/10.1057/s41599-021-00752-7
Wang, S., & Kim, K. J. (2020). Restrictive and corrective responses to uncivil user comments on news websites: The influence of presumed influence. Journal of Broadcasting & Electronic Media, 1-20. https://doi.org/10.1080/08838151.2020.1757368
Wei, R., Lo, V. H., & Golan, G. (2017). Examining the relationship between presumed influence of US news about China and the support for the Chinese government's global public relations campaigns. International Journal of Communication, 11, 18.
Wintterlin, F., Frischlich, L., Boberg, S., Schatto-Eckrodt, T., Reer, F., & Quandt, T. (2021). Corrective actions in the information disorder: The role of presumed media influence and hostile media perceptions for the countering of distorted user-generated content. Political Communication, 38(6), 773-791. https://doi.org/10.1080/10584609.2021.1888829