Evaluating the Effectiveness of Mitigation Policies Against Disinformation

By David J. Butts and Michael S. Murillo

Editor's Note: This article originally appeared in SIAM News on June 21, 2023 (https://sinews.siam.org/Details-Page/evaluating-the-effectiveness-of-mitigation-policies-against-disinformation).

The proliferation of disinformation, misinformation, and fake news has become increasingly common in recent years. The development of powerful chatbots, such as ChatGPT, has further fueled concerns about the potential for information manipulation with malicious intent [7]. Disinformation campaigns impacted the 2016 U.S. presidential election [1, 5], contributed to vaccine hesitancy during the COVID-19 pandemic [2, 4, 9], and aided the rise of movements like QAnon [8]. Researchers have developed multiple methods to identify and combat disinformation, including disinformation tracking, bot detection, and credibility scoring. Nevertheless, the spread of malicious information remains a significant problem [6]. We have therefore explored the effectiveness of various mitigation strategies—such as content moderation, user education, and counter-campaigns—to combat disinformation [3].

Figure 1. The percentage of agents with opinion \(B\) in steady state (late time) for a given percentage \(p\) of committed minorities, computed with the binary agreement model. The population begins with \(n_B = 1-p\); only a small minority of agents initially believe \(A\). There is a tipping point, highlighted in gray, after which all agents will agree with the minority opinion \(A\). Figure courtesy of David Butts.
We utilized mathematical models to test our strategies in a repeatable way without the unintended consequences that can arise from experiments on real social media. Several different models facilitate the study of disinformation spread, and each offers unique insights into behavior and possible mitigation techniques. For our study, we chose the binary agreement model with committed minorities [10]; in this model, each individual can believe the truth \((B)\), the disinformation \((A)\), or remain undecided \((AB)\). The dynamics of disinformation spread are modeled as a series of time steps in which all agents interact with a randomly chosen neighbor. The agent and its neighbor are randomly assigned the roles of “speaker” and “listener.” The speaker has some probability of sharing their opinion with the listener, and the outcome of the exchange determines the updated opinions of both parties. However, those who are committed to disinformation will always hold opinion \(A\), regardless of their interactions.
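The following minimal sketch illustrates one such exchange, following the update rules of the binary agreement model in [10]: if the listener already holds the spoken opinion, both agents adopt it, and otherwise the listener becomes undecided. The function and variable names are illustrative rather than taken from [3].

```python
import random

# A minimal sketch of one speaker-listener exchange in the binary agreement
# model [10]. Opinions are "A" (disinformation), "B" (truth), or "AB"
# (undecided); agents in `committed` never abandon opinion "A".
def interact(speaker, listener, opinions, committed, rng=random):
    spoken = opinions[speaker]
    if spoken == "AB":                    # undecided speakers voice A or B at random
        spoken = rng.choice(["A", "B"])

    if spoken in opinions[listener]:      # listener already holds the spoken opinion:
        if speaker not in committed:      # both agents adopt it (committed agents
            opinions[speaker] = spoken    # always keep "A")
        if listener not in committed:
            opinions[listener] = spoken
    elif listener not in committed:       # otherwise the listener becomes undecided
        opinions[listener] = "AB"

opinions = {0: "B", 1: "A", 2: "AB"}
committed = {1}                           # agent 1 is committed to the disinformation
interact(speaker=1, listener=0, opinions=opinions, committed=committed)
print(opinions)                           # {0: 'AB', 1: 'A', 2: 'AB'}
```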

Although the binary agreement model is relatively simple, it captures the essential features of disinformation propagation, including a tipping point in the majority opinion that allows for minority rule. In sociology, a tipping point is a threshold at which a group suddenly and rapidly changes its collective behavior. We can examine the tipping point in the binary agreement model by simulating it on a population that begins by believing the truth, except for \(p\) percent who are committed to the disinformation. Once the simulation reaches a steady state, we record the proportion of individuals who believe the truth versus the disinformation; we repeat this procedure for many values of \(p\) (see Figure 1). As the size of the committed minority grows, the number of individuals who believe the truth at steady state gradually declines. A tipping point occurs when approximately 10 percent of the population is committed to the disinformation, at which point the committed minority suddenly overtakes the majority opinion. By analyzing the properties of this tipping point (highlighted in gray in Figure 1), we can establish metrics that quantify the effectiveness of our disinformation mitigation strategies.
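As a concrete illustration, the self-contained sketch below runs this sweep on a well-mixed population rather than a network; the population size, number of interactions, and sampled values of \(p\) are illustrative choices, not those of [3].

```python
import random

def steady_state_nB(n_agents=500, p=0.10, n_steps=300_000, seed=0):
    """Run the binary agreement model on a well-mixed population with a
    fraction p of agents committed to "A" and return the final fraction
    of agents holding opinion "B"."""
    rng = random.Random(seed)
    committed = set(range(int(p * n_agents)))
    opinions = {i: "A" if i in committed else "B" for i in range(n_agents)}

    for _ in range(n_steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        spoken = rng.choice(["A", "B"]) if opinions[speaker] == "AB" else opinions[speaker]
        if spoken in opinions[listener]:          # both collapse to the spoken opinion
            if speaker not in committed:
                opinions[speaker] = spoken
            if listener not in committed:
                opinions[listener] = spoken
        elif listener not in committed:           # listener becomes undecided
            opinions[listener] = "AB"

    return sum(op == "B" for op in opinions.values()) / n_agents

# Below the tipping point most agents keep opinion B; above it, n_B collapses.
for p in (0.02, 0.06, 0.08, 0.10, 0.12):
    print(f"p = {p:.2f}  ->  n_B = {steady_state_nB(p=p):.2f}")
```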

We applied the binary agreement model with committed minorities to agents who are connected by artificial social networks, effectively modeling the social networks with a weighted, directed graph \(G=(V,E)\). The graph’s vertices represent individuals \(v_i\in V\), and an edge between individuals, \((v_i,v_j,w_{ij}) \in E\), represents a connection that allows \(v_i\) to interact with \(v_j\) (with probability \(w_{ij}\)). Animation 1 displays the time dynamics of the model on a Watts-Strogatz network for multiple values of \(p\), where \(p\) percent of the individuals are initially committed to \(A\) and the remaining individuals hold opinion \(B\) without being committed.
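A sketch of this network construction with the networkx library appears below; the graph size, neighborhood size, rewiring probability, and edge-weight distribution are illustrative assumptions rather than the settings of [3].

```python
import random
import networkx as nx

# An artificial social network: a Watts-Strogatz graph converted to a
# weighted, directed graph G = (V, E). Here the third argument of
# watts_strogatz_graph is the rewiring probability, not the committed fraction.
ws = nx.watts_strogatz_graph(n=1000, k=6, p=0.1)
G = ws.to_directed()
for v_i, v_j in G.edges():
    G[v_i][v_j]["weight"] = random.uniform(0.1, 1.0)   # interaction probability w_ij

def choose_listener(G, v_i, rng=random):
    """Pick a random neighbor of v_i; the pair interacts with probability w_ij,
    so return None when no interaction occurs this time step."""
    neighbors = list(G.successors(v_i))
    if not neighbors:
        return None
    v_j = rng.choice(neighbors)
    return v_j if rng.random() < G[v_i][v_j]["weight"] else None
```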

Animation 1. Example of numerical simulations of the binary agreement model on a Watts-Strogatz network where zero, two, four, six, eight, 10, and 12 percent of the population is committed to \(A\). We depict a simulated social network on the left, where blue nodes represent individuals with opinion \(B\), yellow nodes represent individuals with opinion \(AB\), and red nodes represent individuals with opinion \(A\). At the beginning of each simulation, all noncommitted individuals share opinion \(B\) and all committed individuals share opinion \(A\). The blue line in the top right indicates the percent of individuals with opinion \(B\) \((n_B)\) versus the number of completed iterations. Once a simulation reaches a steady state, the black line in the lower right shows the number of individuals with opinion \(B\) at the steady state versus the percent of individuals who are committed to \(A\) \((p)\). The characteristic tipping point of the model is evident once 10 percent of the population is committed to \(A\). Animation courtesy of David Butts.

We implemented six disinformation mitigation strategies: two strategies each for content moderation, education, and counter-campaigns. The first content moderation strategy targeted influential committed individuals, while the second targeted committed individuals randomly. We used degree and betweenness centrality to identify influential nodes, as individuals with both a high degree and betweenness centrality are more likely to directly spread disinformation to many people and disparate parts of the network. We defined the influence of a node as

\[{\mathcal{I}}(v) = \frac{1}{2}\bigg(C_D(v) + C_B(v)\bigg),\tag1\]

where \(C_D\) is the degree centrality, \(C_B\) is the betweenness centrality, and the \(1/2\) is a normalization. We varied the strength of content moderation for both strategies by removing different numbers of committed individuals.
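The sketch below computes this influence score with networkx's normalized centralities and applies both content moderation strategies; the helper names and the use of networkx's built-in normalization are assumptions for illustration.

```python
import random
import networkx as nx

def influence_scores(G):
    """Influence I(v) from equation (1): the average of degree and
    betweenness centrality (networkx's normalized versions)."""
    C_D = nx.degree_centrality(G)
    C_B = nx.betweenness_centrality(G)
    return {v: 0.5 * (C_D[v] + C_B[v]) for v in G}

def remove_committed(G, committed, k, targeted=True):
    """Content moderation sketch: remove k committed agents, either the
    most influential ones (targeted) or a random selection."""
    if targeted:
        scores = influence_scores(G)
        targets = sorted(committed, key=scores.get, reverse=True)[:k]
    else:
        targets = random.sample(sorted(committed), k)
    G.remove_nodes_from(targets)
    committed.difference_update(targets)
    return targets
```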

Our education strategies consist of an early and a late strategy. We modeled early education as training individuals to be generally more skeptical of news and hence less likely to hold rigid opinions. Rather than having agents with fixed opinion \(A\), we implemented early education by allowing for a small, finite probability that the initial \(A\) population would change their opinion. This type of skepticism ensures that incorrect beliefs can eventually be displaced. In contrast, late education in our model targeted social media users and aimed to push them towards the truth by having fact checkers label certain posts as disinformation. We implemented late education by making undecided individuals (those with opinion \(AB\)) more likely to switch to the true state and by ensuring that those who already believe the truth are inclined to keep that opinion.
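One possible encoding of these two strategies is sketched below; the parameter names and values are hypothetical and are not taken from [3].

```python
import random

EPS_EARLY = 0.01   # early education: chance per step that a committed agent reconsiders
BIAS_LATE = 0.75   # late education: chance that fact checking pulls an undecided agent to "B"

def early_education_step(opinions, committed, rng=random):
    """Early education: committed agents are no longer perfectly rigid and,
    with a small probability per step, give up their commitment and become
    undecided."""
    for agent in list(committed):
        if rng.random() < EPS_EARLY:
            committed.discard(agent)
            opinions[agent] = "AB"

def late_education_step(opinions, committed, rng=random):
    """Late education: fact-check labels nudge undecided agents toward the
    truth, while agents who already hold "B" are left to keep it."""
    for agent, opinion in opinions.items():
        if agent not in committed and opinion == "AB" and rng.random() < BIAS_LATE:
            opinions[agent] = "B"
```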

We executed the counter-campaign strategies by committing a group of individuals to the truth, with the goal of combating people who are dedicated to disinformation. The two strategies varied only in the size of the counter-campaign. We assumed that a counter-campaign would begin in a local region of the graph, so we placed the individuals who were committed to disinformation on the opposite side of the graph from those who were committed to the truth.
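A sketch of one way to seed such a counter-campaign appears below; defining "the opposite side of the graph" by hop distance from the disinformation-committed nodes is an illustrative assumption.

```python
import networkx as nx

def seed_counter_campaign(G, committed_A, size):
    """Return `size` nodes to commit to the truth ("B"), clustered around
    the node that is farthest (in hops) from the agents committed to "A"."""
    UG = G.to_undirected()
    dist_from_A = nx.multi_source_dijkstra_path_length(UG, committed_A, weight=None)
    seed = max(dist_from_A, key=dist_from_A.get)       # node farthest from the A cluster
    dist_from_seed = nx.single_source_shortest_path_length(UG, seed)
    return set(sorted(dist_from_seed, key=dist_from_seed.get)[:size])
```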

Figure 2. Numerical exploration of our three policies of (2a) content moderation, (2b) education, and (2c) counter-campaigns; the black curve in each subfigure is a common baseline where no policy exists. We implemented several strategies to test each policy. There are three characteristic changes: softening of the tipping point, changing the \(p\) value of the tipping point, and raising the baseline above the tipping point. Our early-education-based policy had the strongest effect on the tipping point. Figure courtesy of David Butts.

Figure 2 showcases the effects of our mitigation strategies, with the line from Figure 1 overlaid in black. Figure 2a examines the impact of content moderation. The blue line represents the removal of individuals based on influence, and the orange line represents the random removal of individuals. After averaging over the different numbers of removed individuals, our analysis indicated that content moderation shifted the tipping point to larger values of \(p\) but had a relatively minor impact overall. The removal of individuals based on our influence metric did not perform significantly better than the random removal of individuals.

Figure 2b examines the effects of education; early education is depicted in orange, late education is depicted in green, and a combination of both is depicted in blue. Early education had the most substantial impact of any strategy that we studied. Finally, Figure 2c explores the results of counter-campaigns. The small counter-campaign (in orange) had a negligible effect and left essentially only its own members believing the truth, while the larger counter-campaign (in blue) also persuaded some uncommitted individuals to believe the truth.

Our study addressed the growth of misinformation, disinformation, and fake news by examining various mitigation strategies. We implemented the binary agreement model on artificial social networks as a proxy for the spread of disinformation on real social networks. After applying our mitigation strategies to this model, we found that content moderation had the smallest impact, followed by counter-campaigns. Early education had the largest effect by far, suggesting that education efforts that remind individuals to be skeptical of the information they encounter can be a successful tactic against misinformation.

David J. Butts delivered a minisymposium presentation on this research at the 2023 SIAM Conference on Computational Science and Engineering (CSE23), which took place in Amsterdam, the Netherlands, earlier this year. He received funding to attend CSE23 through a SIAM Student Travel Award. To learn more about Student Travel Awards and submit an application, visit the online page.

SIAM Student Travel Awards are made possible in part by the generous support of our community. To make a gift to the Student Travel Fund, visit the SIAM website.

References
[1] Badawy, A., Ferrara, E., & Lerman, K. (2018). Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign. In 2018 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM) (pp. 258-265). Barcelona, Spain: Institute of Electrical and Electronics Engineers and Association for Computing Machinery. 
[2] Burki, T. (2019). Vaccine misinformation and social media. The Lancet Digit. Health, 1(6), E258-E259.
[3] Butts, D.J., Bollman, S.A., & Murillo, M.S. (2023). Effectiveness of mitigation policies against disinformation on graphs. In preparation.
[4] Cornwall, W. (2020). Officials gird for a war on vaccine misinformation. Science, 369(6499), 14-15.
[5] Fourney, A., Racz, M.Z., Ranade, G., Mobius, M., & Horvitz, E. (2017). Geographic and temporal trends in fake news consumption during the 2016 US presidential election. In CIKM17: Proceedings of the 2017 ACM conference on information and knowledge management (pp. 2071-2074). Singapore: Association for Computing Machinery.
[6] Gallo, J.A., & Cho, C.Y. (2021). Social media: Misinformation and content moderation issues for Congress (Congressional Research Service Report R46662). Washington, D.C.
[7] Hsu, T., & Thompson, S.A. (2023, February 8). Disinformation researchers raise alarms about A.I. chatbots. The New York Times. Retrieved from https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html.
[8] Hughey, M.W. (2021). The who and why of QAnon’s rapid rise. New Labor Forum, 30(3), 76-87.
[9] Loomba, S., de Figueiredo, A., Piatek, S.J., de Graaf, K., & Larson, H.J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behav., 5(3), 337-348.
[10] Xie, J., Sreenivasan, S., Korniss, G., Zhang, W., Lim, C., & Szymanski, B.K. (2011). Social consensus through the influence of committed minorities. Phys. Rev. E, 84(1), 011130.

David J. Butts is a Ph.D. candidate in the Department of Computational Mathematics, Science and Engineering at Michigan State University. His research interests include agent-based modeling, reinforcement learning, and high-performance computing. 
Michael S. Murillo is a professor in the Department of Computational Mathematics, Science and Engineering at Michigan State University. 