Abstract: |
The rise of software bots on social media has helped expand the spread of disinformation. This increased spread of disinformation has contributed to polarization in social networks, for example the formation of echo chambers (Tornberg, PloS one, 2018). Social media is a target for disinformation and its polarizing effects due to its low access cost and the ease of sharing and discussing stories without fact-checking oversight (Shu et al., SIGKDD Explor. Newsl., 2017). Without intervention, disinformation will continue to spread through social networks and can harm society, as when anti-vaccine disinformation (Loomba et al., Nature human behaviour, 2021; Burki, The Lancet Digital Health, 2019; Cornwall, Science, 2020) convinces people not to get inoculated, or through potential influence on the 2016 United States presidential election (Badawy et al., IEEE/ACM ASONAM, 2018; Fourney et al., CIKM, 2017). There is a growing literature applying agent-based modeling to the spread of disinformation (Ross et al., European Journal of Information Systems, 2019; Rajabi et al., AAMAS, 2020; Beskow et al., IEEE WSC, 2019; Brainard et al., Revue d’epidemiologie et de sante publique, 2020). We contribute to these studies by developing an agent-based model of a social network in which agents can share ideas. The social network is represented as a graph, with agents as nodes and agents' social connections as edges. Each agent has a set of opinions that it updates by interacting with its connections and applying a generalized Attraction-Repulsion Model (Axelrod et al., PNAS, 2021). We introduce an attacker agent whose goal is to maximize the spread of disinformation in the network. The attacking agent can create edges with other agents in the network and adjust the extremeness of its disinformation.
If non-attacking agents detect that the attacking agent's opinions are too extreme, they sever their connection to the attacking agent. While these rules are known to the attacker, the optimal strategy is not known a priori. We use reinforcement learning so that the attacking agent learns the strategy that maximizes the spread of disinformation in the social network.