2021 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation
July 06-09, 2021, Virtual

Disinformation Challenge

Overview

Disinformation is a growing problem on the web, and it often spreads through multiple social media platforms. While disinformation itself is not a new problem, its spread, the rate of that spread, and its potential for global impact are all growing with the use of social media. Social media providers themselves are concerned and are examining what can be done in this space.

For the second of this year’s SBP-BRiMS challenge problems, we ask participants to consider the spread of COVID-19 disinformation on the web.

Specific problems of interest are:

  • Identify and assess the spread of four or more disinformation stories.
  • What disinformation is being spread, by whom, and with what impact?
  • Using these data, build a model to test two different methods of countering disinformation (see the simulation sketch after this list).
  • What evidence is there of actor coordination, and how is disinformation being used in this coordinated activity?
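
By way of illustration only, here is a minimal sketch of such a model: an independent-cascade simulation on a synthetic network (using networkx) that compares a debunking intervention against a prebunking one. The graph, parameters, and intervention effects are all illustrative assumptions, not part of the challenge specification.

    import random

    import networkx as nx

    def simulate_spread(g, seeds, p_share, immune=frozenset()):
        """Independent-cascade spread of one story; returns the set of exposed nodes."""
        exposed, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in g.neighbors(node):
                if nbr not in exposed and nbr not in immune and random.random() < p_share:
                    exposed.add(nbr)
                    frontier.append(nbr)
        return exposed

    def run_experiment(n=1000, k=8, p_rewire=0.1, p_share=0.12, trials=200):
        g = nx.watts_strogatz_graph(n, k, p_rewire)  # stand-in for a real follower graph
        seeds = random.sample(list(g.nodes), 5)
        totals = {"baseline": 0.0, "debunking": 0.0, "prebunking": 0.0}
        for _ in range(trials):
            totals["baseline"] += len(simulate_spread(g, seeds, p_share))
            # Countermeasure 1: debunking, modeled as halving the per-edge sharing probability.
            totals["debunking"] += len(simulate_spread(g, seeds, p_share * 0.5))
            # Countermeasure 2: prebunking, modeled as immunizing a random 20% of users.
            immune = frozenset(random.sample(list(g.nodes), n // 5))
            totals["prebunking"] += len(simulate_spread(g, seeds, p_share, immune))
        for name, total in totals.items():
            print(f"{name}: mean exposed = {total / trials:.1f}")

    run_experiment()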

If you are working on the Parler data disinformation challenge, please access the data from https://zenodo.org/record/4442460#.YHXKomRKh3y.
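
The record’s files can also be listed programmatically through Zenodo’s REST API. A minimal sketch (the field names reflect the Zenodo API at the time of writing):

    import requests

    # Record 4442460 is the Parler dataset linked above.
    record = requests.get("https://zenodo.org/api/records/4442460", timeout=30).json()
    for f in record["files"]:
        print(f["key"], f["size"], f["links"]["self"])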

Additional specific problems of interest are:

  • Can we automatically and accurately classify a message as containing disinformation? And, relatedly, what characteristics of disinformation make it distinct from other information? Eligible entries might compare against Truthy (see the classifier sketch after this list).
  • What are the characteristics of individuals or groups that put them at risk of succumbing to disinformation? And, relatedly, how can the extent to which an individual or group has succumbed to disinformation be measured? The algorithm for measuring risk must be provided and the validation strategy explained (see the risk-score sketch after this list).
  • How does disinformation spread within and across media? And, relatedly, does disinformation spread differently than other information? How spread was measured must be explained (see the cascade-metrics sketch after this list).
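
For the classification question, a minimal baseline sketch using scikit-learn, with toy placeholder messages standing in for a real labeled corpus (entries would substitute their own data and stronger models):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Toy placeholder data; a real entry would use a labeled disinformation corpus.
    texts = [
        "miracle cure kills the virus overnight, doctors hate it",
        "health agency publishes updated vaccination schedule",
        "secret lab memo proves the outbreak was planned",
        "new peer-reviewed study measures antibody duration",
    ] * 25  # repeated only so the train/test split below is non-trivial
    labels = [1, 0, 1, 0] * 25  # 1 = disinformation, 0 = other information

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.25, stratify=labels, random_state=0)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))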
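
For the risk question, one hypothetical risk score (our illustrative assumption, not a required definition) is the fraction of accounts a user follows that have shared flagged disinformation; validation might test whether high scores predict later sharing by that user:

    from collections import defaultdict

    def exposure_risk(shares, edges):
        """shares: {account: set of flagged story ids it shared};
        edges: iterable of (follower, followed) pairs."""
        follows = defaultdict(set)
        for follower, followed in edges:
            follows[follower].add(followed)
        return {user: sum(1 for f in followed if shares.get(f)) / len(followed)
                for user, followed in follows.items()}

    # Tiny worked example with hypothetical accounts.
    shares = {"a": {"story1"}, "b": set(), "c": {"story2"}}
    edges = [("u", "a"), ("u", "b"), ("v", "a"), ("v", "b"), ("v", "c")]
    print(exposure_risk(shares, edges))  # u: 1 of 2 followed accounts shared; v: 2 of 3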
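
For the spread question, entries might report cascade size, depth, and breadth per story and per platform. A minimal sketch over hypothetical repost records, using networkx:

    import collections

    import networkx as nx

    # Hypothetical repost records: (reposter, original_poster, story_id).
    reposts = [("b", "a", "s1"), ("c", "a", "s1"), ("d", "b", "s1"),
               ("e", "d", "s1"), ("g", "f", "s2")]

    def cascade_stats(reposts, story_id):
        g = nx.DiGraph([(src, dst) for dst, src, s in reposts if s == story_id])
        root = next(n for n in g if g.in_degree(n) == 0)  # assumes a single cascade
        depth_of = nx.single_source_shortest_path_length(g, root)
        per_level = collections.Counter(depth_of.values())
        return {"size": g.number_of_nodes(),
                "depth": max(depth_of.values()),
                "breadth": max(per_level.values())}

    print(cascade_stats(reposts, "s1"))  # {'size': 5, 'depth': 3, 'breadth': 2}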

These are the specific questions to be addressed; each entry may address one or more of them. All entries must take both a strong social theory, political theory, or policy perspective and a strong methodological perspective.

Data Sets

Several publicly available datasets, including the Parler dataset linked above, contain information relevant to the challenge. Participants may extend these datasets by adding fact-checker information or newspaper information from sources such as GDELT (see the sketch below).
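
For instance, GDELT’s DOC 2.0 API can retrieve news coverage matching a story’s keywords. A minimal sketch (the query phrase is a hypothetical example, and the response fields reflect the public API at the time of writing):

    import requests

    params = {"query": '"miracle cure"',  # hypothetical story keywords
              "mode": "artlist", "format": "json", "maxrecords": 50}
    resp = requests.get("https://api.gdeltproject.org/api/v2/doc/doc",
                        params=params, timeout=30)
    for article in resp.json().get("articles", []):
        print(article["seendate"], article["url"])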

Challenge Committee

  • Kathleen M. Carley
  • Nitin Agarwal

Submit Questions Regarding Challenge

All questions and concerns can be sent to sbp-brims@andrew.cmu.edu.

Some useful references:

Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. "Fake News Detection on Social Media: A Data Mining Perspective." ACM SIGKDD Explorations Newsletter 19(1), 2017. arXiv:1708.01967.

Kate Starbird, Jim Maddock, Mania Orand, Peg Achterman, and Robert M. Mason. "Rumors, False Flags, and Digital Vigilantes: Misinformation on Twitter after the 2013 Boston Marathon Bombing." iConference 2014 Proceedings, 2014.

David Uberti. "How Misinformation Goes Viral: A Truthy Story." Columbia Journalism Review, September 3, 2014.

Aditi Gupta, Hemank Lamba, Ponnurangam Kumaraguru, and Anupam Joshi. "Faking Sandy: Characterizing and Identifying Fake Images on Twitter during Hurricane Sandy." WWW '13 Companion, pages 729–736, 2013.

Matthew Benigni, Kenneth Joseph, and Kathleen M. Carley. "Online Extremism and the Communities that Sustain It: Detecting the ISIS Supporting Community on Twitter." PLOS ONE 12(12): e0181405, 2017.

Samer Al-khateeb and Nitin Agarwal. "Examining Botnet Behaviors for Propaganda Dissemination: A Case Study of ISIL's Beheading Videos-based Propaganda." In Proceedings of the Behavior Analysis, Modeling, and Steering (BEAMS 2015) Workshop, co-located with the IEEE International Conference on Data Mining (ICDM 2015), Atlantic City, New Jersey, November 14-17, 2015.