2020 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation
Oct 19-21, 2020, Lehman Auditorium, George Washington University, Washington DC, USA

Challenge 2 - Disinformation

Overview

Disinformation is a growing problem on the web. Often this information is spread through multiple social media platforms. While disinformation itself is not a new problem, its spread, the rate of that spread, its potential for global impact, and so on are growing due to the use of social media. Social media providers themselves are concerned and are examining what can be done in this space.

In this year’s second SBP-BRiMS challenge problem, we ask participants to consider the issue of the spread of disinformation on the web.

The specific questions of interest are:

  • Can we automatically and accurately classify a message as containing disinformation? And the related question: what are the characteristics of disinformation that make it distinct from other information? Eligible entries might compare against Truthy.
  • What are the characteristics of individuals or groups that put them at risk of succumbing to disinformation? And the related question: how can the extent to which an individual or group has succumbed to disinformation be measured? The algorithm for measuring risk must be provided and the validation strategy explained.
  • How does disinformation spread within and across media? And the related question: does disinformation spread differently than other information? How spread was measured must be explained.

Each entry may address one or more of these questions. All entries must have both a strong social theory, political theory, or policy perspective and a strong methodology perspective.
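To make the first question concrete, a minimal starting point for message classification is a bag-of-words Naive Bayes baseline. The sketch below is purely illustrative, not a prescribed method for entries: the tiny labeled corpus, the label names, and the tokenizer are all hypothetical, and a real entry would train on a substantial annotated dataset and compare against stronger models.

```python
# Illustrative bag-of-words Naive Bayes baseline (not a prescribed method).
# The labeled examples below are invented for demonstration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns (priors, word_counts, vocab)."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(label_counts.values())
    priors = {lbl: c / total for lbl, c in label_counts.items()}
    return priors, word_counts, vocab

def classify(text, priors, word_counts, vocab):
    """Log-space Naive Bayes with add-one (Laplace) smoothing."""
    scores = {}
    for label, prior in priors.items():
        total_words = sum(word_counts[label].values())
        score = math.log(prior)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) /
                              (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy training data.
examples = [
    ("miracle cure doctors hate this secret", "disinfo"),
    ("shocking hoax exposed share before deleted", "disinfo"),
    ("city council approves new transit budget", "other"),
    ("university publishes annual enrollment report", "other"),
]
priors, word_counts, vocab = train(examples)
print(classify("share this secret miracle cure", priors, word_counts, vocab))
```

Entries addressing the characteristics of disinformation would go beyond such lexical features, e.g. by examining source credibility, propagation patterns, or network structure.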

Data Sets

The following data is publicly available and contains some information of relevance to the challenge. Participants may use this or other data.

Challenge Committee

  • Kathleen M. Carley
  • Nitin Agarwal

Submit Questions Regarding Challenge

All questions and concerns can be sent to sbp-brims@andrew.cmu.edu

Some useful references:

Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, “Fake News Detection on Social Media: A Data Mining Perspective,” ACM SIGKDD Explorations Newsletter 19(1), 2017. arXiv:1708.01967.

Kate Starbird, Jim Maddock, Mania Orand, Peg Achterman, and Robert M. Mason, “Rumors, False Flags, and Digital Vigilantes: Misinformation on Twitter after the 2013 Boston Marathon Bombing,” iConference 2014 Proceedings, 2014.

Uberti, David. “How Misinformation Goes Viral: A Truthy Story.” Columbia Journalism Review, September 3, 2014.

Aditi Gupta, Hemank Lamba, Ponnurangam Kumaraguru, and Anupam Joshi, “Faking Sandy: Characterizing and Identifying Fake Images on Twitter during Hurricane Sandy,” WWW ’13 Companion, pp. 729–736, 2013.

Matthew Benigni, Kenneth Joseph, and Kathleen M. Carley, “Online Extremism and the Communities that Sustain It: Detecting the ISIS Supporting Community on Twitter,” PLOS ONE 12(12): e0181405, 2017.

Samer Al-khateeb and Nitin Agarwal, “Examining Botnet Behaviors for Propaganda Dissemination: A Case Study of ISIL’s Beheading Videos-based Propaganda,” in Proceedings of the Behavior Analysis, Modeling, and Steering Workshop (BEAMS 2015), co-located with the IEEE International Conference on Data Mining (ICDM 2015), November 14–17, 2015, Atlantic City, New Jersey.