Session: Catching Out-of-Context Misinformation with Self-supervised Learning
Despite the recent attention to DeepFakes and other forms of image manipulation, one of the most prevalent ways to mislead audiences on social media is the use of unaltered images in a new but false context, commonly known as out-of-context image use. The danger of out-of-context images is that little technical expertise is required: one can simply take an image from a different event and create a highly convincing but potentially misleading message. At the same time, detecting misinformation based on out-of-context images is extremely challenging, given that the visual content by itself is not manipulated; only the image-text combination creates misleading or false information. To detect these out-of-context images, several online fact-checking initiatives have been launched by newsrooms. However, they all rely heavily on manual human effort to verify each post factually and to determine whether a claim should be labeled as "out-of-context". In this talk, I will discuss in detail the dangerous consequences caused by the spread of out-of-context images and talk about how we can build models that automatically detect these out-of-context image-text pairs.
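The detection task described above can be illustrated with a minimal, hypothetical sketch: embed the image and its accompanying text in a shared space, score their agreement, and flag low-agreement pairs as candidates for out-of-context use. Everything here is a placeholder, not the speaker's actual method — in practice the embeddings would come from a pretrained vision-language encoder, and the threshold would be tuned on labeled data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Agreement score between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_out_of_context(image_emb: np.ndarray, text_emb: np.ndarray,
                        threshold: float = 0.3) -> bool:
    """Flag a pair whose image and text embeddings agree poorly.

    The threshold is an illustrative placeholder; a real system would
    calibrate it on a validation set of known in/out-of-context pairs.
    """
    return cosine_similarity(image_emb, text_emb) < threshold

# Toy embeddings standing in for a vision-language encoder's output:
image_vec = np.array([0.9, 0.1, 0.2])
matching_caption = np.array([0.85, 0.15, 0.25])
unrelated_caption = np.array([-0.2, 0.9, -0.4])

print(flag_out_of_context(image_vec, matching_caption))   # similar pair -> False
print(flag_out_of_context(image_vec, unrelated_caption))  # mismatched pair -> True
```

A key difficulty the talk highlights remains visible even in this toy form: a mismatched caption can still be topically similar to the image (same place, different event), so raw similarity alone is rarely enough, which motivates learning the pairing signal itself, for example with self-supervised objectives.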
This recording is not available yet.
Bio: Shivangi Aneja
Shivangi Aneja is a Ph.D. student at the Visual Computing Lab, Technical University of Munich. Prior to that, she obtained her Master's degree in Informatics (magna cum laude) from the Technical University of Munich and her Bachelor's degree (gold medalist) in Computer Science from the National Institute of Technology, Hamirpur (India). Her research interests lie at the intersection of deep learning, computer vision, natural language processing, and media forensics. With her work, she wishes to solve real-world AI problems that can have a positive impact on society.