What is Beneath Misogyny: Misogynous Memes Classification and Explanation
Kushal Kanwar, Dushyant Singh Chauhan, Gopendra Vikram Singh, Asif Ekbal
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
AI and Social Good. Pages 9746-9753.
https://doi.org/10.24963/ijcai.2025/1083
Memes are popular in the modern world and are shared primarily for entertainment. However, harmful ideologies such as misogyny can be propagated through innocent-looking memes. Detecting and understanding why a meme is misogynous is a research challenge due to its multimodal nature (image and text) and its nuanced manifestations across different societal contexts. We introduce a novel multimodal approach, namely, MM-Misogyny, to detect, categorize, and explain misogynistic content in memes. MM-Misogyny processes the text and image modalities separately and unifies them into a multimodal context through a cross-attention mechanism. The resulting multimodal context is then processed for labeling, categorization, and explanation via a classifier and a Large Language Model (LLM). The proposed model is evaluated on a newly curated dataset, What’s Beneath Misogynous Stereotyping (WBMS), created by collecting misogynous memes from cyberspace and categorizing them into four categories, namely, Kitchen, Leadership, Working, and Shopping. The model not only detects and classifies misogyny, but also provides a granular understanding of how misogyny operates in different domains of life. The results demonstrate the superiority of our approach compared to existing methods. The code and dataset are available at https://github.com/Misogyny.
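As a rough illustration of the cross-attention fusion step described in the abstract, the following is a minimal sketch, assuming text tokens attend to image patches and the pooled context then feeds a classifier and an LLM. The module name, feature dimensions, and pooling choice are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical fusion of text and image features into a multimodal context."""
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Queries come from text, keys/values from the image (one possible design).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:  (batch, text_len, dim), e.g. from a text encoder
        # image_feats: (batch, num_patches, dim), e.g. from a vision encoder
        fused, _ = self.cross_attn(text_feats, image_feats, image_feats)
        context = self.norm(text_feats + fused)   # residual connection
        return context.mean(dim=1)                # pooled multimodal context

# Example usage with random tensors standing in for encoder outputs.
fusion = CrossModalFusion()
text = torch.randn(2, 32, 768)      # 2 memes, 32 text tokens each
image = torch.randn(2, 196, 768)    # 196 image patches each
multimodal_context = fusion(text, image)   # shape: (2, 768)
# The pooled context would then be passed to a misogyny classifier and to an LLM
# that generates an explanation, as outlined in the abstract.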
Keywords:
Multidisciplinary Topics and Applications: General
AI Ethics, Trust, Fairness: General
Machine Learning: General
