Logically Consistent Adversarial Attacks for Soft Theorem Provers

Alexander Gaskell, Yishu Miao, Francesca Toni, Lucia Specia

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 4129-4135. https://doi.org/10.24963/ijcai.2022/573

Recent efforts within the AI community have yielded impressive results towards “soft theorem proving” over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models’ reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models’ reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model’s performance.
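To make the logical consistency requirement concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): it treats a rule base as propositional facts and rules, recomputes the entailment label with a simple forward-chaining solver after a candidate perturbation, and accepts the attack only if the gold label is unchanged. The function names (forward_chain, is_consistent_attack) and the closed-world propositional encoding are illustrative assumptions; LAVA's generative process and symbolic solver operate over the paper's richer natural-language rule bases.

```python
# Hypothetical sketch: reject adversarial perturbations that would flip the
# gold label, by re-solving the perturbed logic program symbolically.
from typing import FrozenSet, List, Set, Tuple

Rule = Tuple[FrozenSet[str], str]  # (body facts, head fact)


def forward_chain(facts: Set[str], rules: List[Rule]) -> Set[str]:
    """Derive all facts entailed by the program via naive forward chaining."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived


def label(facts: Set[str], rules: List[Rule], query: str) -> bool:
    """True iff the query is entailed under closed-world semantics."""
    return query in forward_chain(facts, rules)


def is_consistent_attack(original: Tuple[Set[str], List[Rule]],
                         perturbed: Tuple[Set[str], List[Rule]],
                         query: str) -> bool:
    """Accept a perturbation only if the gold label is unchanged."""
    return label(*original, query) == label(*perturbed, query)


if __name__ == "__main__":
    facts = {"bird(tweety)"}
    rules = [(frozenset({"bird(tweety)"}), "flies(tweety)")]
    query = "flies(tweety)"

    # Candidate attack: add a distractor rule whose body never fires,
    # so the entailment label is preserved.
    perturbed_rules = rules + [(frozenset({"penguin(tweety)"}), "swims(tweety)")]
    print(is_consistent_attack((facts, rules), (set(facts), perturbed_rules), query))  # True
```

In this simplified view, a perturbation that instead removed the rule bird(tweety) -> flies(tweety) would flip the label and be rejected; filtering (or relabelling) such cases is what keeps the generated adversarial examples logically consistent.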
Keywords:
Natural Language Processing: Text Classification
Machine Learning: Adversarial Machine Learning
Natural Language Processing: Language Models
Natural Language Processing: Question Answering
Machine Learning: Neuro-Symbolic Methods