Fine-Grained and Efficient Self-Unlearning with Layered Iteration
Hongyi Lyu, Xuyun Zhang, Hongsheng Hu, Shuo Wang, Chaoxiang He, Lianyong Qi
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 7643-7651.
https://doi.org/10.24963/ijcai.2025/850
As machine learning models become widely deployed in data-driven applications, complying with the 'right to be forgotten' mandated by many privacy regulations is vital for safeguarding user privacy. To forget given data, existing re-labeling-based unlearning methods employ a single-step adjustment scheme that revises the decision boundaries in one re-labeling phase. However, such single-step approaches make only coarse-grained changes to the decision boundaries among the remaining classes and degrade model utility. To address these limitations, we propose 'Self-Unlearning with Layered Iteration (SULI),' a novel unlearning approach that introduces a layered iteration strategy to re-label the forgetting data iteratively and refine the decision boundaries progressively. We further develop a 'Selective Probability Adjustment (SPA)' technique, which uses a soft-label mechanism to promote smoother decision-boundary transitions. Comprehensive experiments on three benchmark datasets demonstrate that SULI outperforms state-of-the-art baselines in effectiveness, efficiency, and privacy in both class-wise and instance-wise unlearning scenarios. The source code is released at https://github.com/Hongyi-Lyu-MQ/SULI.
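The abstract describes two mechanisms: re-labeling the forgetting data over several rounds rather than once, and a soft-label adjustment for smoother boundary shifts. The PyTorch sketch below is a minimal illustration of how such a layered-iteration loop could look; it is an assumption-laden reconstruction, not the authors' implementation (see the linked repository for that). The functions `soft_relabel` and `unlearn`, the blending parameter `alpha`, the number of rounds, and the KL-divergence fine-tuning loss are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def soft_relabel(logits, forget_class, alpha=0.5):
    """Hypothetical soft-label adjustment: suppress the forget class,
    then blend the renormalized remaining probabilities with a uniform
    distribution over the remaining classes to obtain smoother targets."""
    probs = F.softmax(logits, dim=1)
    probs[:, forget_class] = 0.0                      # suppress forget class
    probs = probs / probs.sum(dim=1, keepdim=True)    # renormalize the rest
    uniform = torch.full_like(probs, 1.0 / probs.size(1))
    uniform[:, forget_class] = 0.0
    uniform = uniform / uniform.sum(dim=1, keepdim=True)
    return alpha * probs + (1 - alpha) * uniform      # blended soft targets

def unlearn(model, forget_loader, forget_class, rounds=3, lr=1e-3):
    """Layered-iteration sketch: each round re-labels the forgetting data
    with the *current* model's own predictions and fine-tunes toward those
    soft labels, so decision boundaries shift progressively rather than
    in a single coarse step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(rounds):
        for x, _ in forget_loader:
            with torch.no_grad():
                # Targets come from the partially adjusted model itself.
                targets = soft_relabel(model(x), forget_class)
            loss = F.kl_div(F.log_softmax(model(x), dim=1), targets,
                            reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

In this reading, the outer loop is what distinguishes a layered scheme from the single-step baselines the abstract criticizes: because targets are recomputed from the model after each round, each re-labeling pass operates on boundaries the previous pass has already partially revised.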
Keywords:
Multidisciplinary Topics and Applications: MTA: Security and privacy
AI Ethics, Trust, Fairness: ETF: Safety and robustness
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
