Fully Test-Time Adaptation for Feature Decrement in Tabular Data

Zi-Jian Cheng, Zi-Yi Jia, Kun-Yang Yu, Zhi Zhou, Lan-Zhe Guo

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4940-4948. https://doi.org/10.24963/ijcai.2025/550

Tabular data is widely adopted in various machine learning tasks. Current tabular data learning mainly focuses on closed environments; however, real-world applications often involve open environments, where distribution shifts and feature decrements occur and cause severe performance degradation. Previous studies have primarily addressed distribution shifts, while feature decrements, a challenge unique to tabular data learning, have received relatively little attention. In this paper, we present the first comprehensive study of the problem of Fully Test-Time Adaptation for Feature Decrement in Tabular Data. Through empirical analysis, we identify the suboptimality of existing missing-feature imputation methods and the limited applicability of missing-feature adaptation approaches. To address these challenges, we propose a novel method, LLM-IMPUTE, which leverages Large Language Models (LLMs) to impute missing features without relying on training data. Furthermore, we introduce Augmented-Training LLM (ATLLM), a method that enhances robustness to feature decrements by simulating feature-decrement scenarios during the training phase, addressing tasks whose missing features cannot be imputed by LLM-IMPUTE. Extensive experimental results demonstrate that our proposed methods significantly improve both performance and robustness in missing-feature imputation and adaptation scenarios.
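To make the two ideas in the abstract concrete, the sketch below illustrates (i) a prompt-based imputation step in the spirit of LLM-IMPUTE, which queries an LLM for missing feature values without touching training data, and (ii) random feature masking that simulates feature-decrement scenarios during training, in the spirit of ATLLM. The function names (`build_imputation_prompt`, `llm_impute`, `mask_features`), the prompt format, and the masking scheme are illustrative assumptions, not the paper's actual implementation.

```python
import random

def build_imputation_prompt(observed, missing_names, task_desc):
    """Assemble a natural-language prompt asking an LLM to estimate
    features absent at test time. (Illustrative; the paper's actual
    prompt format may differ.)"""
    pairs = ", ".join(f"{k} = {v}" for k, v in observed.items())
    return (
        f"Task: {task_desc}\n"
        f"Observed features: {pairs}\n"
        f"Estimate plausible values for the missing features "
        f"{', '.join(missing_names)} and reply as 'name = value' pairs."
    )

def llm_impute(observed, missing_names, task_desc, llm_complete):
    """LLM-IMPUTE-style step: impute missing features via an LLM,
    with no access to training data. `llm_complete` is a hypothetical
    text-completion callable supplied by the caller."""
    prompt = build_imputation_prompt(observed, missing_names, task_desc)
    return llm_complete(prompt)  # caller parses the reply into values

def mask_features(row, drop_prob=0.2, fill_value=None):
    """ATLLM-style augmentation: randomly drop features from a training
    row so the model is exposed to simulated feature decrements."""
    return {k: (fill_value if random.random() < drop_prob else v)
            for k, v in row.items()}

# Usage sketch: augment training rows with random feature drops;
# at test time, impute whichever features the schema has lost.
row = {"age": 42, "income": 58000, "city": "Austin"}
augmented = mask_features(row, drop_prob=0.3)
```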
Keywords:
Machine Learning: ML: Open-World/Open-Set/OOD Learning
Machine Learning: ML: Feature extraction, selection and dimensionality reduction
Natural Language Processing: NLP: Language models