Adaboost with Auto-Evaluation for Conversational Models

Juncen Li, Ping Luo, Ganbin Zhou, Fen Lin, Cheng Niu

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 4173-4179. https://doi.org/10.24963/ijcai.2018/580

We propose a boosting method for conversational models to encourage them to generate more human-like dialogs. In our method, we treat existing conversational models as weak generators and apply Adaboost to update them. However, conventional Adaboost cannot be applied directly to conversational models: it cannot adaptively adjust the instance weights for subsequent learning, because a simple comparison between the true output y (to an input x) and the predicted output y' does not directly evaluate the learning performance on x. To address this issue, we develop Adaboost with Auto-Evaluation (AwE). In AwE, an auto-evaluator scores the predicted results, which makes boosting applicable to conversational models. Furthermore, we present a theoretical analysis showing that the training error drops exponentially fast, provided a certain assumption about the proposed auto-evaluator holds. Finally, we empirically show that AwE visibly boosts the performance of existing single conversational models and outperforms other ensemble methods for conversational models.
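To make the idea concrete, the following is a minimal sketch of one boosting round in the spirit the abstract describes. It is not the authors' implementation: the function names (`awe_round`, `generator`, `evaluator`) and the exact reweighting rule are assumptions. The key difference from conventional Adaboost is that correctness is replaced by a continuous score in [0, 1] from an auto-evaluator, since exact-match comparison of y and y' is meaningless for dialog.

```python
import math

def awe_round(instances, weights, generator, evaluator, eps=1e-12):
    """One hypothetical AwE-style boosting round (illustrative only).

    instances: list of (x, y) dialog pairs
    weights:   current instance weights (sum to 1)
    generator: weak conversational model, x -> predicted reply y'
    evaluator: auto-evaluator, (x, y, y') -> quality score in [0, 1]
    """
    # Score each predicted reply with the auto-evaluator instead of
    # checking y == y', which would fail for open-ended dialog.
    scores = [evaluator(x, y, generator(x)) for x, y in instances]

    # Weighted error: total weight falling on poorly evaluated replies.
    err = sum(w * (1.0 - s) for w, s in zip(weights, scores))
    err = min(max(err, eps), 1.0 - eps)  # keep log/ratio well-defined

    # Adaboost-style vote weight for this weak generator.
    alpha = 0.5 * math.log((1.0 - err) / err)

    # Up-weight instances the generator handled poorly (low score),
    # down-weight those it handled well, then renormalize.
    new_w = [w * math.exp(alpha * (1.0 - 2.0 * s))
             for w, s in zip(weights, scores)]
    z = sum(new_w)
    return [w / z for w in new_w], alpha
```

With a toy 0/1 evaluator, an instance the generator answers badly ends up with a larger weight after the round, so the next weak generator focuses on it.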
Keywords:
Natural Language Processing: Dialogue
Natural Language Processing: Natural Language Generation