ARMIN: Towards a More Efficient and Light-weight Recurrent Memory Network

Zhangheng Li, Jia-Xing Zhong, Jingjia Huang, Tao Zhang, Thomas Li, Ge Li

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2944-2951. https://doi.org/10.24963/ijcai.2019/408

In recent years, memory-augmented neural networks (MANNs) have shown promising power to enhance the memory ability of neural networks for sequential processing tasks. However, previous MANNs suffer from complex memory addressing mechanisms, making them relatively hard to train and incurring computational overhead. Moreover, many of them reuse a classical RNN structure such as the LSTM for memory processing, leading to inefficient exploitation of memory information. In this paper, we introduce a novel MANN, the Auto-addressing and Recurrent Memory Integrating Network (ARMIN), to address these issues. The ARMIN only utilizes the hidden state h_t for automatic memory addressing, and uses a novel RNN cell for refined integration of memory information. Empirical results on a variety of experiments demonstrate that the ARMIN is more light-weight and efficient than existing memory networks. Moreover, we demonstrate that the ARMIN can achieve much lower computational overhead than a vanilla LSTM while maintaining similar performance. Code is available at github.com/zoharli/armin.
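The abstract highlights two mechanisms: addressing memory using the hidden state h_t alone, and integrating the read memory through a dedicated RNN cell. As a rough illustration of the first idea only, here is a minimal PyTorch-style sketch of hidden-state-driven addressing; the class and parameter names (AutoAddressingMemory, key_proj, num_slots) and the softmax read are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoAddressingMemory(nn.Module):
    """Hedged sketch of hidden-state-driven memory addressing.

    All names and details (slot count, key projection, softmax read)
    are illustrative assumptions, not ARMIN's actual formulation.
    """
    def __init__(self, hidden_size, num_slots):
        super().__init__()
        self.key_proj = nn.Linear(hidden_size, hidden_size)  # h_t -> read key
        self.hidden_size = hidden_size
        self.num_slots = num_slots
        self.memory = None

    def reset(self, batch_size, device):
        # One memory matrix per sequence: (batch, slots, hidden).
        self.memory = torch.zeros(batch_size, self.num_slots,
                                  self.hidden_size, device=device)

    def read(self, h_t):
        # Address memory with the hidden state alone: no separate
        # controller outputs or usage statistics are needed.
        key = self.key_proj(h_t)                                 # (batch, hidden)
        scores = torch.bmm(self.memory, key.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)                       # (batch, slots)
        r_t = torch.bmm(weights.unsqueeze(1), self.memory).squeeze(1)
        return r_t, weights
```

In this sketch the read vector r_t would then be fed, together with x_t and h_{t-1}, into the recurrent cell, mirroring the integration step the abstract describes; the actual ARMIN cell differs in its details.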
Keywords:
Machine Learning: Time-series; Data Streams
Machine Learning: Deep Learning
Knowledge Representation and Reasoning: Logics for Knowledge Representation
Humans and AI: Cognitive Modeling
Multidisciplinary Topics and Applications: Autonomic Computing