
A Sentence Compression Method Based on "Pre-reading" and a Simple Attention Mechanism (postprint)

Abstract: This paper proposes a method for English sentence compression based on "Pre-reading" and a Simple Attention Mechanism. Building on Gated Recurrent Units (GRU) and the Encoder-Decoder framework, the model encodes the semantics of the original sentence twice in the encoding stage: the result of the first pass serves as global information that strengthens the second pass, yielding a more comprehensive and accurate semantic vector. Taking full account of the particularities of deletion-based sentence compression, the paper adopts a simple 3t-Attention mechanism in the decoding stage to improve the efficiency and accuracy of prediction: only the semantic vectors most relevant to the current decoding time step are fed to the decoder. Experiments on the Google News sentence compression dataset show that the model significantly outperforms recent state-of-the-art methods. "Pre-reading" and the Simple Attention Mechanism can therefore effectively improve the accuracy of English sentence compression.
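The abstract describes two components: a two-pass ("pre-reading") GRU encoder whose first-pass summary conditions the second pass, and a restricted attention mechanism that feeds the decoder only the encoder states most relevant to the current step. The sketch below is a minimal PyTorch rendering of that idea, not the authors' implementation: all class names and dimensions are illustrative, and reading "3t-Attention" as a three-state window centred on the current decode step t is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreReadEncoder(nn.Module):
    """Two-pass GRU encoder: the final state of the first pass
    ("pre-reading") is broadcast as global information into the
    second pass (illustrative sketch, not the paper's exact model)."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.first_pass = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # The second pass sees each token embedding plus the pre-read summary.
        self.second_pass = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)

    def forward(self, tokens):                      # tokens: (B, T)
        emb = self.embed(tokens)                    # (B, T, E)
        _, summary = self.first_pass(emb)           # summary: (1, B, H)
        glob = summary.transpose(0, 1).expand(-1, emb.size(1), -1)  # (B, T, H)
        states, last = self.second_pass(torch.cat([emb, glob], dim=-1))
        return states, last                         # (B, T, H), (1, B, H)

class ThreeTAttentionDecoder(nn.Module):
    """At decode step t, attend only to encoder states t-1, t, t+1
    (assumed meaning of "3t-Attention"), exploiting the word-by-word
    keep/delete alignment of deletion-based compression."""
    def __init__(self, hid_dim=256, n_labels=2):
        super().__init__()
        self.cell = nn.GRUCell(hid_dim, hid_dim)
        self.score = nn.Linear(hid_dim * 2, 1)
        self.out = nn.Linear(hid_dim * 2, n_labels)

    def forward(self, enc_states, enc_last):
        B, T, H = enc_states.shape
        h = enc_last.squeeze(0)                     # (B, H)
        logits = []
        for t in range(T):
            lo, hi = max(0, t - 1), min(T, t + 2)
            window = enc_states[:, lo:hi, :]        # (B, <=3, H)
            e = self.score(torch.cat(
                [window, h.unsqueeze(1).expand_as(window)], dim=-1))
            a = F.softmax(e, dim=1)                 # attention over the window
            ctx = (a * window).sum(dim=1)           # (B, H) context vector
            h = self.cell(ctx, h)
            logits.append(self.out(torch.cat([h, ctx], dim=-1)))
        return torch.stack(logits, dim=1)           # (B, T, n_labels)

# Toy usage: per-token keep/delete logits for a random batch.
enc = PreReadEncoder(vocab_size=10000)
dec = ThreeTAttentionDecoder()
tokens = torch.randint(0, 10000, (2, 12))
states, last = enc(tokens)
keep_delete_logits = dec(states, last)              # shape (2, 12, 2)
```

The two-label output reflects the common framing of deletion-based compression as binary sequence labeling (keep or delete each word); restricting attention to a small window around step t is what keeps the per-step prediction cheap relative to full soft attention over all encoder states.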

Version History

[V1] 2018-05-20 08:45:46 ChinaXiv:201805.00290V1
Metrics
  •  Hits: 2871
  •  Downloads: 1605