
1. chinaXiv:201910.00075 [pdf]

Spatial Imagery of Moral Concepts: The Joint Effects of Linguistic and Embodied Factors

王汉林; 蒋泽亮; 冯晓慧; 鲁忠义
Subjects: Psychology >> Cognitive Psychology

Event-related potential (ERP) technology was used to investigate the spatial imagery effect of abstract moral concepts, together with the mechanisms and time course by which linguistic and embodied factors shape this effect. Experiment 1 examined how the spatial position of word pairs affected semantic judgments (degree of antonymy) of moral word pairs: presentation conditions inconsistent with spatial imagery (i.e., moral-down, immoral-up) elicited a larger N400 and longer reaction times in the semantic judgment task. Experiment 2 examined how the degree of antonymy of word pairs affected spatial-imagery judgments of moral word pairs: semantically unrelated word pairs elicited larger N200 and N700 components and longer reaction times in the spatial-imagery judgment task. The results indicate that the processing of abstract moral concepts exhibits a spatial imagery effect that is shaped jointly by linguistic and embodied factors: the former is activated first and exerts a lasting influence throughout concept processing, whereas the latter operates only in the middle stage of processing.

Submitted: 2019-10-29

2. chinaXiv:201910.00076 [pdf]

Masked Sentence Model based on BERT for Move Recognition in Medical Scientific Abstracts

Yu, Gaihong; Zhang, Zhixiong; Liu, Huan; Ding, Liangping
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Purpose: Move recognition in scientific abstracts is an NLP task of classifying the sentences of an abstract into different types of language units. To improve performance on this task, we propose a novel move-recognition model that outperforms the BERT-Base method. Design: Prevalent BERT-based models for sentence classification often classify sentences without considering their context. Inspired by BERT's Masked Language Model (MLM), we propose a novel model called the Masked Sentence Model, which integrates both the content and the contextual information of sentences for move recognition. Experiments are conducted on the benchmark dataset PubMed 20k RCT in three steps, and our model is then compared with HSLN-RNN, BERT-Base, and SciBERT on the same dataset. Findings: Our model outperforms BERT-Base and SciBERT in F1 score by 4.96% and 4.34% respectively, demonstrating the feasibility and effectiveness of the novel model; its results come closest to the current state-of-the-art results of HSLN-RNN. Research limitations: The sequential features of move labels are not considered, which might be one reason why HSLN-RNN performs better. Our model is also restricted to biomedical English literature, because it is fine-tuned on a dataset from PubMed, a typical biomedical database. Practical implications: The proposed model is simpler and more effective at identifying move structure in scientific abstracts, and is useful for text-classification experiments that need to capture the contextual features of sentences. Originality: The study proposes a Masked Sentence Model based on BERT that accounts for the contextual features of the sentences in abstracts in a new way, and the performance of this classification model is significantly improved by rebuilding the input layer without changing the structure of the neural network.
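The input-rebuilding idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact scheme: the function name, the [CLS]/[SEP]/[MASK] pairing layout, and the sample sentences are all assumptions for demonstration. The sketch shows one plausible way to give a BERT-style classifier both the target sentence (content) and the abstract with that sentence masked out (context).

```python
def build_masked_input(sentences, target_idx, mask_token="[MASK]"):
    """Pair the target sentence with its abstract context, replacing the
    target's own slot in the context with a mask token so the classifier
    sees the surrounding sentences without duplicating the target."""
    target = sentences[target_idx]
    context = [mask_token if i == target_idx else s
               for i, s in enumerate(sentences)]
    # BERT-style sentence pair: [CLS] target [SEP] masked context [SEP]
    return "[CLS] " + target + " [SEP] " + " ".join(context) + " [SEP]"

# Hypothetical three-sentence abstract (background / methods / results)
abstract = [
    "Sepsis is a leading cause of mortality.",
    "We enrolled 120 patients in a cohort study.",
    "Mortality fell by 12% in the treated group.",
]
print(build_masked_input(abstract, 1))
```

One such string would be built per sentence of the abstract, then tokenized and fed to the fine-tuned model, so no change to the network architecture itself is needed.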

Submitted: 2019-10-29
