
1. chinaXiv:202010.00060 [pdf]

An Advanced ICD-9 Terminology Standardization Method Based on BERT and Text Similarity

刘宜佳; 纪斌; 余杰; 谭郁松; 马俊; 吴庆波
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

The ICD-9 terminology standardization task aims to standardize the colloquial terms that physicians record in medical records into the standard terms defined in the ninth revision of the International Classification of Diseases (ICD-9). In this paper, we first propose a BERT and Text Similarity Based Method (BTSBM), which combines a BERT classification model with a text-similarity algorithm: 1) an N-gram algorithm generates a candidate standard term set (CSTS) for each colloquial term, which serves as the training and test data for the next step; 2) a BERT classification model identifies the correct standard term. In BTSBM, if a larger CSTS is used as the test set, the training set must also stay large; however, each CSTS contains only one positive sample, so enlarging it severely skews the ratio of positive to negative samples and degrades system performance. If the test set is instead kept small, the CSTS accuracy (CSTSA) drops sharply, imposing a very low upper bound on system performance. To address these problems, we then propose an optimized terminology standardization method, the Advanced BERT and Text Similarity Based Method (ABTSBM), which 1) uses a large-scale initial CSTS to maintain a high CSTSA and thus a high performance upper bound; 2) denoises the CSTS according to body structure, easing the positive-negative imbalance without lowering CSTSA; and 3) introduces the focal loss function to further balance positive and negative samples. Experiments show that ABTSBM reaches a precision of 83.5%, 0.6 percentage points higher than BTSBM, while its computational cost is 26.7% lower.
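As an illustration of the candidate-generation step described above, the following is a minimal sketch (not the authors' released code) of building a CSTS by character n-gram similarity against the ICD-9 vocabulary; the n-gram size, the similarity measure (Dice over character bigrams), the candidate count, and the example terms are all assumptions.

```python
# Sketch of CSTS generation for a colloquial term via character n-gram overlap.
def char_ngrams(text, n=2):
    """Return the set of character n-grams of a string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def ngram_similarity(a, b, n=2):
    """Dice coefficient over character n-grams."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def build_csts(colloquial_term, standard_terms, k=10):
    """Rank ICD-9 standard terms by n-gram similarity and keep the top k as the CSTS."""
    ranked = sorted(standard_terms,
                    key=lambda t: ngram_similarity(colloquial_term, t),
                    reverse=True)
    return ranked[:k]

# Illustrative usage with made-up terms (not from the ICD-9 vocabulary):
standard_vocab = ["急性阑尾炎", "慢性阑尾炎", "急性胃炎"]
print(build_csts("急性化脓性阑尾炎", standard_vocab, k=2))
```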

Submitted: 2020-10-27 | Hits: 1321 | Downloads: 180 | Comments: 0

2. chinaXiv:202010.00061 [pdf]

A Medical Concept Extraction Method Based on a Span Classification Model

汤勇韬; 余杰; 李莎莎; 纪斌; 谭郁松; 吴庆波
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Recently, how to structure electronic medical records (EMRs) has attracted considerable attention from researchers. Extracting clinical concepts from EMRs is a key part of EMR structuring, and the performance of clinical concept extraction directly affects the downstream tasks that build on the structured records. However, the mainstream approach, sequence labeling, has several drawbacks: sequence-labeling-based clinical concept extraction does not match the way humans cognitively process language, and its extraction results are difficult to couple with downstream tasks, which leads to error propagation and degrades downstream performance. To address these problems, we propose a span-classification-based method that improves clinical concept extraction by considering the overall semantics of a character sequence rather than the semantics of each individual character. We call this the span classification model. Experiments show that the span classification model achieves the best micro-averaged F1 score (81.22%) on the corpus of the 2012 i2b2 NLP challenge and an F1 score (89.25%) comparable to the state of the art on the 2010 i2b2 NLP challenge. Moreover, our method consistently outperforms sequence labeling models such as BiLSTM-CRF and a softmax classifier.
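The following is a hedged sketch of the span classification idea: enumerate candidate spans up to a maximum width, build a representation for each span from the encoder states, and classify whole spans instead of tagging individual characters. The span representation (boundary concatenation), the use of random tensors in place of BERT outputs, and all sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    def __init__(self, hidden_size=768, num_types=4, max_width=10):
        super().__init__()
        self.max_width = max_width
        # num_types includes a "not a concept" label for negative spans.
        self.scorer = nn.Linear(2 * hidden_size, num_types)

    def forward(self, token_states):             # (seq_len, hidden_size)
        seq_len = token_states.size(0)
        spans, reps = [], []
        for start in range(seq_len):
            for end in range(start, min(seq_len, start + self.max_width)):
                # Represent a span by its boundary states (one common choice).
                reps.append(torch.cat([token_states[start], token_states[end]]))
                spans.append((start, end))
        logits = self.scorer(torch.stack(reps))  # (num_spans, num_types)
        return spans, logits

# Illustrative usage with random states standing in for encoder outputs:
states = torch.randn(16, 768)
spans, logits = SpanClassifier()(states)
print(len(spans), logits.shape)
```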

Submitted: 2020-10-27 | Hits: 1177 | Downloads: 155 | Comments: 0

3. chinaXiv:202010.00067 [pdf]

Span-Based Joint Entity and Relation Extraction with Attention-Based Span-Specific and Contextual Semantic Representations

Bin Ji
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Span-based joint extraction models have shown their efficiency for entity recognition and relation extraction. These models treat text spans as candidate entities and span tuples as candidate relation tuples, and span semantic representations are shared by both entity recognition and relation extraction; however, existing models fail to capture the semantics of these candidate entities and relations well. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specifically, attention is used to compute the semantic representations, including span-specific and contextual representations. We further investigate how four attention variants affect the generation of contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004, and ADE.
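Below is a hedged sketch of attention-pooled span representations in the spirit of the description above: one attention scorer pools the tokens inside a span into a span-specific representation, another pools the whole sentence into a contextual representation, and the two are concatenated. The layer names and sizes are assumptions, and the four attention variants studied in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class AttentiveSpanRep(nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        self.span_attn = nn.Linear(hidden_size, 1)   # scores tokens inside the span
        self.ctx_attn = nn.Linear(hidden_size, 1)    # scores tokens of the sentence

    def _pool(self, states, scorer):
        weights = torch.softmax(scorer(states).squeeze(-1), dim=0)
        return weights @ states                       # weighted sum -> (hidden_size,)

    def forward(self, token_states, start, end):
        span_rep = self._pool(token_states[start:end + 1], self.span_attn)
        ctx_rep = self._pool(token_states, self.ctx_attn)
        return torch.cat([span_rep, ctx_rep])         # fused span representation

# Illustrative usage with random states standing in for encoder outputs:
states = torch.randn(20, 768)
print(AttentiveSpanRep()(states, 3, 6).shape)         # torch.Size([1536])
```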

Submitted: 2020-10-26 | Hits: 1184 | Downloads: 173 | Comments: 0

4. chinaXiv:201910.00076 [pdf]

Masked Sentence Model based on BERT for Move Recognition in Medical Scientific Abstracts

Yu, Gaihong; Zhang, Zhixiong; Liu, Huan; Ding, Liangping
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Purpose: Move recognition in scientific abstracts is an NLP task of classifying the sentences of an abstract into different types of language units. To improve the performance of move recognition in scientific abstracts, we propose a novel move recognition model that outperforms the BERT-Base method. Design: Prevalent BERT-based models for sentence classification often classify sentences without considering their context. Inspired by BERT's Masked Language Model (MLM), we propose a novel model, the Masked Sentence Model, that integrates both the content and the contextual information of the sentences for move recognition. Experiments are conducted on the benchmark dataset PubMed 20K RCT in three steps, and we compare our model with HSLN-RNN, BERT-Base, and SciBERT on the same dataset. Findings: The F1 score of our model exceeds BERT-Base and SciBERT by 4.96% and 4.34% respectively, demonstrating the feasibility and effectiveness of the model; our result is currently the closest to the state-of-the-art result of HSLN-RNN. Research limitations: The sequential features of move labels are not considered, which might be one reason why HSLN-RNN performs better, and our model is restricted to biomedical English literature because it is fine-tuned on a dataset from PubMed, a typical biomedical database. Practical implications: The proposed model is a better and simpler way to identify move structure in scientific abstracts and is worth applying in text classification experiments that need to capture the contextual features of sentences. Originality: The study proposes a Masked Sentence Model based on BERT that takes the contextual features of the sentences in an abstract into account in a new way, and the performance of the classification model is significantly improved by rebuilding the input layer without changing the structure of the neural network.
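One plausible reading of "rebuilding the input layer" is sketched below: each sentence is paired with its abstract in which that sentence is replaced by a mask token, and the pair is fed to a BERT-style sentence-pair classifier. This input scheme is an assumption based on the description, not the authors' released code.

```python
def build_masked_sentence_input(sentences, target_index, mask_token="[MASK]"):
    """Pair the target sentence with the abstract in which it is masked out."""
    target = sentences[target_index]
    context = " ".join(
        mask_token if i == target_index else s for i, s in enumerate(sentences)
    )
    # The pair (target, context) would then be fed to a BERT sentence-pair classifier.
    return target, context

# Illustrative usage with a made-up three-sentence abstract:
abstract = ["We study move recognition.",
            "A masked sentence model is proposed.",
            "Experiments show gains over BERT-Base."]
print(build_masked_sentence_input(abstract, 1))
```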

Submitted: 2019-10-29 | Hits: 45860 | Downloads: 1293 | Comments: 0

5. chinaXiv:201910.00073 [pdf]

Smart Traditional Chinese Medicine: An Intelligent Prescription Generation Model for Lung Cancer

阮春阳
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Building on earlier work in traditional Chinese medicine (TCM) knowledge mining, this paper surveys and analyzes clinical TCM prescription data for lung cancer. Guided by the characteristics of these data, we construct a deep learning model to mine patterns such as the hidden relationships between symptoms and herbs in prescriptions, validating the model's accuracy with physicians throughout the process. The resulting system generates prescriptions automatically with high clinical validity, assisting physicians in diagnosis, improving clinical efficiency, and promoting innovation in clinical practice.
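The abstract does not specify a concrete architecture; purely as an illustration of the symptom-herb relationship it describes, the sketch below treats prescription generation as multi-label prediction of herbs from a multi-hot encoding of symptoms. All layer sizes, names, and the thresholding rule are assumptions.

```python
import torch
import torch.nn as nn

class PrescriptionGenerator(nn.Module):
    def __init__(self, num_symptoms=500, num_herbs=400, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_symptoms, hidden), nn.ReLU(),
            nn.Linear(hidden, num_herbs),
        )

    def forward(self, symptom_vector):
        # Sigmoid scores: an herb is included if its score exceeds a threshold.
        return torch.sigmoid(self.net(symptom_vector))

# Illustrative usage: three observed symptoms encoded as a multi-hot vector.
symptoms = torch.zeros(500)
symptoms[[3, 17, 42]] = 1.0
herb_scores = PrescriptionGenerator()(symptoms)
print((herb_scores > 0.5).sum())
```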

Submitted: 2019-10-15 | Hits: 7495 | Downloads: 1002 | Comments: 0

6. chinaXiv:201905.00012 [pdf]

Transfer Learning for Scientific Data Chain Extraction in Small Chemical Corpus with BERT-CRF Model

Na Pang; Li Qian; Weimin Lyu; Jin-Dong Yang
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

Computational chemistry has developed rapidly in recent years owing to the rapid growth and breakthroughs in AI. Thanks to progress in natural language processing, researchers can extract more fine-grained knowledge from publications to stimulate development in computational chemistry. Because existing work and corpora for chemical entity extraction have been restricted to the biomedical and life science domains rather than chemistry itself, we build a new corpus in the chemical bond domain annotated with 7 types of entities: compound, solvent, method, bond, reaction, pKa and pKa value. This paper presents a novel BERT-CRF model that builds scientific chemical data chains by extracting these 7 types of chemical entities and their relations from publications, and we propose a joint model to extract the entities and relations simultaneously. Experimental results on our Chemical Special Corpus demonstrate that we achieve state-of-the-art and competitive NER performance.
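A hedged sketch of a BERT-CRF tagger of the kind described is given below; it assumes the Hugging Face transformers and pytorch-crf packages, and the checkpoint name and tag set size are illustrative rather than the paper's configuration.

```python
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF

class BertCrfTagger(nn.Module):
    def __init__(self, num_tags, checkpoint="bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(checkpoint)
        self.emission = nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emission(states)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tags under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding returns the best tag sequence per sentence.
        return self.crf.decode(emissions, mask=mask)
```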

Submitted: 2019-05-12 | Hits: 20154 | Downloads: 1022 | Comments: 0

7. chinaXiv:201902.00062 [pdf]

Multimedia Short Text Classification via Deep RNN-CNN Cascade

陶爱山
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

With the rapid development of mobile technologies, social networking applications such as Twitter, Weibo and WeChat have become ubiquitous in our everyday life. These social networks generate a deluge of data that consists not only of plain text but also of images, videos, and audio. As a consequence, traditional approaches that classify short texts by counting only keywords have become inadequate. In this paper, we propose a multimedia short text classification approach based on a deep RNN (recurrent neural network) and CNN (convolutional neural network) cascade. We first employ an LSTM (long short-term memory) network to convert the information in the images into text. A convolutional neural network then classifies the multimedia texts by taking into account both the text generated from the image and the text contained in the initial message. Experiments on the MSCOCO dataset show that the proposed method exhibits significant performance improvement over traditional methods.
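As an illustration of the classification half of the cascade, the sketch below is a Kim-style text CNN over token ids covering both the original message and the caption produced by the image-captioning LSTM; the vocabulary size, filter sizes, and class count are assumptions, and the captioning stage itself is not reproduced.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_classes=10,
                 filter_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in filter_sizes])
        self.classify = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)       # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classify(torch.cat(pooled, dim=1))  # (batch, num_classes)

# token_ids would cover the message text concatenated with the generated caption.
logits = TextCNN()(torch.randint(0, 30000, (2, 40)))
print(logits.shape)
```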

Submitted: 2019-02-22 | Hits: 7008 | Downloads: 598 | Comments: 0

8. chinaXiv:201809.00191 [pdf]

A Text Classification Method Based on a Cost-Sensitive Ensemble Extreme Learning Machine

李明; 肖培伦; 张矩; 顾心盟
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

The weighted extreme learning machine assigns different weights to samples of different classes and thereby improves classification accuracy to some extent, but it considers only the differences between classes and ignores sample noise and the differences among samples within the same class. This paper proposes an ensemble extreme learning machine method based on text category information entropy. Using Adaboost.M1 as the algorithmic framework, the intra-class and inter-class distribution entropies of a text are combined into its category information entropy, which is used to construct a cost-sensitive matrix, and the resulting cost-sensitive extreme learning machine is integrated into the Adaboost.M1 framework. Experimental results show that, compared with other types of extreme learning machines, the method achieves better accuracy and generalization.
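The sketch below illustrates the cost-sensitive (weighted) extreme learning machine that serves as the base learner: random hidden-layer features, a diagonal matrix of per-sample costs, and a closed-form ridge solution for the output weights. The cost values here are placeholders; the paper derives its costs from text category information entropy, whose exact formula is not reproduced.

```python
import numpy as np

def train_weighted_elm(X, T, sample_costs, hidden=64, C=1.0, seed=0):
    """Solve beta = (H^T W H + I/C)^-1 H^T W T for a single-hidden-layer ELM."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W_in + b)                        # random hidden-layer output
    W = np.diag(sample_costs)                        # cost-sensitive sample weights
    beta = np.linalg.solve(H.T @ W @ H + np.eye(hidden) / C, H.T @ W @ T)
    return W_in, b, beta

def predict_elm(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

# Illustrative usage with random data and a placeholder cost scheme.
X = np.random.rand(100, 20)
T = np.eye(3)[np.random.randint(0, 3, 100)]          # one-hot class targets
costs = np.where(T[:, 0] == 1, 2.0, 1.0)              # e.g. up-weight one class
W_in, b, beta = train_weighted_elm(X, T, costs)
print(predict_elm(X, W_in, b, beta).argmax(axis=1)[:10])
```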

Submitted: 2018-09-27 | Hits: 1953 | Downloads: 1060 | Comments: 0

9. chinaXiv:201710.00001 [pdf]

Network of Recurrent Neural Networks

Wang, Chao-Ming
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

We describe a class of systems-theory-based neural networks called the "Network Of Recurrent neural networks" (NOR), which introduces a new structural level to RNN-related models. In NOR, RNNs are viewed as high-level neurons and are used to build high-level layers. More specifically, we propose several methodologies for designing different NOR topologies according to the theory of system evolution. We then carry out experiments on three different tasks to evaluate our implementations. Experimental results show that our models outperform the simple RNN remarkably under the same number of parameters, and sometimes achieve even better results than GRU and LSTM.
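A hedged sketch of the NOR idea follows: small recurrent networks act as high-level neurons, and a layer is a set of such neurons whose outputs are combined and passed to the next level. The specific topology below (parallel GRUs with concatenated outputs feeding a second level) is only one illustrative choice among the variants the paper explores, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class NORLayer(nn.Module):
    """A layer whose 'neurons' are small recurrent networks run in parallel."""
    def __init__(self, input_size, neuron_hidden=32, num_neurons=4):
        super().__init__()
        self.neurons = nn.ModuleList(
            [nn.GRU(input_size, neuron_hidden, batch_first=True)
             for _ in range(num_neurons)])

    def forward(self, x):                             # (batch, seq_len, input_size)
        outputs = [neuron(x)[0] for neuron in self.neurons]
        return torch.cat(outputs, dim=-1)             # (batch, seq_len, hidden * num)

class NORClassifier(nn.Module):
    def __init__(self, input_size=50, num_classes=5):
        super().__init__()
        self.level1 = NORLayer(input_size)
        self.level2 = NORLayer(4 * 32)                # consumes level-1 output
        self.head = nn.Linear(4 * 32, num_classes)

    def forward(self, x):
        h = self.level2(self.level1(x))
        return self.head(h[:, -1])                    # classify from the last step

print(NORClassifier()(torch.randn(2, 10, 50)).shape)
```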

Submitted: 2017-10-02 | Hits: 5071 | Downloads: 1313 | Comments: 0

10. chinaXiv:201703.00230 [pdf]

Tibetan Word Segmentation and Its Application in Tibetan-Chinese Machine Translation

孙萌; 华却才让; 姜文斌; 吕雅娟; 刘群
Subjects: Computer Science >> Natural Language Understanding and Machine Translation

This paper proposes a discriminative-model-based approach to Tibetan word segmentation and studies its application in Tibetan-Chinese machine translation. Exploiting the word-formation characteristics of Tibetan, the approach combines three modules: segmentation at the minimal word-formation granularity, perceptron decoding, and reranking of segmentation results, which together significantly improve segmentation quality. On this basis, we further propose a word-lattice-based Tibetan-Chinese machine translation method that mitigates the propagation of segmentation errors into translation and yields a clear improvement in translation quality.
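The sketch below illustrates perceptron decoding for word segmentation, the second module mentioned above: a segmentation is scored by summed word-feature weights, the best split is found by dynamic programming, and the weights are updated with the standard structured-perceptron rule. The feature set (word identity only), the maximum word length, and the toy example are simplifying assumptions.

```python
from collections import defaultdict

MAX_WORD_LEN = 4

def decode(sentence, weights):
    """Dynamic program over split points, maximizing the summed word scores."""
    n = len(sentence)
    best = [0.0] + [float("-inf")] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_WORD_LEN), i):
            score = best[j] + weights[sentence[j:i]]
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(sentence[back[i]:i])
        i = back[i]
    return list(reversed(words))

def perceptron_update(sentence, gold_words, weights):
    """Reward gold words and penalize predicted words when the decode is wrong."""
    predicted = decode(sentence, weights)
    if predicted != gold_words:
        for w in gold_words:
            weights[w] += 1.0
        for w in predicted:
            weights[w] -= 1.0

weights = defaultdict(float)
perceptron_update("abcd", ["ab", "cd"], weights)
print(decode("abcd", weights))       # -> ['ab', 'cd'] after one update
```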

Submitted: 2017-03-10 | Hits: 2608 | Downloads: 1898 | Comments: 0
