Author: 郝志峰
  • 面向图文匹配任务的多层次图像特征融合算法 (A Multi-Level Image Feature Fusion Algorithm for Image-Text Matching)

    Subjects: Computer Science >> Integration Theory of Computer Science · Submitted: 2019-01-28 · Cooperative journal: 《计算机应用研究》

    Abstract: Existing mainstream methods use pre-trained convolutional neural networks to extract image features and usually suffer from the following limitations: a) only a single layer of pre-trained features is used to represent the image; b) the pre-trained task is inconsistent with the actual research task. As a result, existing image-text matching methods cannot make full use of image features and are easily influenced by noise. To address these limitations, this paper uses multi-layer features from a pre-trained network and proposes a corresponding multi-level image feature fusion algorithm. Guided by the image-text matching objective function, the proposed algorithm fuses the multi-level pre-trained image features and reduces their dimensionality with a multi-layer perceptron to generate fusion features, making full use of the pre-trained features while reducing the influence of noise. The experimental results show that the proposed fusion algorithm makes better use of pre-trained image features and outperforms methods that use single-level features on the image-text matching task.
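    The fusion step described in the abstract (concatenate multi-level pre-trained features, then reduce dimensionality with a multi-layer perceptron) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature names, layer sizes, and the 300-dimensional joint embedding are assumptions, and the MLP weights are random placeholders standing in for parameters that would be trained under the image-text matching objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    # Hypothetical multi-level features for one image, e.g. pooled
    # activations from three layers of a pre-trained CNN. The
    # dimensionalities below are illustrative, not from the paper.
    f_low  = rng.standard_normal(256)    # shallow-layer feature
    f_mid  = rng.standard_normal(512)    # middle-layer feature
    f_high = rng.standard_normal(1024)   # deep-layer feature

    # Fusion: concatenate the multi-level features, then reduce the
    # dimensionality with a one-hidden-layer MLP. In the paper these
    # weights would be learned under the matching objective; here they
    # are random placeholders.
    x = np.concatenate([f_low, f_mid, f_high])       # shape (1792,)
    W1 = rng.standard_normal((512, x.size)) * 0.01   # hidden layer
    W2 = rng.standard_normal((300, 512)) * 0.01      # assumed embedding dim
    fused = W2 @ relu(W1 @ x)                        # fused feature, shape (300,)

    print(fused.shape)
    ```

    At matching time, the text side would be embedded into the same 300-dimensional space so that a similarity score between `fused` and the text embedding can drive the objective.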

  • 基于叠层循环神经网络的语义关系分类模型 (A Semantic Relation Classification Model Based on Stacked Recurrent Neural Networks)

    Subjects: Computer Science >> Integration Theory of Computer Science · Submitted: 2018-11-29 · Cooperative journal: 《计算机应用研究》

    Abstract: Methods that combine recurrent neural networks with syntactic structure are widely used in relation classification, where the network automatically acquires features and performs the classification. However, existing methods are mainly built on a single, specific syntactic structure, and a model designed for one syntactic structure cannot be transferred to other structure types. To address this problem, a stacked recurrent neural network model supporting multiple syntactic structures is proposed. The network is built in two layers. First, entity pre-training is performed in the sequence layer: a Bi-LSTM-CRF combined with an attention mechanism improves the model's focus on entity information in the text sequence, yielding more accurate entity feature information, which in turn promotes better classification in the relation layer. Second, in the relation layer, a Bi-Tree-LSTM is stacked on top of the sequence layer; the hidden states and entity feature information of the sequence layer are passed into the relation layer, where three different syntactic structures are learned with weighted shared parameters and the semantic relation is finally classified. The experimental results show that the model achieves a macro-F1 of 85.9% on the SemEval-2010 Task 8 corpus and further improves the robustness of the model.