
Faster Zero-shot Multi-modal Entity Linking via Visual-Linguistic Representation

Abstract: Multi-modal entity linking plays a crucial role in a wide range of knowledge-based modal-fusion tasks, e.g., multi-modal retrieval and multi-modal event extraction. We introduce the new ZEro-shot Multi-modal Entity Linking (ZEMEL) task. Its format is similar to multi-modal entity linking, but multi-modal mentions are linked to entities unseen in the knowledge graph, and the purpose of the zero-shot setting is to achieve robust linking in highly specialized domains. At the same time, the inference efficiency of existing models is low when there are many candidate entities. We therefore propose a novel model that leverages visual-linguistic representation through a co-attentional mechanism to address the ZEMEL task, considering the trade-off between model performance and efficiency. We also build a dataset named ZEMELD for the new task, which contains multi-modal data resources collected from Wikipedia, with entities annotated as ground truth. Extensive experimental results on the dataset show that our proposed model is effective, significantly improving precision from 68.93% to 82.62% compared with baselines on the ZEMEL task.
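The abstract describes fusing visual and linguistic representations through a co-attentional mechanism. The sketch below is a minimal, hypothetical illustration of such a co-attention block in PyTorch: text tokens attend over image regions and vice versa, with residual connections and layer normalization. The class name, dimensions, and pairing scheme are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a co-attentional block between visual and linguistic
# token sequences (illustrative assumption, not the paper's exact model).
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    """Cross-attends text tokens to image regions and vice versa."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Text queries attend over visual keys/values.
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Visual queries attend over textual keys/values.
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_image = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor):
        # text:  (batch, n_text_tokens, dim)
        # image: (batch, n_image_regions, dim)
        attended_text, _ = self.text_to_image(text, image, image)
        attended_image, _ = self.image_to_text(image, text, text)
        # Residual connections keep the original uni-modal signal.
        text = self.norm_text(text + attended_text)
        image = self.norm_image(image + attended_image)
        return text, image


if __name__ == "__main__":
    block = CoAttentionBlock()
    txt = torch.randn(2, 16, 768)   # mention text token features
    img = torch.randn(2, 36, 768)   # detected image region features
    fused_txt, fused_img = block(txt, img)
    print(fused_txt.shape, fused_img.shape)
```

In a linking pipeline of this kind, the fused mention representation would typically be scored against candidate entity representations, which is where the performance/efficiency trade-off mentioned in the abstract arises.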

Version History

[V1] 2022-11-28 22:05:08 ChinaXiv:202211.00418V1