“Image-Text” Association Enhanced Multi-modal Swine Disease Knowledge Graph Fusion

CSTR:
Author:
Affiliation:
Author biography:
Corresponding author:
CLC number:
Fund project: Supported by the General Program of the National Natural Science Foundation of China (32472007) and the Key Project of the Scientific Research Program for Higher Education Institutions of Anhui Province (Natural Science) (2023AH051020)

Abstract:

Traditional swine disease prevention and control relies mainly on human expertise, so diseases may go undiagnosed because of human oversight. To address this, a multi-modal swine disease knowledge graph was constructed to help managers better understand the relationships among pigs and to provide a solid data foundation for subsequently identifying potential disease transmission paths and anomalies. First, swine disease data were collected from multiple sources, and two preliminary multi-modal swine disease knowledge graphs were built through knowledge extraction and image matching. Second, a multi-modal fusion method based on enhanced “image-text” association was proposed: a multi-head attention mechanism learns the semantic associations between images and text, reducing the negative effect of visual ambiguity in the swine disease images and thereby enhancing the vector representations of swine disease entities. Finally, by computing the similarity between entity representations, the swine disease entities of the two multi-modal datasets were fused into a single knowledge graph with higher knowledge completeness. Experiments showed that the proposed fusion method achieved strong performance on the swine disease entity alignment task, improving alignment accuracy (Hits@1) by 0.033 over existing methods; on the general-purpose datasets DBPZH-EN, DBPFR-EN, and DBPJA-EN, alignment accuracy improved by 0.152, 0.236, and 0.180, respectively, demonstrating the effectiveness of the method for multi-modal knowledge graph fusion.
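The core of the fusion step described above is a multi-head attention module that lets an entity's text embedding attend over its associated image embeddings, so that ambiguous visual evidence is down-weighted before the enhanced entity representation is formed. The sketch below illustrates this idea in PyTorch under stated assumptions: the text and image embeddings are precomputed, and the module name ImageTextFusion, the dimensions, and the residual design are illustrative rather than the authors' implementation.

    # Minimal sketch (PyTorch): enhance a swine-disease entity embedding by letting
    # its text embedding attend over its candidate image embeddings.
    # Module and variable names are illustrative, not the paper's code.
    import torch
    import torch.nn as nn


    class ImageTextFusion(nn.Module):
        def __init__(self, dim: int = 256, num_heads: int = 4):
            super().__init__()
            # Multi-head attention learns which images agree with the text,
            # down-weighting ambiguous visual evidence.
            self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, text_emb: torch.Tensor, img_emb: torch.Tensor) -> torch.Tensor:
            # text_emb: (batch, 1, dim)    -- one textual embedding per entity
            # img_emb:  (batch, n_img, dim) -- several candidate images per entity
            attended, _ = self.attn(query=text_emb, key=img_emb, value=img_emb)
            # Residual connection keeps the textual signal dominant when images are ambiguous.
            fused = self.norm(text_emb + attended)
            return fused.squeeze(1)  # (batch, dim) enhanced entity representation


    if __name__ == "__main__":
        fusion = ImageTextFusion(dim=256, num_heads=4)
        text = torch.randn(8, 1, 256)    # 8 entities, one text embedding each
        images = torch.randn(8, 5, 256)  # 5 image embeddings per entity
        print(fusion(text, images).shape)  # torch.Size([8, 256])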
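The final alignment step matches entities across the two graphs by the similarity of their enhanced representations, and the reported metric is Hits@1, i.e. how often the true counterpart is ranked first. The following sketch shows one common way to compute cosine-similarity rankings and Hits@k; the function name hits_at_k and the toy data are assumptions, not the paper's evaluation code.

    # Minimal sketch of similarity-based alignment and the Hits@k metric.
    import torch
    import torch.nn.functional as F


    def hits_at_k(src_emb: torch.Tensor, tgt_emb: torch.Tensor, k: int = 1) -> float:
        """src_emb[i] and tgt_emb[i] are embeddings of the same entity in the two graphs."""
        src = F.normalize(src_emb, dim=1)
        tgt = F.normalize(tgt_emb, dim=1)
        sim = src @ tgt.T                  # cosine similarity matrix (n_src, n_tgt)
        topk = sim.topk(k, dim=1).indices  # indices of the k most similar targets
        gold = torch.arange(src.size(0)).unsqueeze(1)
        return (topk == gold).any(dim=1).float().mean().item()


    if __name__ == "__main__":
        n, dim = 100, 256
        src = torch.randn(n, dim)
        tgt = src + 0.05 * torch.randn(n, dim)  # noisy copies stand in for the second graph
        print(f"Hits@1 = {hits_at_k(src, tgt, k=1):.3f}")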

Cite this article

JIANG Tingting, XU Ao, WU Feifei, YANG Shuai, HE Jin, GU Lichuan. “Image-Text” Association Enhanced Multi-modal Swine Disease Knowledge Graph Fusion[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(1): 56-64.

History
  • Received: 2024-11-01
  • Last revised:
  • Accepted:
  • Published online: 2025-01-10
  • Published in print: