Pig Building Environment Optimization Control and Energy Consumption Analysis Based on Deep Reinforcement Learning
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

Supported by the General Program of the National Natural Science Foundation of China (32072787, 32372934), the Northeast Agricultural University "Dongnong Scholar Program" (19YJXG02), and the Heilongjiang Provincial Postdoctoral Project (LBH-Q21070)


Abstract:

In large-scale pig production, environmental quality is critical to the health, growth, and development of pigs. To achieve precise regulation of the pig building environment, an IoT-based intelligent environment control system for pig buildings was developed with an STM32 microcontroller as its core controller. The system comprises remote monitoring platforms on a PC terminal and a mobile APP as well as an on-site touch-screen monitoring platform, enabling real-time control of the pig building environment. In addition, an optimized control strategy for the pig building environment based on the double deep Q-network (Double DQN) was proposed. Operation in an actual pig building showed that the average indoor temperature and relative humidity could be maintained at (20.53±1.72)℃ and (74.16±7.84)%. Compared with the traditional temperature-threshold control strategy, the indoor temperature, relative humidity, NH3 concentration, and CO2 concentration under the Double DQN strategy were closer to the expected values (temperature 19℃, relative humidity 75%, NH3 concentration 10 μL/L by volume, CO2 concentration 800 μL/L by volume); the maximum relative errors of indoor temperature and relative humidity were 3.7% and 2.5% lower, respectively, than those under the temperature-threshold strategy. Furthermore, the average delays of sensor data upload and control command delivery were 226 ms and 140.4 ms, respectively, indicating low monitoring and control latency and high stability. Under the Double DQN strategy, the total running time of the three fans over one day was 28.01 h with a total power consumption of 11.4 kW·h, about 7.39% less than that of the traditional temperature-threshold method. Therefore, the proposed control system, which integrates a deep reinforcement learning strategy, helps improve the environmental quality of pig buildings and raises the level of automated and intelligent control of the livestock housing environment.
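The abstract describes the Double DQN control strategy only at a high level. The Python/PyTorch sketch below illustrates the two ingredients such a strategy typically combines: a reward that penalizes deviation from the stated setpoints (19℃, 75% relative humidity, 10 μL/L NH3, 800 μL/L CO2) plus energy use, and the Double DQN target update in which the online network selects the next action while the target network evaluates it. The network architecture, the 3-fan action encoding, the normalization scales, and the energy weight are illustrative assumptions and are not taken from the paper.

# Minimal Double DQN sketch for environment control (illustrative assumptions only).
import torch
import torch.nn as nn

# State: [temperature (deg C), relative humidity (%), NH3 (uL/L), CO2 (uL/L)]
SETPOINTS = torch.tensor([19.0, 75.0, 10.0, 800.0])
SCALES    = torch.tensor([5.0, 20.0, 10.0, 400.0])   # assumed normalization ranges
N_ACTIONS = 8                                        # assumed: on/off combinations of 3 fans (2^3)

def reward(state: torch.Tensor, energy_kwh: float, w_energy: float = 0.1) -> torch.Tensor:
    """Reward rises as the environment approaches the setpoints and falls with energy use."""
    deviation = torch.abs(state - SETPOINTS) / SCALES
    return -deviation.sum() - w_energy * energy_kwh

class QNet(nn.Module):
    """Small fully connected Q-network mapping the 4-D state to action values."""
    def __init__(self, n_state: int = 4, n_action: int = N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_action))
    def forward(self, x):
        return self.net(x)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())          # target net is a periodically synced copy
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma = 0.99

def double_dqn_update(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (s, a, r, s', done)."""
    with torch.no_grad():
        # Double DQN: the online net selects the greedy next action,
        # the target net evaluates it (reduces Q-value overestimation).
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example update with a random mini-batch of 32 transitions (illustrative only).
s  = torch.randn(32, 4); s2 = torch.randn(32, 4)
a  = torch.randint(0, N_ACTIONS, (32,))
r  = torch.randn(32); d = torch.zeros(32)
double_dqn_update(s, a, r, s2, d)

In a deployment of this kind, the trained online network would pick the fan action for the current sensor reading, and the target network would be re-synced to the online network every fixed number of steps; those details are design choices, not reported results.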

Cite this article:

XIE Qiuju, WANG Shengchao, MUSABIMANA J, GUO Yuhuan, LIU Honggui, BAO Jun. Pig Building Environment Optimization Control and Energy Consumption Analysis Based on Deep Reinforcement Learning[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(11): 376-384, 430.

History
  • Received: 2023-05-12
  • Revised:
  • Accepted:
  • Published online: 2023-11-10
  • Publication date: