|本期目录/Table of Contents|

[1]杨毅,柯俊.基于 CNN-BiLSTM 的可穿戴传感器数据犬类行为识别[J].浙江理工大学学报,2026,55-56(自科三):306-314.
 YANG Yi,KE Jun.CNN-BiLSTM-based dog activity detection using wearable sensor data[J].Journal of Zhejiang Sci-Tech University,2026,55-56(自科三):306-314.

基于 CNN-BiLSTM 的可穿戴传感器数据犬类行为识别

浙江理工大学学报[ISSN:1673-3851/CN:33-1338/TS]

卷/Volume:
55-56
期数/Issue:
2026年自科第三期
页码/Pages:
306-314
栏目/Column:
出版日期/Publication date:
2026-05-10

文章信息/Info

Title:
CNN-BiLSTM-based dog activity detection using wearable sensor data
文章编号/Article ID:
1673-3851(2026)05-0306-09
作者:
杨毅，柯俊
浙江理工大学机械工程学院，杭州 310018
Author(s):
YANG Yi, KE Jun
School of Mechanical Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
关键词/Keywords:
dog activity recognition; convolutional neural network; wearable sensor; deep learning; bidirectional long short-term memory; robust normalization
分类号/CLC number:
TP183
文献标志码/Document code:
A
摘要/Abstract:
To address three difficulties in dog activity recognition with wearable sensor data, namely difficult feature extraction, class imbalance among samples, and long-range temporal dependencies that are hard to capture, a hybrid recognition method based on data augmentation and CNN-BiLSTM is proposed. The method applies robust normalization (RN) and principal component analysis (PCA) to suppress outliers and reduce feature dimensionality, and uses a resampling strategy to mitigate class imbalance. On this basis, a convolutional neural network (CNN) extracts local spatial features and suppresses high-frequency noise, and a bidirectional long short-term memory network (BiLSTM) is introduced to model bidirectional temporal dependencies. The results show that, compared with the unidirectional CNN-LSTM model, the CNN-BiLSTM model improves accuracy and F1 score by 2.3% and 2.1%, respectively, and in particular raises the F1 score of the complex behavior "playing" by 26.0%. Compared with other mainstream activity recognition algorithms, CNN-BiLSTM maintains high recognition accuracy even when handling as many as nine behavior classes. This study provides a fairly reliable solution for wearable-device-based dog behavior monitoring and recognition.
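The preprocessing stage named in the abstract (robust normalization followed by PCA for dimensionality reduction) can be sketched as below. This is an illustrative NumPy sketch under assumed details (median/IQR scaling, SVD-based PCA, a synthetic 12-channel sensor matrix), not the authors' implementation.

```python
import numpy as np

def robust_normalize(X):
    # Scale each feature by its median and interquartile range (IQR),
    # which limits the influence of outlier sensor readings.
    med = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = np.where((q3 - q1) == 0, 1.0, q3 - q1)
    return (X - med) / iqr

def pca_reduce(X, n_components):
    # Center the data and project it onto the top principal
    # components obtained from the SVD of the data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # e.g. 12 accelerometer/gyroscope channels
X[0] += 50                       # inject an outlier window
Z = pca_reduce(robust_normalize(X), n_components=5)
print(Z.shape)                   # (200, 5)
```

The reduced windows `Z` would then be segmented into sequences and fed to the CNN-BiLSTM classifier; the exact window length and component count used in the paper are not stated here.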

参考文献/References:

[1] Deng R, Zhou G, Tang L, et al. E-DOCRNet: a multi-feature fusion network for dog bark identification [J]. Applied Acoustics, 2024, 220: 109950.

[2] 刘艳秋, 宣传忠, 武佩, 等. 基于 K-means-BP 神经网络的舍饲环境母羊产前运动行为分类识别 [J]. 中国农业大学学报, 2021, 26 (3): 86-95.
[3] Väätäjä H, Majaranta P, Isokoski P, et al. Happy dogs and happy owners: Using dog activity monitoring technology in everyday life [C]//Proceedings of the 5th International Conference on Animal-Computer Interaction. Atlanta, Georgia, USA: ACM, 2018: 1-12.
[4] 李玲. 宠物狗可穿戴设备产品商业模式研究 [D]. 北京: 北京邮电大学, 2017: 13-17.
[5] Ladha C, Hammerla N, Hughes E, et al. Dog’s life: wearable activity recognition for dogs [C]//Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Zurich, Switzerland: ACM, 2013: 415-418.
[6] Gerencsér L, Vásárhelyi G, Nagy M, et al. Identification of behaviour in freely moving dogs (Canis familiaris) using inertial sensors [J]. PLoS One, 2013, 8 (10): e77814.
[7] Davoulos G, Lalakou I, Hatzilygeroudis I. Recognition of dog motion states: Ensemble vs deep learning models [C]//2024 15th International Conference on Information, Intelligence, Systems & Applications (IISA). Chania, Crete, Greece: IEEE, 2024: 1-8.
[8] Kumpulainen P, Cardà A V, Somppi S, et al. Dog behaviour classification with movement sensors placed on the harness and the collar [J]. Applied Animal Behaviour Science, 2021, 241: 105393.
[9] Marcato M, Tedesco S, O’Mahony C, et al. Machine learning based canine posture estimation using inertial data [J]. PLoS One, 2023, 18 (6): e0286311.
[10] Muminov A, Mukhiddinov M, Cho J. Enhanced classification of dog activities with quaternion-based fusion approach on high-dimensional raw data from wearable sensors [J]. Sensors, 2022, 22 (23): 9471.
[11] 翟明欣. 基于三维卷积神经网络的奶山羊行为识别方法研究 [D]. 杨凌: 西北农林科技大学, 2023: 28-31.
[12] 李晓莉, 韩鹏, 李晓光. 基于典型样本的卷积神经网络技术 [J]. 计算机工程与设计, 2020, 41 (4): 1113-1117.
[13] Amano R, Ma J. Recognition and changepoint detection of dogs’ activities of daily living using wearable devices [C]//2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on CyberScience and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). AB, Canada: IEEE, 2021: 693-699.
[14] Hussain A, Begum K, Armand T P T, et al. Long short-term memory (LSTM)-based dog activity detection using accelerometer and gyroscope [J]. Applied Sciences, 2022, 12 (19): 9427.
[15] Kim J, Moon N. Dog behavior recognition based on multimodal data from a camera and wearable device [J]. Applied Sciences, 2022, 12 (6): 3199.
[16] Vehkaoja A, Somppi S, Törnqvist H, et al. Description of movement sensor dataset for dog behavior classification [J]. Data in Brief, 2022, 40: 107822.

相似文献/References:

[1]李斯凡,高法钦.基于卷积神经网络的手写数字识别[J].浙江理工大学学报,2017,37-38(自科3):438.
 LI Sifan,GAO Faqin.Handwritten Numeral Recognition Based on Convolution Neural Network[J].Journal of Zhejiang Sci-Tech University,2017,37-38(自科三):438.
[2]张玮,张华熊.基于卷积神经网络的纺织面料主成分分类[J].浙江理工大学学报,2019,41-42(自科一):1.
 ZHANG Wei,ZHANG Huaxiong.Classification of main components of textile fabrics based on convolutional neural network[J].Journal of Zhejiang Sci-Tech University,2019,41-42(自科三):1.
[3]邓远远,沈炜.基于注意力反馈机制的深度图像标注模型[J].浙江理工大学学报,2019,41-42(自科二):208.
 DENG Yuanyuan,SHEN Wei.Depth image caption model based on attention feedback mechanism[J].Journal of Zhejiang Sci-Tech University,2019,41-42(自科三):208.
[4]陈巧红,董雯,孙麒,等.基于混合神经网络的单文档自动文摘模型[J].浙江理工大学学报,2019,41-42(自科四):489.
 CHEN Qiaohong,DONG Wen,SUN Qi,et al.Single document automatic summarization model based on hybrid neural network[J].Journal of Zhejiang Sci-Tech University,2019,41-42(自科三):489.
[5]陈巧红,王磊,孙麒,等.基于混合神经网络的中文短文本分类模型[J].浙江理工大学学报,2019,41-42(自科四):509.
 CHEN Qiaohong,WANG Lei,SUN Qi,et al.Chinese short text classification model based on hybrid neural network[J].Journal of Zhejiang Sci-Tech University,2019,41-42(自科三):509.
[6]程诚,任佳.基于自适应卷积核的改进CNN数值型数据分类算法[J].浙江理工大学学报,2019,41-42(自科五):657.
 CHENG Cheng,REN Jia.Improved CNN classification algorithm based on adaptive convolution kernel for numerical data[J].Journal of Zhejiang Sci-Tech University,2019,41-42(自科三):657.
[7]田秋红,孙文轩,章立早,等.基于改进GhostNet的轻量级手势图像识别方法[J].浙江理工大学学报,2023,49-50(自科三):300.
 TIAN Qiuhong,SUN Wenxuan,ZHANG Lizao,et al.Lightweight gesture image recognition method based on improved GhostNet[J].Journal of Zhejiang Sci-Tech University,2023,49-50(自科三):300.
[8]祝鹏烜,黄体仁,李旭.MSAG-TransNet:肺部CT图像中新型冠状病毒感染区域的分割模型[J].浙江理工大学学报,2023,49-50(自科六):734.
 ZHU Pengxuan,HUANG Tiren,LI Xu.MSAG-TransNet: Segmentation model of COVID-19 infected areas in lung CT images[J].Journal of Zhejiang Sci-Tech University,2023,49-50(自科三):734.
[9]祝亮亮,郭业才.基于双重注意力网络和内容修复损失的艺术风格迁移[J].浙江理工大学学报,2026,55-56(自科一):105.
 ZHU Liangliang,GUO Yecai.Artistic style transfer based on dual attention network and content restoration loss[J].Journal of Zhejiang Sci-Tech University,2026,55-56(自科三):105.

备注/Memo

备注/Memo:
Funding: National Natural Science Foundation of China (52102430). Received: 2025-12-08. Published online: 2026-03-05.
Author profile: YANG Yi (b. 2001), male, from Shangrao, Jiangxi, China; master's student whose research focuses on intelligent information processing. Corresponding author: KE Jun, E-mail: jlukejun@163.com
更新日期/Last Update: 2026-05-07