[1] Zhang Q, Li J F, Zhuo L. A survey of vehicle recognition technology[J]. Journal of Beijing University of Technology, 2018, 44(3): 382-392. (in Chinese)
[2] Wang H B, Hou J Y, Chen N. A survey of vehicle re-identification based on deep learning[J]. IEEE Access, 2019, 7: 172443-172469.
[3] Qiu M K, Li X Y. Detail-aware discriminative feature learning model for vehicle re-identification[J/OL]. Acta Scientiarum Naturalium Universitatis Sunyatseni: 1-10. (2021-03-16)[2021-04-06]. https://doi.org/10.13471/j.cnki.acta.snus.2020-03-16-2020B023. (in Chinese)
[4] Xie Y, Zhu J Q, Zeng H Q, et al. Learning matching behavior differences for compressing vehicle re-identification models[C]// IEEE International Conference on Visual Communications and Image Processing. Macau, China: IEEE, 2020: 523-526.
[5] Liu K, Li Y D, Lin W P. A survey of vehicle re-identification technology[J]. Chinese Journal of Intelligent Science and Technology, 2020, 2(1): 10-25. (in Chinese)
[6] Cormier M, Sommer L W, Teutsch M. Low resolution vehicle re-identification based on appearance features for wide area motion imagery[C]// IEEE Winter Applications of Computer Vision Workshops. Lake Placid, NY, USA: IEEE, 2016: 1-7.
[7] Charbonnier S, Pitton A C, Vassilev A. Vehicle re-identification with a single magnetic sensor[C]// IEEE International Instrumentation and Measurement Technology Conference Proceedings. Graz, Austria: IEEE, 2012: 380-385.
[8] He B, Li J, Zhao Y F, et al. Part-regularized near-duplicate vehicle re-identification[C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 3992-4000.
[9] Liu H Y, Tian Y H, Wang Y W, et al. Deep relative distance learning: tell the difference between similar vehicles[C]// IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 2167-2175.
[10] Liu X C, Liu W, Mei T, et al. A deep learning-based approach to progressive vehicle re-identification for urban surveillance[C]// European Conference on Computer Vision (ECCV 2016). Berlin, Germany: Springer, 2016: 869-884.