
[1] WANG Yingming, CHEN Kefeng, PAN Haipeng, et al. WMG-GAN: Weight-map-guided fabric defect image generation algorithm[J]. Journal of Zhejiang Sci-Tech University, 2026, 55-56 (Natural Science No. 1): 114-124.

WMG-GAN: Weight-map-guided fabric defect image generation algorithm

Journal of Zhejiang Sci-Tech University [ISSN: 1673-3851 / CN: 33-1338/TS]

Volume: 55-56
Issue: 2026, Natural Science No. 1
Pages: 114-124
Column:
Publication date: 2026-01-10

Article Info

Title:
WMG-GAN: Weight-map-guided fabric defect image generation algorithm
Article number:
1673-3851(2026)01-0114-11
Author(s):
WANG Yingming, CHEN Kefeng, PAN Haipeng, REN Jia
1. School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China; 2. Changshan Research Institute Co., Ltd. of Zhejiang Sci-Tech University, Quzhou 324299, China
Keywords:
fabric defect image generation; generative adversarial network; CycleGAN; weight map; ConvNeXtV2
CLC number:
TP183
Document code:
A
Abstract:
To address the shortcomings of existing methods in reconstructing background detail and in the quality of generated images, this paper proposes WMG-GAN (Weight-map-guided generative adversarial network), a fabric defect image generation algorithm built on the CycleGAN framework. First, the generator produces a foreground weight map and a feature weight map, enabling selective modification of foreground content while fully preserving background detail and structure. Second, a ConvNeXtV2 module is added to the discriminator to strengthen its feature extraction capability and provide the generator with more accurate gradient feedback. Finally, the learned perceptual image patch similarity (LPIPS) metric is introduced to construct the cycle-consistency loss, so as to optimize the visual quality and realism of the generated images. Comparison and ablation experiments on a real fabric defect dataset show that, relative to the traditional CycleGAN, images generated by the proposed algorithm achieve lower Fréchet inception distance (FID) and LPIPS values, as well as higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). WMG-GAN significantly improves image generation quality, and the images it generates meet the high-accuracy requirements of defect detection algorithms.
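The abstract's foreground-weight-map guidance can be read as a soft compositing step: the weight map selects where the generator's (defect) content replaces the input, while the background passes through unchanged. The sketch below is only an illustration of that reading, not the paper's actual formulation; the function name `weight_map_blend` and the convex-combination rule are assumptions inferred from the abstract's wording.

```python
import numpy as np

def weight_map_blend(x, g_x, w):
    """Composite generator output into the input under a foreground weight map.

    w -> 1 keeps generated (defect) content; w -> 0 keeps the original
    background, so background detail and structure survive untouched.
    """
    w = np.clip(w, 0.0, 1.0)
    return w * g_x + (1.0 - w) * x

# Toy 4x4 single-channel example
x = np.zeros((4, 4))      # clean fabric background
g_x = np.ones((4, 4))     # generator output carrying defect texture
w = np.zeros((4, 4))
w[1:3, 1:3] = 1.0         # foreground weight map marking the defect region
y = weight_map_blend(x, g_x, w)
```

In this toy case only the masked 2x2 region takes the generated value, which mirrors the abstract's claim of "selective modification of foreground content" with complete background preservation.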

References:

[1] 徐勇, 乔茹飞. 基于CycleGAN的脏污图像数据增强方法研究[J]. 信息技术与信息化, 2024, (12): 155-158.
[2] 郭华. 基于生成对抗网络的图像修复技术研究[J]. 长江信息通信, 2024, 37(12): 64-66.
[3] Kumar A, Soni S, Chauhan S, et al. Navigating the realm of generative models: GANs, diffusion, limitations, and future prospects: A review[C] // Proceedings of Fifth International Conference on Computing, Communications, and Cyber-Security. Singapore: Springer, 2024: 301-319.
[4] Gui J, Sun Z, Wen Y, et al. A review on generative adversarial networks: Algorithms, theory, and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(4): 3313-3332.
[5] 林志坤, 许建龙, 包晓安. 基于STGAN的人脸属性编辑改进模型[J]. 浙江理工大学学报(自然科学), 2023, 49(3): 285-292.
[6] 黄超, 胡勤友, 黄子硕. 基于改进CycleGAN的水上图像去雾算法[J]. 上海海事大学学报, 2025, 46(1): 17-22, 111.
[7] Chan E R, Monteiro M, Kellnhofer P, et al. Pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis[C] // 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 20-25, 2021, Nashville, TN, USA. IEEE, 2021: 5795-5805.
[8] Nirkin Y, Keller Y, Hassner T. FSGANv2: Improved subject agnostic face swapping and reenactment[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 560-575.
[9] Hou H, Xu J, Hou Y K, et al. Semi-cycled generative adversarial networks for real-world face super-resolution[J]. IEEE Transactions on Image Processing, 2023, 32: 1184-1199.
[10] Wen L, Wang Y, Li X Y. A new cycle-consistent adversarial networks with attention mechanism for surface defect classification with small samples[J]. IEEE Transactions on Industrial Informatics, 2022, 18(12): 8988-8998.
[11] 包晓安, 高春波, 张娜, 等. 基于生成对抗网络的图像超分辨率方法[J]. 浙江理工大学学报(自然科学版), 2019, 41(4): 499-508.
[12] 汤健, 郭海涛, 夏恒, 等. 面向工业过程的图像生成及其应用研究综述[J]. 自动化学报, 2024, 50(2): 211-240.
[13] 孙增国, 彭学俊, 刘慧霞, 等. 基于自注意力机制和CycleGAN的高分三号ScanSAR图像的扇贝效应抑制[J]. 光电子·激光, 2023, 34(12): 1279-1287.
[14] Wu K, Huang J, Ma Y, et al. Cycle-retinex: Unpaired low-light image enhancement via retinex-inline CycleGAN[J]. IEEE Transactions on Multimedia, 2024, 26: 1213-1228.
[15] Zhou Y F, Jiang R H, Wu X, et al. BranchGAN: Unsupervised mutual image-to-image transfer with a single encoder and dual decoders[J]. IEEE Transactions on Multimedia, 2019, 21(12): 3136-3149.
[16] Yang C, Shen Y, Zhou B. Semantic hierarchy emerges in deep generative representations for scenes synthesis[J]. International Journal of Computer Vision, 2021, 129(5): 1451-1466.
[17] 张曦, 库少平. 基于生成对抗网络的人脸超分辨率重建方法[J]. 吉林大学学报(工学版), 2025, 55(1): 333-338.
[18] Sloboda T, Hudec L, Benešovič W. xAI-CycleGAN, a cycle-consistent generative assistive network[C] // Computer Vision Systems. ICVS 2023. Cham: Springer, 2023: 403-411.
[19] Woo S, Debnath S, Hu R, et al. ConvNeXtV2: Co-designing and scaling convnets with masked autoencoders[EB/OL]. (2023-01-02)[2025-07-15]. https://arxiv.org/abs/2301.00808.
[20] Jabeen S, Li X, Amin M S, et al. A review on methods and applications in multimodal deep learning[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(2s): 1-41.
[21] Li H, Wang L, Liu J. A review of deep learning-based image style transfer research[J]. The Imaging Science Journal, 2025, 73(4): 504-526.
[22] Choi Y, Uh Y, Yoo J, et al. StarGAN v2: Diverse image synthesis for multiple domains[C] // 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020: 8185-8194.


Memo

Fund: Zhejiang Provincial "Pioneer" and "Leading Goose" R&D Program (2023C01062). Received: 2025-07-15. Published online: 2025-11-05.
About the first author: WANG Yingming (2001- ), male, from Shangqiu, Henan; master's student; research interests include machine vision and image generation. Corresponding author: REN Jia, E-mail: jren@zstu.edu.cn
Last update: 2026-01-08