[1] Xu Y, Qiao R F. Research on data augmentation method for stained images based on CycleGAN[J]. Information Technology and Informatization, 2024, (12): 155-158. (in Chinese)
[2] Guo H. Research on image inpainting technology based on generative adversarial networks[J]. Changjiang Information & Communications, 2024, 37(12): 64-66. (in Chinese)
[3] Kumar A, Soni S, Chauhan S, et al. Navigating the realm of generative models: GANs, diffusion, limitations, and future prospects: A review[C] // Proceedings of Fifth International Conference on Computing, Communications, and Cyber-Security. Singapore: Springer, 2024: 301-319.
[4] Gui J, Sun Z, Wen Y, et al. A review on generative adversarial networks: Algorithms, theory, and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(4): 3313-3332.
[5] Lin Z K, Xu J L, Bao X A. Improved model of face attribute editing based on STGAN[J]. Journal of Zhejiang Sci-Tech University (Natural Sciences), 2023, 49(3): 285-292. (in Chinese)
[6] Huang C, Hu Q Y, Huang Z S. Dehazing algorithm for maritime images based on improved CycleGAN[J]. Journal of Shanghai Maritime University, 2025, 46(1): 17-22, 111. (in Chinese)
[7] Chan E R, Monteiro M, Kellnhofer P, et al. Pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis[C] // 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 20-25, 2021, Nashville, TN, USA. IEEE, 2021: 5795-5805.
[8] Nirkin Y, Keller Y, Hassner T. FSGANv2: Improved subject agnostic face swapping and reenactment[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 560-575.
[9] Hou H, Xu J, Hou Y K, et al. Semi-cycled generative adversarial networks for real-world face super-resolution[J]. IEEE Transactions on Image Processing, 2023, 32: 1184-1199.
[10] Wen L, Wang Y, Li X Y. A new cycle-consistent adversarial networks with attention mechanism for surface defect classification with small samples[J]. IEEE Transactions on Industrial Informatics, 2022, 18(12): 8988-8998.
[11] Bao X A, Gao C B, Zhang N, et al. Image super-resolution method based on generative adversarial networks[J]. Journal of Zhejiang Sci-Tech University (Natural Sciences), 2019, 41(4): 499-508. (in Chinese)
[12] Tang J, Guo H T, Xia H, et al. A survey of image generation for industrial processes and its applications[J]. Acta Automatica Sinica, 2024, 50(2): 211-240. (in Chinese)
[13] Sun Z G, Peng X J, Liu H X, et al. Scalloping suppression of GF-3 ScanSAR images based on self-attention mechanism and CycleGAN[J]. Journal of Optoelectronics·Laser, 2023, 34(12): 1279-1287. (in Chinese)
[14] Wu K, Huang J, Ma Y, et al. Cycle-retinex: Unpaired low-light image enhancement via retinex-inline CycleGAN[J]. IEEE Transactions on Multimedia, 2024, 26: 1213-1228.
[15] Zhou Y F, Jiang R H, Wu X, et al. BranchGAN: Unsupervised mutual image-to-image transfer with a single encoder and dual decoders[J]. IEEE Transactions on Multimedia, 2019, 21(12): 3136-3149.
[16] Yang C, Shen Y, Zhou B. Semantic hierarchy emerges in deep generative representations for scene synthesis[J]. International Journal of Computer Vision, 2021, 129(5): 1451-1466.
[17] Zhang X, Ku S P. Face super-resolution reconstruction method based on generative adversarial networks[J]. Journal of Jilin University (Engineering and Technology Edition), 2025, 55(1): 333-338. (in Chinese)
[18] Sloboda T, Hudec L, Benešovič W. xAI-CycleGAN, a cycle-consistent generative assistive network[C] // Computer Vision Systems. ICVS 2023. Cham: Springer, 2023: 403-411.
[19] Woo S, Debnath S, Hu R, et al. ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders[EB/OL]. (2023-01-02)[2025-07-15]. https://arxiv.org/abs/2301.00808.
[20] Jabeen S, Li X, Amin M S, et al. A review on methods and applications in multimodal deep learning[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(2s): 1-41.
[21] Li H, Wang L, Liu J. A review of deep learning-based image style transfer research[J]. The Imaging Science Journal, 2025, 73(4): 504-526.
[22] Choi Y, Uh Y, Yoo J, et al. StarGAN v2: Diverse image synthesis for multiple domains[C] // 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020: 8185-8194.