Journal article in Frontiers in Neurorobotics, 2021 (Université de Rennes open archive)

Fractional Wavelet-Based Generative Scattering Networks

Abstract

Generative adversarial networks (GANs) and variational autoencoders (VAEs) provide impressive image generation from Gaussian white noise, but both are difficult to train because a generator (or encoder) and a discriminator (or decoder) must be trained simultaneously, which easily leads to unstable training. To solve or alleviate these synchronous training problems of GANs and VAEs, researchers recently proposed generative scattering networks (GSNs), which use wavelet scattering networks (ScatNets) as the encoder to obtain features (or ScatNet embeddings) and convolutional neural networks (CNNs) as the decoder to generate an image. The advantage of GSNs is that the parameters of ScatNets do not need to be learned; the disadvantage is that the representational ability of ScatNets is slightly weaker than that of CNNs. In addition, the dimensionality reduction performed with principal component analysis (PCA) can easily lead to overfitting during the training of GSNs and therefore degrade the quality of the generated images at test time. To further improve the quality of generated images while keeping the advantages of GSNs, this study proposes generative fractional scattering networks (GFRSNs), which use the more expressive fractional wavelet scattering networks (FrScatNets) instead of ScatNets as the encoder to obtain features (or FrScatNet embeddings) and use CNN decoders similar to those of GSNs to generate an image. Additionally, this study develops a new dimensionality reduction method named feature-map fusion (FMF) as an alternative to PCA to better retain the information of FrScatNets; it also discusses the effect of image fusion on the quality of the generated image. Experimental results on the CIFAR-10 and CelebA datasets show that the proposed GFRSNs generate better images than the original GSNs on the test sets. Experimental results of the proposed GFRSNs with deep convolutional GAN (DCGAN), progressive GAN (PGAN), and CycleGAN are also given.
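The abstract describes an encoder/decoder pipeline in which a fixed (non-learned) scattering transform produces embeddings, a feature-map fusion step reduces their dimensionality, and a trainable CNN decoder generates the image. The sketch below (PyTorch with kymatio) illustrates that pipeline only in rough outline and under explicit assumptions: the standard Scattering2D is used as a stand-in for the fractional wavelet scattering encoder (FrScatNet), the FeatureMapFusion layer simply averages groups of feature maps and is a hypothetical placeholder rather than the paper's exact FMF rule, and the decoder is a generic transposed-convolution network, not the authors' architecture.

```python
# Minimal sketch of the GSN/GFRSN-style pipeline described in the abstract.
# Assumptions (not from the paper): Scattering2D stands in for FrScatNet,
# FeatureMapFusion is a made-up group-averaging placeholder for FMF, and the
# decoder is a generic upsampling CNN.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D


class FeatureMapFusion(nn.Module):
    """Hypothetical FMF layer: reduce the scattering-channel axis by averaging
    fixed-size groups of feature maps instead of applying PCA."""
    def __init__(self, group_size=3):
        super().__init__()
        self.group_size = group_size

    def forward(self, s):
        # s: (B, C, K, H, W) -> merge color and scattering axes, then group-average.
        b, c, k, h, w = s.shape
        s = s.reshape(b, c * k, h, w)
        pad = (-s.shape[1]) % self.group_size
        if pad:
            s = torch.cat([s, s[:, :pad]], dim=1)             # pad by wrapping channels
        return s.reshape(b, -1, self.group_size, h, w).mean(2)  # (B, C*K/group, H, W)


class Decoder(nn.Module):
    """Generic CNN decoder: upsample 8x8 fused feature maps back to a 32x32 image."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 128, 4, stride=2, padding=1),  # 8 -> 16
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),     # 16 -> 32
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


# Fixed (non-learned) scattering encoder for 32x32 CIFAR-10-sized images.
scattering = Scattering2D(J=2, shape=(32, 32))
fmf = FeatureMapFusion(group_size=3)
decoder = Decoder(in_ch=3 * 81 // 3)   # 81 scattering paths per color channel at J=2, L=8

x = torch.randn(4, 3, 32, 32)          # a batch of images
with torch.no_grad():
    feats = scattering(x)              # (4, 3, 81, 8, 8) fixed embeddings
    fused = fmf(feats)                 # (4, 81, 8, 8) after group-averaging
    recon = decoder(fused)             # (4, 3, 32, 32) generated image
print(fused.shape, recon.shape)
```

In the GSN/GFRSN setting only the decoder would be trained (for example with a pixel-wise reconstruction loss), since, as the abstract notes, the scattering encoder has no learned parameters.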
Main file

Wu_Fractional.pdf (1.5 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03467433, version 1 (02-06-2022)

Licence

Attribution (CC BY)

Identifiers

Cite as

Jiasong Wu, Xiang Qiu, Jing Zhang, Fuzhi Wu, Youyong Kong, et al. Fractional Wavelet-Based Generative Scattering Networks. Frontiers in Neurorobotics, 2021, 15, pp. 752752. ⟨10.3389/fnbot.2021.752752⟩. ⟨hal-03467433⟩