Can We Find Strong Lottery Tickets in Generative Models? (AAAI'23)
Sangyeop Yeo1
Yoojin Jang1
Jy-yong Sohn2
Dongyoon Han3
Jaejun Yoo1
1Ulsan National Institute of Science & Technology
2University of Wisconsin-Madison
3NAVER AI Lab

Abstract

Yes. In this paper, we investigate strong lottery tickets in generative models: subnetworks that achieve good generative performance without any weight update. Neural network pruning is considered a cornerstone of model compression for reducing the costs of computation and memory. Unfortunately, pruning generative models has not been extensively explored, and existing pruning algorithms suffer from excessive weight-training costs, performance degradation, limited generalizability, or complicated training. To address these problems, we propose to find strong lottery tickets via moment-matching scores. Our experimental results show that the discovered subnetwork can perform similarly to or better than the trained dense model even when only 10% of the weights remain. To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm for finding them stably.
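The core idea of finding a strong lottery ticket is to keep the network's random weights frozen and instead learn a per-weight importance score, then retain only the top-scoring fraction of weights as a binary mask. The sketch below illustrates this selection step in the style of edge-popup-type supermask methods; the actual algorithm in the paper trains the scores with a moment-matching objective, which is omitted here, and all names (`supermask`, `keep_ratio`) are illustrative assumptions.

```python
import numpy as np

def supermask(scores: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Binary mask keeping the top `keep_ratio` fraction of weights by score.

    The weights themselves are never updated; only `scores` would be trained
    (in the paper, with a moment-matching loss — not shown here).
    """
    k = max(1, int(round(keep_ratio * scores.size)))
    threshold = np.sort(scores.ravel())[-k]
    return (scores >= threshold).astype(scores.dtype)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))           # frozen, randomly initialized weights
scores = rng.random(W.shape)              # learnable importance scores (random init here)

mask = supermask(scores, keep_ratio=0.1)  # keep only 10% of the weights
W_sub = W * mask                          # the subnetwork ("strong lottery ticket")
```

At inference time the subnetwork simply uses `W_sub` in place of the dense layer's weights; no fine-tuning of the surviving weights is required.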


Results

The sparsity value "0.k" denotes the ratio of remaining weights: a higher sparsity value means that a larger fraction of the weights is kept.
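Under this convention, the sparsity value directly gives the fraction of weights that survive pruning. A minimal illustration (the helper name `remaining_weights` is ours, not from the paper):

```python
# Convention used on this page: sparsity 0.1 means 10% of weights remain.
def remaining_weights(total: int, sparsity: float) -> int:
    """Number of weights kept at a given sparsity value."""
    return int(total * sparsity)

remaining_weights(1_000_000, 0.1)  # -> 100000 weights kept out of 1M
```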

GFMN - LSUN Bedroom Dataset

BigGAN - CelebA Dataset

SNGAN - CelebA Dataset


Paper and Supplementary Material

Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo.
Can We Find Strong Lottery Tickets in Generative Models?
Accepted to AAAI 2023
arXiv [Link]



Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2.220574.01), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01336, Artificial Intelligence Graduate School Program (UNIST)), and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00959, (Part 2) Few-Shot Learning of Causal Inference in Vision and Language for Decision Making). This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.