Diversity is Definitely Needed: Improving Model-Agnostic Zero-shot Classification via Stable Diffusion

From MaRDI portal



DOI: 10.5281/zenodo.10558823 · Zenodo: 10558823 · MaRDI QID: Q6694279 · FDO: Q6694279

Dataset published at Zenodo repository.

Kien Nguyen Thanh, Jordan Shipard, Clinton Fookes, Arnold Wiliem, Wei Xiang

Publication date: 17 April 2023

Copyright license: Creative Commons Attribution 4.0 International



In this work, we investigate the problem of Model-Agnostic Zero-Shot Classification (MA-ZSC), which refers to training non-specific classification architectures (downstream models) to classify real images without using any real images during training. Recent research has demonstrated that generating synthetic training images using diffusion models provides a potential solution to address MA-ZSC. However, the performance of this approach currently falls short of that achieved by large-scale vision-language models. One possible explanation is a potentially significant domain gap between synthetic and real images. Our work offers a fresh perspective on the problem by providing initial insights that MA-ZSC performance can be improved by increasing the diversity of images in the generated dataset. We propose a set of modifications to the text-to-image generation process using a pre-trained diffusion model to enhance diversity, which we refer to as our bag of tricks. Our approach shows notable improvements in various classification architectures, with results comparable to state-of-the-art models such as CLIP. To validate our approach, we conduct experiments on CIFAR10, CIFAR100, and EuroSAT, the last of which is particularly difficult for zero-shot classification due to its satellite image domain. We evaluate our approach with five classification architectures, including ResNet and ViT. Our findings provide initial insights into the problem of MA-ZSC using diffusion models.
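The abstract describes diversifying the text-to-image generation process to produce a more varied synthetic training set. The sketch below is only an illustration of that general idea, not the paper's actual "bag of tricks": the style and domain modifiers and the guidance-scale range are hypothetical choices, and the resulting prompt/guidance pairs would then be passed to a pre-trained text-to-image pipeline such as Stable Diffusion to synthesize class-labeled training images.

```python
import random

# Hypothetical prompt modifiers -- illustrative only; the publication's
# actual diversity modifications may differ.
STYLE_MODIFIERS = ["a photo of", "a painting of", "a sketch of", "a close-up of"]
DOMAIN_HINTS = ["", ", outdoors", ", on a plain background", ", at night"]

def make_diverse_prompts(class_name, n, seed=0):
    """Build n varied text prompts for one class label, pairing each
    prompt with a randomized classifier-free guidance scale."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        prompt = f"{rng.choice(STYLE_MODIFIERS)} a {class_name}{rng.choice(DOMAIN_HINTS)}"
        # Lower guidance scales generally yield more varied (less prompt-faithful) images.
        guidance_scale = rng.uniform(1.0, 7.5)
        pairs.append((prompt, guidance_scale))
    return pairs

# Example: four varied prompt/guidance pairs for the class "cat".
samples = make_diverse_prompts("cat", 4)
```

Each `(prompt, guidance_scale)` pair would drive one generation call, so a single class label yields images spanning several rendering styles and contexts rather than near-duplicates of one canonical depiction.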






