Abstract
Low-quality or scarce data has posed significant challenges for training deep neural networks in practice. While classical data augmentation cannot produce substantially new data, diffusion models open a new door to building self-evolving AI by generating high-quality, diverse synthetic data through text-guided prompts. However, text-only guidance cannot control the synthetic images' proximity to the original images, which results in out-of-distribution data detrimental to model performance. To overcome this limitation, we study image guidance, which yields a spectrum of interpolations between synthetic and real images. With stronger image guidance, the generated images are similar to the training data but hard to learn from; with weaker image guidance, the synthetic images are easier for the model but exhibit a larger distribution gap from the original data. This full spectrum of generated data enables us to build a novel "Diffusion Curriculum (DisCL)". DisCL adjusts the image guidance level of image synthesis at each training stage: it identifies hard samples for the model and assesses the most effective guidance level of synthetic images for learning them. We apply DisCL to two challenging tasks: long-tail (LT) classification and learning from low-quality data. In both, DisCL first focuses on high-quality, lower-guidance images to learn prototypical features, as a warm-up before learning from higher-guidance images that may be weaker in diversity or quality. Extensive experiments showcase gains of 2.7% and 2.1% in OOD and ID macro-accuracy, respectively, when applying DisCL to the iWildCam dataset. On ImageNet-LT, DisCL improves the base model's tail-class accuracy from 4.4% to 23.64% and leads to a 4.02% improvement in all-class accuracy.
Method

DisCL consists of two phases: (Phase 1) Synthetic-to-Real Data Generation and (Phase 2) Generative Curriculum Learning.
In Phase 1, we identify "hard" samples in the training set and use them as guidance to generate a spectrum of synthetic-to-real images by varying the image guidance strength $\lambda$.
In Phase 2, a curriculum (Adaptive or Non-Adaptive) selects a guidance level $\lambda_i$ for each training stage: adaptive schedules maximize expected learning progress, while non-adaptive schedules follow a preset plan. At each stage, the synthetic data selected by the schedule is combined with real samples to train the model.
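To make Phase 1 concrete, the sketch below approximates the synthetic-to-real spectrum with an off-the-shelf img2img diffusion pipeline from Hugging Face diffusers, where the denoising `strength` acts as the inverse of image guidance. This is a minimal illustration rather than the exact DisCL generator: the checkpoint, the prompt template, and the mapping strength = 1 − $\lambda$ are our assumptions.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Off-the-shelf img2img pipeline (assumption: Stable Diffusion v1.5).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate_spectrum(real_image: Image.Image, class_name: str,
                      lambdas=(0.9, 0.7, 0.5, 0.3)):
    """Generate one synthetic image per image-guidance level lambda.

    Assumption: lambda maps to the img2img `strength` argument as
    strength = 1 - lambda. Since `strength` is the fraction of denoising
    steps applied to a noised copy of the real image, lower strength
    stays closer to the real image (stronger image guidance).
    """
    prompt = f"a photo of a {class_name}"  # illustrative prompt template
    spectrum = {}
    for lam in lambdas:
        out = pipe(prompt=prompt, image=real_image,
                   strength=1.0 - lam, guidance_scale=7.5).images[0]
        spectrum[lam] = out
    return spectrum
```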

Figure galleries: example spectra of synthetic images generated by DisCL on the ImageNet-LT, iWildCam, CIFAR-100, and iNaturalist datasets.
Curriculum Strategies

Non-adaptive Curriculum Strategy:
Exposes the model first to diverse synthetic images of tail classes, then progressively shifts to a task-specific distribution that resembles the original images.
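A minimal sketch of such a preset schedule, assuming evenly spaced guidance levels and treating $\lambda = 1.0$ as the original real images (the exact levels and stage boundaries are illustrative):

```python
def nonadaptive_schedule(epoch: int, total_epochs: int,
                         lambdas=(0.3, 0.5, 0.7, 0.9, 1.0)):
    """Preset synthetic-to-real schedule over image-guidance levels.

    Starts at the most synthetic level and ends at lambda = 1.0, which we
    take here to denote the original real images (an assumption); training
    is split into equal-length stages, one per guidance level.
    """
    stage = min(epoch * len(lambdas) // total_epochs, len(lambdas) - 1)
    return lambdas[stage]
```

For example, with `total_epochs=100` and five levels, the model spends 20 epochs at each guidance level before moving closer to the real-data distribution.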

Adaptive Curriculum Strategy:
Selects the image guidance level $\lambda$ for each epoch based on learning progress, defined as the improvement in ground-truth class confidence on the validation subset corresponding to each $\lambda$. The guidance level with the highest progress is chosen for the next epoch's training.
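A minimal sketch of this selection rule, assuming one validation loader per guidance level and a dictionary `prev_conf` carrying the previous epoch's mean confidences (the function name and data structures are illustrative):

```python
import torch

@torch.no_grad()
def select_guidance_level(model, val_loaders, prev_conf, device="cuda"):
    """Pick the guidance level whose validation subset gained the most
    ground-truth class confidence since the previous epoch.

    val_loaders: dict mapping lambda -> DataLoader over the synthetic
                 validation subset generated at that guidance level.
    prev_conf:   dict of last epoch's mean confidence per lambda
                 (both structures are illustrative assumptions).
    """
    model.eval()
    progress = {}
    for lam, loader in val_loaders.items():
        confs = []
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            probs = model(images).softmax(dim=-1)
            # Confidence the model assigns to each sample's true class.
            confs.append(probs[torch.arange(len(labels), device=device),
                               labels].mean())
        conf = torch.stack(confs).mean().item()
        progress[lam] = conf - prev_conf.get(lam, 0.0)
        prev_conf[lam] = conf
    # Train the next epoch on the level with the highest progress.
    return max(progress, key=progress.get)
```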
BibTeX
@inproceedings{liang-bhardwaj-zhou-2025-discl,
  title     = "Diffusion Curriculum: Synthetic-to-Real Data Curriculum via Image-Guided Diffusion",
  author    = "Liang, Yijun and Bhardwaj, Shweta and Zhou, Tianyi",
  booktitle = "International Conference on Computer Vision (ICCV)",
  year      = "2025",
}