v1.0
SPO-SDXL_4k-p_10ep_LoRA_webui


Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step

arXiv Paper

Github Code

Project Page

Abstract

Recently, Direct Preference Optimization (DPO) has extended its success from aligning large language models (LLMs) to aligning text-to-image diffusion models with human preferences. Unlike most existing DPO methods that assume all diffusion steps share a consistent preference order with the final generated images, we argue that this assumption neglects step-specific denoising performance and that preference labels should be tailored to each step's contribution.

To address this limitation, we propose Step-aware Preference Optimization (SPO), a novel post-training approach that independently evaluates and adjusts the denoising performance at each step, using a step-aware preference model and a step-wise resampler to ensure accurate step-aware supervision. Specifically, at each denoising step, we sample a pool of images, find a suitable win-lose pair, and, most importantly, randomly select a single image from the pool to initialize the next denoising step. This step-wise resampler process ensures the next win-lose image pair comes from the same image, making the win-lose comparison independent of the previous step. To assess the preferences at each step, we train a separate step-aware preference model that can be applied to both noisy and clean images.

Our experiments with Stable Diffusion v1.5 and SDXL demonstrate that SPO significantly outperforms the latest Diffusion-DPO in aligning generated images with complex, detailed prompts and enhancing aesthetics, while also being more than 20× faster to train. Code and model: https://rockeycoss.github.io/spo.github.io/
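
The step-wise resampling loop described in the abstract can be summarized in pseudocode. The following is a minimal sketch, assuming hypothetical helpers (denoise_one_step, preference_score, step_dpo_loss) that stand in for the actual components; it is not the authors' implementation:

import random

def spo_step(x_t, prompt, t, pool_size=4):
    # Sample a pool of candidate latents by denoising x_t one step, pool_size times.
    pool = [denoise_one_step(x_t, prompt, t) for _ in range(pool_size)]

    # Score each candidate with the step-aware preference model,
    # which is trained to rate both noisy and clean images.
    scores = [preference_score(x, prompt, t) for x in pool]

    # The highest- and lowest-scored candidates form this step's win-lose pair.
    x_win = pool[max(range(pool_size), key=scores.__getitem__)]
    x_lose = pool[min(range(pool_size), key=scores.__getitem__)]

    # DPO-style preference loss computed on this step's pair only.
    loss = step_dpo_loss(x_win, x_lose, x_t, prompt, t)

    # Step-wise resampler: randomly pick ONE candidate to initialize the
    # next step, keeping the next comparison independent of this one.
    x_next = random.choice(pool)
    return loss, x_next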

Model Description

This model is fine-tuned from stable-diffusion-xl-base-1.0 and was trained on 4,000 prompts for 10 epochs. This checkpoint is a LoRA checkpoint. For more information, please visit the project page.
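
For reference, a LoRA checkpoint like this one can be applied on top of the SDXL base model with the diffusers library. This is a minimal sketch; the local checkpoint filename is an assumption (point it at the file downloaded from this page):

import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model this LoRA was fine-tuned from.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Apply the SPO LoRA weights (filename is an assumption).
pipe.load_lora_weights("./SPO-SDXL_4k-p_10ep_LoRA_webui.safetensors")

image = pipe(
    "a portrait of a woman in a garden, detailed, cinematic lighting",
    num_inference_steps=25,
    guidance_scale=5.0,
).images[0]
image.save("spo_sdxl_sample.png")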

Citation

If you find our work useful, please consider giving us a star and citing our work.

@article{liang2024step,
  title={Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step},
  author={Liang, Zhanhao and Yuan, Yuhui and Gu, Shuyang and Chen, Bohan and Hang, Tiankai and Li, Ji and Zheng, Liang},
  journal={arXiv preprint arXiv:2406.04314},
  year={2024}
}

Announcements
2024-06-12
Model published
2024-06-20
Model information updated
Model Details
Type
LoRA
Publication Date
2024-06-12
Base Model
SDXL 1.0
License Scope
Model Source: civitai

1. The rights to republished models belong to their original creators.

2. Original creators who wish to claim their model should contact SeaArt AI staff through official channels. Click to claim

Creation License Scope
Online Images
Create Merges
Allow Downloads
Commercial License
Generated images may be sold or used for commercial purposes
Allow resale of the model or sale after merging