详情
推薦
v1.0
SD1.5 Direct Preference Optimization - DPO

#basemodel
#DPO

Not my model - it comes from the Hugging Face repo linked below. This is an excellent merge model, particularly in the middle blocks. Try it yourself: take your favorite model, block merge this one in at about 10% on the input blocks and 20% on the middle block, and adjust from there.
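A minimal sketch of that block merge, assuming both checkpoints are ordinary SD1.5 .safetensors files in the usual LDM key layout (the filenames are placeholders); most merge UIs expose the same per-block weights directly:

```python
# Hedged sketch: linearly blend only the U-Net input and middle blocks,
# using the ~10% input / ~20% middle starting ratios suggested above.
from safetensors.torch import load_file, save_file

base = load_file("your_favorite_model.safetensors")      # placeholder filename
dpo  = load_file("dpo-sd1.5-text2image-v1.safetensors")  # placeholder filename

def dpo_ratio(key: str) -> float:
    """Blend ratio for the DPO weights, keyed on standard LDM U-Net prefixes."""
    if "model.diffusion_model.input_blocks." in key:
        return 0.10
    if "model.diffusion_model.middle_block." in key:
        return 0.20
    return 0.0  # leave output blocks, text encoder, etc. untouched

merged = {}
for key, w in base.items():
    r = dpo_ratio(key)
    if r > 0.0 and key in dpo and dpo[key].shape == w.shape:
        merged[key] = ((1.0 - r) * w.float() + r * dpo[key].float()).to(w.dtype)
    else:
        merged[key] = w

save_file(merged, "merged_model.safetensors")
```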

Original U-Net: https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1

bdsqlsz's release: https://huggingface.co/bdsqlsz/dpo-sd-text2image-v1-fp16

bdsqlsz released the SDXL model here: https://civitai.com/models/237681/dpo-sdxl-fp16 but we poor 1.5 users were left in the dark ages.

I had to do some hacking to get the fp32 version, so you will have to bring your own VAE.
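Since no VAE is baked in, one way to attach your own at load time with diffusers (a sketch; stabilityai/sd-vae-ft-mse is just a common community choice, and the local filename is a placeholder):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Standalone VAE, since this checkpoint ships without one.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

pipe = StableDiffusionPipeline.from_single_file(
    "dpo-sd1.5-text2image-v1-fp32.safetensors",  # placeholder filename
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a cat wearing a space suit, detailed illustration").images[0]
image.save("dpo_test.png")
```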

Diffusion Model Alignment Using Direct Preference Optimization

Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models with human preferences by directly optimizing on human comparison data. See the paper: Diffusion Model Alignment Using Direct Preference Optimization.
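For reference, this is the standard DPO objective the paper adapts to diffusion models: a policy π_θ is trained against a frozen reference π_ref on preferred/rejected pairs (y_w, y_l) for a prompt x, with β controlling how far the model may drift from the reference (the diffusion version replaces the likelihood ratios with an ELBO-based approximation):

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```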

The SD1.5 model is fine-tuned from stable-diffusion-v1-5 on the offline human-preference dataset pickapic_v2.

The SDXL model is fine-tuned from stable-diffusion-xl-base-1.0 on the same offline human-preference dataset, pickapic_v2.
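To try the SD1.5 variant straight from the Hugging Face repo, the usual pattern is to swap the DPO U-Net into a stock SD1.5 pipeline (a sketch, assuming the repo follows the standard diffusers layout with a unet subfolder):

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# DPO-tuned U-Net, dropped into an otherwise stock SD1.5 pipeline.
unet = UNet2DConditionModel.from_pretrained(
    "mhdang/dpo-sd1.5-text2image-v1", subfolder="unet", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16
).to("cuda")

image = pipe("two dogs playing chess in a park", guidance_scale=7.5).images[0]
image.save("dpo_sd15.png")
```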


Uploader: pyn
Announcements
2023-12-22: Model released
2023-12-22: Model info updated
Model Details
Type: Checkpoint
Published: 2023-12-22
Base model: SD 1.5
License scope
Source: civitai

1. Reposted models are shared for learning and exchange only; copyright and the final right of interpretation remain with the original author.

2. If the original author wishes to claim this model, please contact SeaArt AI staff through official channels for verification. We are committed to protecting the rights of every creator.

Creation permissions
Online image generation
Merging allowed
Downloads allowed
Commercial permissions
Generated images may be sold or used for commercial purposes
Model may be resold, or sold after merging