Blockwise_Base_Dev_UNET
Blkwise_Base_Schnell_UNET
T5xxl_BF16
CLIP_L_Large_BF16
Flux Blockwise
Original

#fp8
#flux.1
#flux1.d
#base-model
#FLUX

Flux Blockwise (Mixed Precision Model)

I had to build several custom tools to make this mixed-precision model possible; to my knowledge it is the first one built this way.


  • Faster and more accurate than any other FP8-quantized model currently available
  • Works in ComfyUI and Forge, but Forge must be set to a BF16 UNET
  • In ComfyUI, load it as a diffuser model and USE THE DEFAULT WEIGHT DTYPE
  • FP16 upcasting should not be used unless absolutely necessary, such as when running on CPU or IPEX
  • FORGE: set COMMANDLINE_ARGS= --unet-in-bf16 --vae-in-fp32
  • Other than the need to force Forge into BF16 (with an FP32 VAE optionally), it should work the same as the DEV model, with the added benefit of being 5GB smaller than the full BF16 model
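For Forge users, the flags above go in the launcher script. A minimal sketch of `webui-user.bat` on Windows, assuming the stock Forge template (only the `COMMANDLINE_ARGS` line comes from the model notes; the rest is the standard launcher boilerplate):

```shell
:: webui-user.bat -- Forge launcher with the BF16 UNET / FP32 VAE settings
:: recommended above for this mixed-precision model
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--unet-in-bf16 --vae-in-fp32

call webui.bat
```

On Linux/macOS the equivalent would be `export COMMANDLINE_ARGS="--unet-in-bf16 --vae-in-fp32"` in `webui-user.sh`.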


It turns out that, to my knowledge, every quantized model up to this point, including my own, has been built sub-optimally relative to Black Forest Labs' recommendations.


Only the UNET blocks should be quantized in the diffuser model, and they should be upcast to BF16, not FP16, at compute time (ComfyUI does this correctly).
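The blockwise policy described above can be sketched as a simple per-parameter dtype rule. This is a minimal illustration, not the author's actual tooling; the `double_blocks.`/`single_blocks.` name prefixes are assumptions based on Flux's published layout:

```python
# Sketch of a blockwise mixed-precision policy: quantize only the UNET
# transformer blocks to FP8 storage and keep everything else in BF16
# (with FP8 weights upcast to BF16, not FP16, at compute time).
# The prefixes below are assumed from Flux's layout, not confirmed.

UNET_BLOCK_PREFIXES = ("double_blocks.", "single_blocks.")

def target_dtype(param_name: str) -> str:
    """Return the storage dtype for one parameter by name.

    Only parameters inside the UNET transformer blocks are quantized
    to FP8; embeddings, norms, and the final layer stay in BF16.
    """
    if param_name.startswith(UNET_BLOCK_PREFIXES):
        return "float8_e4m3fn"   # quantized storage
    return "bfloat16"            # kept at BF16 precision

# A few illustrative Flux-style parameter names:
names = [
    "img_in.weight",
    "double_blocks.0.img_attn.qkv.weight",
    "single_blocks.7.linear1.weight",
    "final_layer.linear.weight",
]
for n in names:
    print(f"{target_dtype(n):14s} {n}")
```

In a real conversion the same rule would drive `tensor.to(torch.float8_e4m3fn)` versus `tensor.to(torch.bfloat16)` while re-saving the checkpoint.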





Hippo Image remix

Lion Image remix


I am currently trying to work out how to follow the Black Forest recommendations while using GGUF.


Ratings & Reviews

5.0 / 5
0 ratings

Not enough ratings or reviews yet

STRWHERE
Announcements
2024-11-29
Model published
2025-10-16
Model info updated
Model Details
Type
Checkpoint
Published
2024-11-28
Base model
Flux.1 D
Model parameters
clip_skip: 1
Training parameters
Epochs: 1
Steps: 0
License
Creation license
Online image generation
Merging allowed
Download allowed
Commercial license
Generated images may be sold or used commercially
Model may be resold, or sold after merging