Available Versions: FP8 Version / FP16 Version
Dwayne Johnson aka The Rock FLUX Dev Fine-Tuning / DreamBooth Model for Educational and Research Purposes - Dwayne Johnson aka The Rock FLUX Dev LoRA Model for Educational and Research Purposes - Full Tutorial

#Celebrity
#FLUX

I am sharing full details of how I trained this model, including the dataset: please read the entire post very carefully.

This model is trained purely for educational and research purposes and only for SFW, ethical image generation.

The workflow and the config used in this tutorial can be used to train clothing, items, animals, pets, objects, styles, simply anything.

The uploaded images contain SwarmUI metadata and can be re-generated exactly. The FP16 model was used for these generations, but FP8 should yield almost the same quality. Don't forget that a YOLO face masking model was used in the prompts.

How To Use

Download the model into SwarmUI's diffusion_models folder. You also need the CLIP-L and T5-XXL text encoder models; I recommend the T5-XXL FP16 or Scaled FP8 version.
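
If you prefer a scripted workflow instead of SwarmUI, a minimal Hugging Face diffusers sketch is shown below. It is an illustration only: the checkpoint filename and prompt are placeholders, and it assumes a diffusers version that supports single-file loading of FLUX transformers (the stock FLUX.1-dev repo supplies the CLIP-L and T5-XXL text encoders and the VAE).

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Placeholder path to the downloaded fine-tuned checkpoint.
CKPT = "Dwayne_Johnson_FLUX_Fine_Tuning-000170.safetensors"

# Load the fine-tuned transformer from the single safetensors file, then reuse
# the stock FLUX.1-dev components (CLIP-L, T5-XXL, VAE) for everything else.
transformer = FluxTransformer2DModel.from_single_file(CKPT, torch_dtype=torch.bfloat16)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM usage

image = pipe(
    "photo of ohwx man wearing a suit, studio lighting",
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=3.5,
).images[0]
image.save("ohwx_man.png")
```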

The newest fully public tutorial on how to use it is here:

I have trained both a FLUX LoRA and a Fine-Tuning / DreamBooth model.

Activation token / trigger word: ohwx man

Each training ran for up to 200 epochs, with a checkpoint saved every 10 epochs and shared in the Hugging Face repo below: https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline
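
For the LoRA checkpoints, a hedged diffusers sketch of applying one of the shared files and prompting with the trigger word might look like this (the epoch-160 filename and the prompt are illustrative; any diffusers version with FLUX LoRA support should behave similarly):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Epoch-160 checkpoint, the subjectively best LoRA result reported below.
pipe.load_lora_weights(".", weight_name="Dwayne_Johnson_FLUX_LoRA-000160.safetensors")

# The trigger word "ohwx man" must appear in the prompt.
image = pipe(
    "photo of ohwx man on a beach, natural lighting, detailed skin",
    num_inference_steps=30,
    guidance_scale=3.5,
).images[0]
image.save("ohwx_man_lora.png")
```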

The repository contains experimental results comparing the Fine-Tuning / DreamBooth and LoRA training approaches.

Additional Resources

Environment Setup

  • Kohya GUI Version: 021c6f5ae3055320a56967284e759620c349aa56

  • Torch: 2.5.1

  • xFormers: 0.0.28.post3
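
To verify that a local environment matches the versions above, a quick Python check (assuming torch and xformers are already installed):

```python
# Print the installed versions and CUDA status for comparison with the list above.
import torch
import xformers

print("torch:", torch.__version__)        # expected: 2.5.1 (plus a CUDA suffix)
print("xformers:", xformers.__version__)  # expected: 0.0.28.post3
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```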

Dataset Information

  • Resolution: 1024x1024

  • Dataset Size: 28 images

  • Captions: "ohwx man" (nothing else)

  • Activation Token/Trigger Word: "ohwx man"
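
Since every image uses the same caption, the caption files can be generated in one pass. This is a sketch under the common Kohya convention of one .txt caption file per image; the dataset folder name is a placeholder:

```python
from pathlib import Path

DATASET_DIR = Path("dataset/ohwx_man")  # placeholder folder with the 28 training images
CAPTION = "ohwx man"                    # the only caption text, identical to the trigger word

for img in sorted(DATASET_DIR.iterdir()):
    if img.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
        # One caption .txt per image, next to it, per the Kohya caption convention.
        img.with_suffix(".txt").write_text(CAPTION, encoding="utf-8")
        print(f"wrote caption for {img.name}")
```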

Fine-Tuning / DreamBooth Experiment

Configuration

  • Config File: 48GB_GPU_28200MB_6.4_second_it_Tier_1.json

  • Training: Up to 200 epochs with consistent config

  • Optimal Result: Epoch 170 (subjective assessment)

Results

LoRA Experiment

Configuration

  • Config File: Rank_1_29500MB_8_85_Second_IT.json

  • Training: Up to 200 epochs

  • Optimal Result: Epoch 160 (subjective assessment)

Results

Comparison Results

Key Observations

  • LoRA demonstrates excellent realism but shows more obvious overfitting when generating stylized images.

  • Fine-Tuning / DreamBooth performs better overall than LoRA, as expected.

Model Naming Convention

Fine-Tuning Models

  • Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors

    • 10 epochs

    • 280 steps (28 images × 10 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

  • Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors

    • 20 epochs

    • 560 steps (28 images × 20 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

LoRA Models

  • Dwayne_Johnson_FLUX_LoRA-000010.safetensors

    • 10 epochs

    • 280 steps (28 images × 10 epochs)

    • Batch size: 1

    • Resolution: 1024x1024

  • Dwayne_Johnson_FLUX_LoRA-000020.safetensors

    • 20 epochs

    • 560 steps (28 images × 20 epochs)

    • Batch size: 1

    • Resolution: 1024x1024
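
The checkpoint number maps directly to epochs, and with 28 images at batch size 1 the step count is simply epochs × 28. A small helper that decodes a checkpoint filename accordingly (the filename pattern is inferred from the examples above):

```python
import re

IMAGES = 28      # dataset size
BATCH_SIZE = 1   # batch size used in both experiments

def epochs_and_steps(filename: str) -> tuple[int, int]:
    """Return (epochs, training steps) implied by a checkpoint filename,
    e.g. 'Dwayne_Johnson_FLUX_LoRA-000160.safetensors'."""
    match = re.search(r"-(\d{6})\.safetensors$", filename)
    if not match:
        raise ValueError(f"unexpected checkpoint name: {filename}")
    epochs = int(match.group(1))
    return epochs, epochs * IMAGES // BATCH_SIZE

print(epochs_and_steps("Dwayne_Johnson_FLUX_Fine_Tuning-000170.safetensors"))  # (170, 4760)
print(epochs_and_steps("Dwayne_Johnson_FLUX_LoRA-000160.safetensors"))         # (160, 4480)
```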

Announcements

2024-11-02: Model released
2024-11-05: Model information updated

Model Details

Type: Checkpoint
Release Date: 2024-11-02
Base Model: Flux.1 D
Trigger Words: ohwx man

Version Description

For Full Details, Training Dataset, Tutorial, Guide, Configs, Training Json Files, Workflows, Installers, Resources and All Checkpoints > https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline

This is the FP8-converted version of the original FP16 training.
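
For reference, a minimal sketch of how an FP16/BF16 checkpoint can be cast to FP8 with PyTorch and safetensors is shown below. This is not necessarily the exact tool used for this release; the filenames are placeholders, and real converters often keep biases and norm weights in higher precision, which the size threshold here only approximates:

```python
import torch
from safetensors.torch import load_file, save_file

SRC = "flux_fine_tuning_fp16.safetensors"  # placeholder: original FP16/BF16 checkpoint
DST = "flux_fine_tuning_fp8.safetensors"   # placeholder: FP8 output

state = load_file(SRC)
converted = {}
for name, tensor in state.items():
    # Cast large floating-point weight tensors to float8_e4m3fn; leave small
    # tensors (biases, norm scales) in their original precision for stability.
    if tensor.is_floating_point() and tensor.numel() > 1024:
        converted[name] = tensor.to(torch.float8_e4m3fn)
    else:
        converted[name] = tensor

save_file(converted, DST)
print(f"saved FP8 checkpoint to {DST}")
```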

License Scope

Source: civitai

1. Reposted models are shared for learning and exchange purposes only; their copyright and the right of final interpretation remain with the original author.

2. If the original author wishes to claim this model, please contact SeaArt AI staff through official channels for verification. We are committed to protecting the rights of every creator.

Creation Permissions

Online image generation
Merging allowed
Download allowed

Commercial Permissions

Generated images may be sold or used for commercial purposes
Model resale, or sale after merging, is allowed