This is the Style slice of the We Happy Few B-LoRA.

B-LoRA sliced in two: Content (composition) and Style. I was bored and cooked up a ComfyUI workflow that has proven extremely efficient and flexible; I don't think there is a better way to do style transfer. Separate LoRA loaders for composition and style, samplers, latent decoding through the VAE, and straight into IPAdapter Style & Composition SDXL, then another sampling pass and (only if you're a megalomaniac like me) a third LoRA applied on top, trained separately with the Dreambooth method (trigger + class). I will share the ComfyUI workflow; make sure you know how to rename the CLIP Vision and IPAdapter models, because I use the Unified Loader for IPAdapter.
Of course, each model can also be used separately and independently.
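If you're not in ComfyUI, here is a minimal Python/diffusers sketch of the same slicing idea, just to illustrate it: keep only the LoRA keys that belong to the style block of the SDXL UNet (the standard B-LoRA block split), and optionally stack a Dreambooth LoRA on top. The file names, adapter names and trigger word below are placeholders, not the actual release files.

import torch
from diffusers import StableDiffusionXLPipeline

# Standard B-LoRA split of the SDXL UNet:
STYLE_BLOCK = "unet.up_blocks.0.attentions.1"    # style slice
CONTENT_BLOCK = "unet.up_blocks.0.attentions.0"  # content / composition slice

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def load_b_lora_slice(pipe, lora_file, block, adapter_name):
    """Load only the LoRA weights that touch the given UNet block."""
    state_dict, _ = pipe.lora_state_dict(lora_file)
    sliced = {k: v for k, v in state_dict.items() if block in k}
    pipe.load_lora_weights(sliced, adapter_name=adapter_name)

# Style slice only (placeholder file name):
load_b_lora_slice(pipe, "we_happy_few_b_lora.safetensors", STYLE_BLOCK, "whf_style")

# Optional third Dreambooth LoRA (trigger + class), stacked on top:
# pipe.load_lora_weights("character_dreambooth_lora.safetensors", adapter_name="character")
# pipe.set_adapters(["whf_style", "character"], adapter_weights=[1.0, 0.8])

image = pipe("photo of a city street, whf style", num_inference_steps=30).images[0]
image.save("style_slice_test.png")

Swap STYLE_BLOCK for CONTENT_BLOCK to pull only the composition slice instead.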
All captions used for training + the ComfyUI workflows are here:
https://drive.google.com/drive/folders/1FFS4CnX3RI4B1yhzwqrlEwLd_QGpuYCZ?usp=sharing
