So I stumbled onto an interesting development: training without the text encoder gave BETTER results. Maybe Kohya is training the dual text encoders wrong? Who knows.
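For anyone curious what "without text encoder" means in practice: in kohya's sd-scripts this should correspond to passing the `--network_train_unet_only` flag to `train_network.py`, which skips text encoder training and only trains the UNet LoRA weights. The sketch below is just an illustration; the paths, base model, and hyperparameters are placeholders, not my actual settings.

```python
# Rough sketch of a kohya sd-scripts run with text encoder training disabled.
# All paths, the base model, and the hyperparameters here are placeholders.
import subprocess

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "/models/zack3d.safetensors",  # placeholder base model
    "--train_data_dir", "/data/plumlucky",                            # placeholder dataset dir
    "--output_dir", "/output/plumlucky_lora",
    "--network_module", "networks.lora",
    "--network_train_unet_only",        # skip the text encoder(s), train only the UNet
    "--network_dim", "32",
    "--learning_rate", "1e-4",
    "--max_train_epochs", "10",
]
subprocess.run(cmd, check=True)
```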
The SDXL 3.0 one is just a test and behaves a bit weirdly: sometimes it kicks in, sometimes it doesn't. If your image isn't picking up much 'style', make the prompt more furry-focused and/or turn up the strength values in ComfyUI. All my images used 1.25-2.0 for both. This is the very first test LoRA that actually worked for me, so it's not the greatest.
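If you're not in ComfyUI, something like the diffusers snippet below is roughly equivalent: load the LoRA and push the scale toward that 1.25-2.0 range. The checkpoint path, LoRA file name, and prompt are placeholders, and diffusers uses a single LoRA scale rather than ComfyUI's separate strength_model/strength_clip sliders, so treat it as an approximation, not my exact setup.

```python
# Rough diffusers equivalent of loading this LoRA at high strength in ComfyUI.
# The checkpoint path, LoRA path, and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/models/zack3d.safetensors",   # placeholder base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("/loras/plumlucky_v3.safetensors")  # placeholder LoRA file
pipe.fuse_lora(lora_scale=1.5)  # roughly the 1.25-2.0 strength range used for the previews

image = pipe("your furry-focused prompt here", num_inference_steps=28).images[0]
image.save("out.png")
```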
Trained this a few days ago on Plumlucky art from Twitter: V1 uses Zack3D as a base and V2 uses the NAI base. It works well on merges that include Zack3D/NAI as well. Trained with many tags using the E621 ConvNeXt tagger (also by Zack).
V3 Furryrock added. I found more images on the Logan Preshaw Chinese ArtStation account, which seems to be Plumlucky's main and is apparently kinda popular based on a quick Google.
Big thanks to the people who make LoRA guides, especially the one in the Furry Diffusion Discord server, for helping me understand this mess. Also thanks to Plumlucky for tweeting this out, it gave me a boost.
