The example images are from the official WanAI release.
We know the Lightx2v_Lora is extracted from the released Lightx2v checkpoint, so why not use the Lightx2v checkpoint directly?
A full checkpoint usually gives better results than a LoRA extracted from it.
It also skips the extra step of loading a LoRA, so it's faster and, in my opinion, the quality is better. Unfortunately, the originally released checkpoint is very large, and I couldn't find a quantized version of it.
So I decided to quantize it myself, choosing the Q4_K_M level because it offers a good balance: smaller and faster while still preserving good generation quality.
Lightx2v (Checkpoint):
https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v
https://huggingface.co/lightx2v/Wan2.1-I2V-14B-720P-StepDistill-CfgDistill-Lightx2v
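For reference, this is roughly the workflow for producing a Q4_K_M GGUF from one of the checkpoints above. It is only a sketch: the script names, flags, and file names are assumptions based on the ComfyUI-GGUF conversion tools and llama.cpp's quantize binary, so check those repositories for the exact usage.

```
# Rough sketch of the GGUF quantization workflow (names/flags are assumptions,
# not the exact commands): ComfyUI-GGUF's tools/convert.py plus a llama-quantize
# binary built from llama.cpp with the ComfyUI-GGUF patch applied.
import subprocess

SRC = "Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v.safetensors"  # assumed source filename
F16 = "wan2.1-i2v-14b-480p-lightx2v-F16.gguf"   # unquantized GGUF produced by step 1 (name assumed)
OUT = "wan2.1-i2v-14b-480p-lightx2v-Q4_K_M.gguf"

# Step 1: convert the safetensors checkpoint to an unquantized GGUF file.
subprocess.run(["python", "tools/convert.py", "--src", SRC], check=True)

# Step 2: quantize the GGUF down to Q4_K_M.
subprocess.run(["./llama-quantize", F16, OUT, "Q4_K_M"], check=True)
```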
1. Reposted models are shared for learning and sharing purposes only; copyright belongs to the original authors.
2. For model verification, please contact us through official channels. We are committed to protecting creators' rights.
