The fp16 model is the original from Hugging Face (https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP/blob/main/diffusion_pytorch_model.safetensors), and there is also a converted fp8e4m3fn version (make sure you download the one you want).
This is a reupload of https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP, including an fp8 conversion for people who can't run the 1.3B model at 16-bit precision.
Wan2.1-Fun-1.3B-InP is a 1.3-billion-parameter img2vid Wan model trained by Alibaba-PAI and initialized from the 1.3B t2v model. Its weights are similar to the 14B i2v models, but at the size of the 1.3B model, making it an easy-to-run but still good-quality i2v model. It was trained for start- and end-frame inpainting, and setting just a start frame allows it to do plain i2v. Wan 14B i2v workflows can be used with it.
Originally, using diffusion-pipe for lora training required my fork (make sure you're on the patch-1 branch):
git clone --recurse-submodules https://github.com/gitmylo/diffusion-pipe -b patch-1
The PR has since been merged, so the regular diffusion-pipe repository can be used now:
git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe
1. Reuploaded models are shared for learning and exchange only; copyright and final interpretation rights remain with the original author.
2. If the original author wishes to claim the model, please contact SeaArt AI (海藝AI) staff through official channels for verification. We are committed to protecting every creator's rights.
