Pusa

Versions: Pusa Low Rank 16, Pusa High Rank 16, Pusa Low Rank 256, Pusa High Rank 256

Pusa is a finetune for Wan 2.2 in LoRA format. It improves video quality.

Requirements: Kijai's ComfyUI-WanVideoWrapper nodes: https://github.com/kijai/ComfyUI-WanVideoWrapper
Note: As of 9/3/2025, ComfyUI has just updated its Pusa nodes, so they should be a little easier to use now.

Various tasks in one model, all supporting 4-step inference with LightX2V: Image-to-Video, Start-End Frames, Video Completion, Video Extension, Text-to-Video, Video Transition, and more.

What Wan 2.2 Pusa Does

  • Video Generation Tasks: It can perform text-to-video, image-to-video (turning an image into a moving scene), video extension, transitions, and even more complex tasks like generating a seamless video between a given start frame and end frame.

  • Multi-Input, Multi-Task: Unlike older models that needed a separate model or training for each task, Wan 2.2 Pusa’s unique design (the “Pusa paradigm”) allows it to generalize to different video tasks with one unified system and minimal additional training.

  • Fast and Efficient: Thanks to its “LightX2V” acceleration and “vectorized timestep adaptation” (VTA), it achieves high-quality results in very few inference steps, drastically reducing the computing power and time needed for video generation.

How Does It Work?

  • Vectorized Timestep Adaptation (VTA): The core innovation is giving each video frame its own timestep instead of one shared value, letting the model adapt motion and transitions independently across the video. Each part of a video can evolve with customized motion, leading to better temporal smoothness and creative control without disturbing the original strengths of the base video model (a minimal sketch follows this list).

  • Mixture of Experts (MoE): Wan 2.2 uses two specialized sub-models: one for "high noise" steps (laying out complex motion) and one for "low noise" steps (preserving fine detail), blending their strengths for better video quality (also sketched after this list).

  • LoRA Fine-Tuning: The Pusa model uses a lightweight, “surgical” adaptation technique (LoRA) to inject new capabilities (like image-to-video) without overwriting what the base model already does well, preserving text-to-video powers while adding new features efficiently.
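
To make the VTA idea concrete, here is a minimal, illustrative Python sketch (not the official Pusa code; the noise schedule, shapes, and helper are assumptions): each frame gets its own timestep, so a conditioning frame can stay clean while the rest of the clip is still noised.

```python
import torch

# A minimal, illustrative sketch (not the official Pusa code): with vectorized
# timestep adaptation (VTA), every frame gets its own diffusion timestep
# instead of the whole clip sharing one scalar value.
num_frames, channels, height, width = 16, 4, 32, 32
frames = torch.randn(num_frames, channels, height, width)  # latent video frames
noise = torch.randn_like(frames)

# Toy noise schedule (assumption): 1000 steps of cumulative alphas.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Standard diffusion: one timestep shared by the whole video.
t_shared = torch.full((num_frames,), 800, dtype=torch.long)

# Pusa-style VTA (illustrative): a vector of per-frame timesteps. For
# image-to-video, the conditioning frame can stay nearly clean (t = 0)
# while the remaining frames are still fully noised.
t_vector = torch.full((num_frames,), 999, dtype=torch.long)
t_vector[0] = 0  # the first frame acts as the image condition

def add_noise(x, timesteps):
    """Noise each frame according to its own timestep."""
    a = alphas_cumprod[timesteps].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alphas_cumprod[timesteps]).sqrt().view(-1, 1, 1, 1)
    return a * x + s * noise

noisy_shared = add_noise(frames, t_shared)  # uniform noise level across frames
noisy_vta = add_noise(frames, t_vector)     # frame 0 stays (almost) clean
```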
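
And a small sketch of the MoE routing idea: a sampler can pick one of the two Wan 2.2 experts per denoising step based on how noisy the latent still is. The boundary value here is a placeholder assumption, not the official setting.

```python
# Illustrative only: route each denoising step to one of the two Wan 2.2
# expert transformers based on how noisy the latent still is.
BOUNDARY_T = 875  # hypothetical switch point on a 0-1000 timestep scale

def pick_expert(t, high_noise_expert, low_noise_expert):
    """Early, noisy steps use the high-noise expert; later steps use the low-noise one."""
    return high_noise_expert if t >= BOUNDARY_T else low_noise_expert

# Example: at t=950 the high-noise expert runs, at t=200 the low-noise one.
print(pick_expert(950, "high_noise_A14B", "low_noise_A14B"))
print(pick_expert(200, "high_noise_A14B", "low_noise_A14B"))
```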

Wan 2.2 Pusa
https://github.com/Yaofang-Liu/Pusa-VidGen

https://huggingface.co/RaphaelLiu/Pusa-Wan2.2-V1

https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

These are rank 256 LoRAs, which is why the file size is so large. These LoRAs are supposed to enhance the video/image quality of Wan 2.2. In my own personal use I often find the effect subtle, but it always improves the video. I don't know if it was trained on , but it works fine with it.

According to the author's page, they should be used at around 1.4-1.5 weight. You can use them with lightx2v for 4-8 steps, or with a normal step count otherwise.
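
For intuition, here is a hedged sketch of what that 1.4-1.5 weight means when a loader merges a LoRA like Pusa into a base layer. The shapes, layer, and alpha value are illustrative assumptions, not the actual Wan 2.2 configuration; loaders such as ComfyUI or Kijai's WanVideoWrapper apply the same idea to every targeted layer of the transformer.

```python
import torch

# Hedged sketch of a LoRA weight merge; names, shapes, and alpha are
# illustrative assumptions, not the real Wan 2.2 layer configuration.
d_out, d_in, rank = 1024, 1024, 256      # rank 256 is why the files are large
W_base = torch.randn(d_out, d_in)        # frozen base weight of one layer
lora_A = torch.randn(rank, d_in) * 0.01  # LoRA down-projection
lora_B = torch.zeros(d_out, rank)        # LoRA up-projection (zero-init)
alpha = 256.0                            # assumed LoRA alpha

def merge_lora(weight, A, B, alpha, rank, strength):
    """Return weight + strength * (alpha / rank) * (B @ A)."""
    return weight + strength * (alpha / rank) * (B @ A)

# The recommended 1.4-1.5 "weight" is the strength factor here.
W_pusa = merge_lora(W_base, lora_A, lora_B, alpha, rank, strength=1.45)
```

At strength 1.0 the LoRA contributes exactly what it learned during training; 1.4-1.5 simply scales that contribution up without touching the frozen base weights.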

Workflow Example: https://civitai.com/models/1923086?modelVersionId=2176586

Model Details
Type: LORA
Published: 2025-09-02
Base Model: Wan Video 2.2 T2V-A14B

License
Source: civitai
Permissions: live streaming, merging, and downloads are allowed; commercial license (generated images can be sold or used commercially; reselling the model or selling it after merging is allowed)