Create Workflow
Built on SeaArt ComfyUI

You May Also Like

Featured Workflows

LTX2.3-Audio-video generation

LTX-2.3 is an open-source audio-video foundation model released by Lightricks. Its core feature is not simply generating video alone, or producing video first and adding audio later; instead, it places both video and audio within a single generation framework, directly producing synchronized visuals and sound. Officially it is described as a DiT-based audio-video foundation model, meaning a joint audio-video generation model built on the Diffusion Transformer architecture.

Compared with many traditional video generation approaches, the biggest difference in LTX-2.3 is its native audio-visual synchronization. If a prompt includes speaking, singing, ambient sound, or rhythmic motion, the model attempts to align lip movements, actions, and sound within a single generation process, rather than relying on post-processing to dub audio or correct lip sync afterward. This makes it especially valuable for dialogue videos, character singing, and short narrative scenes.

Rating: 4.3
By SeaArt Comfy Helper
Happy Horse

Happy Horse 1.0 is an open-source AI video generation model released in April 2026. Upon its launch, it topped the Artificial Analysis video generation leaderboard, becoming the most powerful AI video generator available today.

It features 15 billion parameters in a unified Transformer architecture with 40-layer self-attention. Its standout capability is generating video and audio simultaneously in a single pass, achieving tight synchronization between visuals and sound. It supports lip sync in 7 languages (English, Mandarin, Cantonese, Japanese, Korean, German, and French), making it especially useful for digital avatars, voiceover videos, and similar applications.

Happy Horse 1.0 outputs 1080p HD quality in clips of 5 to 8 seconds per generation. Thanks to its 8-step DMD-2 distillation acceleration, generation takes approximately 10 to 38 seconds, making it quite efficient. It uses a unified architecture to process text, image, video, and audio tokens together, rather than relying on a traditional multi-module combination. This design yields more consistent and harmonious output quality.

By SeaArt Comfy Helper
ERNIE-Image-Turbo

Model Overview

ERNIE-Image is an open-source text-to-image generation model developed by Baidu's Wenxin (ERNIE) team. Built on a single-stream Diffusion Transformer (DiT) architecture with 8 billion parameters, it operates within a Latent Diffusion Model (LDM) framework. The model's core philosophy emphasizes not only visual aesthetics but also controllability: in content creation scenarios such as commercial posters, comics, and multi-panel layouts, accurate content realization matters just as much as visual appeal.

Core Capabilities

Native multilingual support: natively understands Chinese, English, and Japanese, supporting culturally authentic outputs and idiomatic expressions; particularly well suited to East Asian content creation.
Precise text rendering: the strongest text rendering among open-source models; supports dense typography, long-form text, and layout-sensitive content in both Chinese and English; ideal for text-heavy imagery such as poster titles, comic dialogue boxes, and UI interfaces.
Complex instruction following: reliably handles multi-object relationships, complex descriptions, and knowledge-intensive content.

By SeaArt Comfy Helper
Flux.2 Pro&Flex

This workflow provides access to two distinct versions: FLUX.2 Pro and FLUX.2 Flex. You can switch between them based on your specific needs for image precision and cost efficiency.

🧩 Versions & Capabilities

1. FLUX.2 Pro
Capabilities: generates high-quality images; ideal for most standard creative tasks, style exploration, and rapid generation.
Pricing (credits): text only, 55 (≤1024px) / 70 (>1024px); image input, 80 (≤1024px) / 100 (>1024px).

2. FLUX.2 Flex
Capabilities: compared with Pro, Flex excels at complex lighting, intricate textures, and adherence to long, complex prompts. It is the premier choice for maximum image quality, commercial poster output, and high-precision editing tasks.
Pricing (credits): text only, 110 (≤1024px) / 140 (>1024px); image input, 220 (≤1024px) / 260 (>1024px).

Rating: 4.9
By SeaArt Comfy Helper
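For clarity, the pricing tiers above can be expressed as a small lookup. This is an illustrative sketch only: the credit values come from the description above, but the function and its names are hypothetical, not a SeaArt or FLUX API.

```python
# Illustrative credit-cost lookup for the FLUX.2 Pro/Flex tiers listed above.
# (version, has_image_input) -> (cost at <=1024px, cost at >1024px)
CREDIT_TABLE = {
    ("pro", False): (55, 70),
    ("pro", True): (80, 100),
    ("flex", False): (110, 140),
    ("flex", True): (220, 260),
}

def credit_cost(version, longest_side_px, has_image_input):
    """Return the credit cost of one generation, per the table above."""
    low, high = CREDIT_TABLE[(version, has_image_input)]
    return low if longest_side_px <= 1024 else high

print(credit_cost("pro", 1024, False))   # 55
print(credit_cost("flex", 2048, True))   # 260
```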

Wan Video

Wan2.2 VACE - Multimodal control-KJ

This workflow continues the "unified editing/control" paradigm on the Wan 2.2 backbone. The 2.2 backbone adopts a Mixture-of-Experts (MoE) design, with high-noise and low-noise experts operating at different denoising stages, to improve quality and detail while keeping inference costs manageable. A representative controllable variant is Wan2.2-VACE-Fun-A14B, which supports multi-modal control conditions (Canny, Depth, OpenPose, MLSD, Trajectory, etc.).

A typical workflow: provide a reference image (to preserve identity/appearance) plus a driving video or its parsed control signals (e.g., a pose sequence, a trajectory, or time-varying depth/edges) to generate a video driven by that reference image. The VACE/Fun family provides these temporal control interfaces and the unified task support.

Rating: 4.8
By SeaArt Comfy Helper
Wan2.2‑Fun-Inp-KJ

Wan2.2-Fun-InP is part of the Wan2.2-Fun series. It supports conditioning on a start frame and an end frame, estimating the in-between transition to produce temporally consistent video for controllable image-to-video applications.

What it addresses: traditional image-to-video workflows typically extend motion from a single starting image. By adding an optional end keyframe, Fun-InP helps the motion, composition, and overall content progress toward a specified target, making transitions easier to control and the sequence more coherent.

Inputs: a start-frame image and an end-frame image (plus an optional text prompt / control signals).
Output: a video clip made up of interpolated middle frames, with the first and last frames visually consistent with the provided keyframes.

Rating: 4.5
By SeaArt Comfy Helper
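To make that input/output contract concrete, here is a toy baseline in Python: a plain linear crossfade between the two keyframes. This is not what Fun-InP does internally (the model synthesizes real motion), but it shows the shape of the task: the first frame matches the start image, the last frame matches the end image, and the frames in between are interpolated.

```python
import numpy as np

# Toy baseline only: linear crossfade between the start and end keyframes.
# Fun-InP generates genuine motion; this just illustrates the contract
# (frame 0 == start image, last frame == end image).
def naive_inbetween(start, end, n_frames):
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)        # 0.0 at the first frame, 1.0 at the last
        frames.append((1 - t) * start + t * end)
    return frames

start = np.zeros((4, 4, 3))           # stand-ins for real keyframe images
end = np.ones((4, 4, 3))
clip = naive_inbetween(start, end, 16)
assert np.allclose(clip[0], start) and np.allclose(clip[-1], end)
```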
Wan2.1 Minimax-Remover - Video erase -KJ

Core focus: video-level object removal. Given a sequence of video frames and a corresponding mask, it seamlessly removes the masked object and fills in the background while maintaining temporal consistency, minimizing artifacts and remnants.

Method highlights:
Minimum-maximum optimization: tames bad noise during training and inference, improving the model's robustness to masked regions and reducing the probability of the object regenerating.
Two-stage architecture: first, a simplified DiT (Diffusion Transformer) structure learns the removal capability; then, a version with fewer sampling steps and faster inference is obtained through "CFG de-distillation."
Efficiency: very few inference steps (approximately 6 in the official example) and no reliance on CFG, resulting in high speed and low resource consumption, suitable for long videos and batch processing.

Rating: 3.0
By SeaArt Comfy Helper
LongCat-Video extension

🐱 LongCat-Video: Infinite Video Extension Workflow

One-sentence intro: break the duration limit of AI video generation 🚀

What can it do? This is an advanced workflow based on the Wan2.1 model, designed to solve the core pain points of AI videos being "too short" and "disjointed when extended."

♾️ Infinite extension: just provide an image or a short video clip, and the workflow will automatically generate subsequent frames like a relay race, theoretically allowing infinite generation.

Seamless "invisible" stitching: it automatically trims the awkward beginnings of extended segments, making the transition between clips smooth as silk, with no visible stitching marks.

Use cases: creating ultra-long looping landscape videos; producing coherent narrative shorts no longer limited by the 5-second barrier.

Rating: 4.4
By SeaArt Comfy Helper
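The "relay race" loop described above can be sketched in a few lines. In this sketch, generate_segment is a hypothetical stand-in for one Wan2.1 video-extension pass and CONTEXT_FRAMES is an assumed overlap size; the point is the trim-and-append logic that keeps the stitch invisible.

```python
CONTEXT_FRAMES = 8   # assumed overlap: tail frames that seed the next segment

def extend_video(seed_frames, generate_segment, target_len):
    """Relay-race extension: condition each new segment on the clip's tail,
    then trim the overlapping head before stitching it on."""
    clip = list(seed_frames)
    while len(clip) < target_len:
        context = clip[-CONTEXT_FRAMES:]       # tail of the clip so far
        segment = generate_segment(context)    # segment begins by re-rendering the context
        clip.extend(segment[CONTEXT_FRAMES:])  # drop the re-rendered head
    return clip[:target_len]

# toy driver: a fake "model" that echoes its context, then adds 8 new frames
def fake_segment(context):
    return list(context) + ["new"] * 8

clip = extend_video([0] * 8, fake_segment, 40)
assert clip[:8] == [0] * 8 and len(clip) == 40
```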

New Featured

Challenges
Basics
Video Generation
Audio Generation
3D Generation
FLUX
Style
Design
Photography
Image Processing
Creative Play

Welcome to SeaArt AI Workflows

Streamline your creative process with SeaArt's AI art generator workflows, designed to meet the diverse needs of artists, designers, and creatives. From AI images to AI video, SeaArt AI provides everything you need to realize your artistic vision.

Why Choose SeaArt AI's ComfyUI Workflows?

Simple Interface

SeaArt AI's intuitive interface makes workflows easy to configure. Even without programming experience, you can master every workflow.

Customizable Workflows

Design workflows your way. From advanced LoRA training to intricate text-to-image generation, every step can be adjusted to your needs.

High Efficiency

SeaArt streamlines the AI art creation process, speeding up render times and reducing technical barriers so you can quickly generate stunning visuals.

Multiple Workflows on SeaArt AI

Thousands of AI Art Creation Workflows

Expand your artistic horizons with SeaArt workflows. Access thousands of preset workflows to easily generate AI art in many formats, such as text-to-image, image-to-image, and image-to-video. These workflows integrate powerful AI models such as Flux and SD 3.5, along with other popular options including conditional image generation, giving you great flexibility to create stunning visuals that match your personal taste.

Customizable Workflows on SeaArt AI

Take Control with Customizable Workflows

With SeaArt workflows, you have full control over the generation process. We offer powerful customization options that let you tailor workflows to your specific needs, such as adjusting parameters, swapping AI models, and fine-tuning settings to ensure the final result matches your creative vision.

Frequently Asked Questions

What are ComfyUI workflows?

SeaArt AI's workflows are an innovative tool that goes beyond simple text prompts. Unlike traditional AI art generators, SeaArt provides a visual workflow system in which you can build custom workflows and precisely control the image and video generation process.

What types of AI art can I generate with workflows?

Our workflows let you easily generate many types of AI art, including realistic portraits, fantasy landscapes, anime characters, and abstract creations. You can do text-to-image, image-to-image, and image-to-video generation, apply style variations, and even generate 3D models.

Can beginners use ComfyUI workflows?

Yes! With our easy-to-use drag-and-drop interface and real-time previews, SeaArt's workflows suit beginners and advanced users alike, making AI art creation simple.

Can I customize my workflows?

Yes. SeaArt AI offers a range of customizable settings that let you adapt workflows to the needs of your specific project.