Original Requirements:
40GB VRAM
80GB System RAM

Now Accessible With:
As low as 8GB VRAM
32GB System RAM
These are memory-optimized GGUF quantizations of the original Flux.1-Heavy-17B model (by city96), making it usable on systems with far less VRAM. The original model is a 17B-parameter self-merge of the 12B Flux.1-dev model, notable as one of the first open-source 17B image models capable of generating coherent images.
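As a rough, back-of-the-envelope guide (not from the original card), the weight footprint of a quantized model is roughly parameter count x bits per weight / 8 bytes. The sketch below illustrates this for a 17B-parameter transformer at a few common GGUF bit-widths; the effective bits-per-weight figures are approximations, and the card does not state which quantization type each VRAM tier uses.

```python
# Rough estimate of transformer weight memory for a 17B-parameter model
# at a few common GGUF bit-widths. The effective bits-per-weight values
# are approximate (block scales add overhead), and real inference needs
# additional VRAM for activations, the text encoders, and the VAE.
PARAMS = 17e9  # parameter count of Flux.1-Heavy-17B

for name, bits_per_weight in [
    ("Q8_0", 8.5),
    ("Q6_K", 6.6),
    ("Q5_K", 5.7),
    ("Q4_K", 4.6),
    ("Q3_K", 3.5),
]:
    gib = PARAMS * bits_per_weight / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB for the transformer weights alone")
```

These estimates line up roughly with the 16GB / 12GB / 8GB tiers listed below once activations and the other pipeline components are accounted for.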
Quantization Options:

VRAM Requirement: 16GB
Best balance of quality and performance
Recommended for users with RTX 3080/3090 or similar GPUs

VRAM Requirement: 12GB
Good quality with a reduced memory footprint
Ideal for RTX 3060 Ti/3070/2080 Ti users

VRAM Requirement: 8GB
Most memory-efficient version
Enables running on mid-range GPUs such as the RTX 3060/2060 Super
(If you are unsure which tier fits your GPU, see the VRAM-check sketch after this list.)
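To pick a tier before downloading, you can check your GPU's total memory. This is a minimal sketch using PyTorch; the thresholds simply mirror the tiers listed above and are a rough guide only, since the OS, other processes, the text encoders, and the VAE also consume VRAM.

```python
import torch

def suggest_tier() -> str:
    """Suggest a quantization tier based on total GPU memory.

    Thresholds mirror the 16GB / 12GB / 8GB tiers above; they are a
    rough guide only, since other processes and the rest of the
    pipeline also consume VRAM.
    """
    if not torch.cuda.is_available():
        return "No CUDA GPU detected; consider CPU offloading or a smaller model."
    total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    if total_gib >= 16:
        return "16GB-tier quantization (best quality/performance balance)"
    if total_gib >= 12:
        return "12GB-tier quantization"
    if total_gib >= 8:
        return "8GB-tier quantization (most memory-efficient)"
    return "Below 8GB: expect heavy offloading, or consider a smaller model."

print(suggest_tier())
```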
Key Features:
Maintains the core capabilities of the original Flux.1-Heavy-17B model
Optimized for different VRAM configurations
Enables broader hardware compatibility without requiring high-end GPUs
Designed to run smoothly within the stated VRAM budgets
Dramatically reduced resource requirements compared to the original model
Installation:
1. Download your preferred quantization version (see the download sketch below).
2. Place the GGUF file in your models directory.
3. Update your configuration to point to the new model file.
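The following is a minimal download-and-place sketch using the huggingface_hub client. The repo id, filename, and models directory are placeholders, not values from this card; substitute the quantization file you chose and the folder your UI expects (ComfyUI, for example, typically loads GGUF UNet files from models/unet via the ComfyUI-GGUF custom node).

```python
# Hedged sketch: repo id, filename, and models directory below are
# placeholders -- replace them with the actual repo and file you chose
# and the model folder your UI reads from.
from pathlib import Path
from huggingface_hub import hf_hub_download

REPO_ID = "your-namespace/Flux.1-Heavy-17B-GGUF"   # placeholder repo id
FILENAME = "flux.1-heavy-17b-Q4_K_S.gguf"          # placeholder file name
MODELS_DIR = Path("ComfyUI/models/unet")           # adjust to your setup

MODELS_DIR.mkdir(parents=True, exist_ok=True)
local_path = hf_hub_download(
    repo_id=REPO_ID,
    filename=FILENAME,
    local_dir=MODELS_DIR,  # download directly into the models folder
)
print(f"GGUF file ready at: {local_path}")
```

After the file is in place, point your UI or configuration at the new GGUF path as described in step 3.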
Credits:
Original model: city96 (Flux.1-Heavy-17B)
Base architecture: Flux.1-dev (12B parameter model)
Notes:
Performance may vary depending on your specific hardware configuration.
Choose the quantization level based on your available VRAM and quality requirements.
Lower quantization levels may show slight quality degradation compared to the original model.
1. The rights to reposted models belong to original creators.
2. Original creators should contact SeaArt.AI staff through official channels to claim their models. We are committed to protecting every creator's rights.
