Much better overall
Trained with the new model (AnythingV5)
Model now comes with both CKPT and SAFETENSORS files

For sampling methods, use Euler a (best), DDIM (second best), or DPM++ 2M Karras
Step 1: Download the SAFETENSORS and VAE files.
Step 2: Put the SAFETENSORS file under "stable-diffusion-webui\models\Stable-diffusion"
Step 3: Put the VAE file under "stable-diffusion-webui\models\VAE"
Step 4: Done! Enjoy the model
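If you prefer to script the setup, here is a minimal Python sketch of steps 1-4; the WebUI folder and the downloaded filenames are placeholders, so adjust them to your own install and download locations.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your own install and download locations.
webui = Path("stable-diffusion-webui")
model_file = Path("downloads/model.safetensors")
vae_file = Path("downloads/model.vae.pt")

# Step 2: the model checkpoint goes into models/Stable-diffusion
shutil.copy(model_file, webui / "models" / "Stable-diffusion" / model_file.name)

# Step 3: the VAE goes into models/VAE
vae_dir = webui / "models" / "VAE"
vae_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(vae_file, vae_dir / vae_file.name)
```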
-Use a minimal negative prompt for best results
-Use Euler a and 20-25 steps for best results
-Use Danbooru tags
-I used a clip skip of 2 (optional)
-I also used the Latent (nearest-exact) upscaler with 20 hires steps and a denoising strength of 0.5 to improve image quality and detail (optional)
DO NOT USE A DENOISING STRENGTH BELOW 0.45 FOR BEST RESULTS
EXAMPLE PROMPT
((Masterpiece)), (best quality), (1girl), red hair, beautiful red eyes, classroom, black glasses, school uniform
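For reference, here is a rough sketch of the recommended settings outside the WebUI, using the diffusers library. The model and VAE filenames are placeholders, and the ((...)) emphasis syntax is an A1111 convention that the plain diffusers pipeline does not parse.

```python
import torch
from diffusers import (AutoencoderKL, EulerAncestralDiscreteScheduler,
                       StableDiffusionPipeline)

# Placeholder filenames -- point these at the downloaded SAFETENSORS and VAE files.
vae = AutoencoderKL.from_single_file("./model.vae.pt", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_single_file(
    "./model.safetensors", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Euler a sampler, as recommended above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = ("((Masterpiece)), (best quality), (1girl), red hair, beautiful red eyes, "
          "classroom, black glasses, school uniform")
negative = "lowres, bad anatomy"  # placeholder; keep the negative prompt minimal

image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=25,  # 20-25 steps recommended
    clip_skip=1,             # optional; intended to approximate the WebUI's "clip skip 2"
    width=512,
    height=512,
).images[0]
image.save("example.png")
```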
For the VAE, I used the same one as Grapefruit
I will try to improve and update this model by adding other images, although I'm not too familiar with SD and training models, so I will most likely stick to only merging models.
_______
-LoRAs have not been tested yet, but they should most likely work
-Use the Latent (nearest-exact) upscaler for best results (see the sketch after this list)
-I will try to update the model at least once per week
-Image generation in the A1111 WebUI normally took around 10 seconds on a 3080 Ti with 12 GB of VRAM
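Below is a rough approximation of that hires upscale pass, again with diffusers and placeholder filenames. The WebUI's Latent (nearest-exact) upscaler works on latents, which the standard diffusers pipelines don't expose directly, so this sketch resizes the decoded image instead and re-denoises it at strength 0.5; treat it as a loose equivalent, not the exact same workflow.

```python
import torch
from diffusers import (EulerAncestralDiscreteScheduler, StableDiffusionImg2ImgPipeline,
                       StableDiffusionPipeline)

pipe = StableDiffusionPipeline.from_single_file(
    "./model.safetensors", torch_dtype=torch.float16  # placeholder filename
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Reuse the loaded components for the second (img2img) pass.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

prompt = "1girl, red hair, beautiful red eyes, classroom, black glasses, school uniform"

# First pass at the base resolution.
base = pipe(prompt, width=512, height=512, num_inference_steps=25).images[0]

# Second pass: upscale 2x, then denoise at strength 0.5 with 20 steps,
# loosely mirroring the hires steps / denoise settings listed above.
upscaled = base.resize((1024, 1024))
final = img2img(prompt, image=upscaled, strength=0.5, num_inference_steps=20).images[0]
final.save("hires.png")
```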