Details

Recommended: v1.3

Versions: v1.3, V1.2, V1.1, V1m, V1, V1to1.3-inpainting

Hassaku (?????? model)

Tags: #nude #character #anime #base model #???? #nudity #??????

Hassaku aims to be a model with a bright, clear anime style. The model's focus is ???? images, but with a high emphasis on good-looking SFW images as well. Join my Discord for everything related to anime models and art. You can support me on my Patreon, and if you are interested, we also have a collaboration of multiple AI artists named Sinful Sketch Society that you can support!

My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit (old ?????? model)

Supporters:

Thanks to my supporters Riyu, SETI, Jelly, Alessandro and Kodokuna on my Patreon!

You can support me on my Patreon, where you can get my other models and early access to Hassaku versions.

_____________________________________________________

Using the model:

Use mostly danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK; but instead of {}, use (), since stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt, and "worst quality" and "low quality" in the negative prompt.

My negative prompt is: (low quality, worst quality:1.4), with extra monochrome, signature, text or logo added when needed.
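As a minimal sketch, the prompt advice above can be wrapped in two small helpers; the function names here are illustrative, not part of any real tool:

```python
def build_positive(tags):
    """Prepend the recommended quality tags to a list of danbooru tags."""
    return ", ".join(["masterpiece", "best quality"] + list(tags))

def build_negative(extra=()):
    """Weighted negative prompt in stable-diffusion-webui () syntax."""
    return ", ".join(["(low quality, worst quality:1.4)"] + list(extra))

print(build_positive(["1girl", "red eyes"]))
# masterpiece, best quality, 1girl, red eyes
print(build_negative(["monochrome", "signature"]))
# (low quality, worst quality:1.4), monochrome, signature
```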

Use a clip skip of 1 or 2. Clip skip 2 is better for private parts, img2img and prompt following. Clip skip 1 is visually better because, I assume, the model has more time and freedom there. I use clip skip 2.

Don't use face restore, and avoid underscores: type red eyes, not red_eyes.
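The tag-formatting rules above (spaces instead of underscores, () instead of NovelAI-style {}) can be sketched as a hypothetical normalizer, assuming plain string rewriting is enough for simple prompts:

```python
def normalize_tag(tag: str) -> str:
    """Rewrite a danbooru tag for this model: 'red_eyes' -> 'red eyes'."""
    return tag.replace("_", " ")

def convert_emphasis(prompt: str) -> str:
    """Convert {}-style emphasis to stable-diffusion-webui ()-style."""
    return prompt.replace("{", "(").replace("}", ")")

print(normalize_tag("red_eyes"))            # red eyes
print(convert_emphasis("{{best quality}}")) # ((best quality))
```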

Don't go to really high resolutions. Every model, Hassaku included, gets lost in the vastness of big images and has a much higher chance to create, for example, a second ????.

_____________________________________________________

Loras:

Every LoRA that is built to work on anyV3 or the OrangeMixes works on Hassaku too. Some can be found here, here or on Civitai by lottalewds, Trauter, Your_computer, ekune or lykon.

_____________________________________________________

Black result fix (VAE bug in the web UI): use --no-half-vae in your command line arguments.
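A minimal sketch of where that flag goes, assuming a standard stable-diffusion-webui install (script names vary by platform):

```shell
# Linux/macOS: pass the flag directly to the launch script
./webui.sh --no-half-vae
```

On Windows, the equivalent is adding `--no-half-vae` to the `COMMANDLINE_ARGS` line in `webui-user.bat`.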

I use an Eta noise seed delta of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly proven with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).


Ratings and comments: 4.3/5 (not enough ratings or comments received yet)
Creator: Ikena
Announcement: 2024-10-08
Models published: 2023-06-29

Model details
Type: Checkpoint
Publish time: 2023-06-29
Base model: SD 1.5

License scope
Creative license scope: online image generation, merging, allow download
Commercial license scope: sell or make commercial use of generated images, resell models or sell them after merging