Hassaku (hentai model)

Versions: v1.3, V1.2, V1.1, V1m, V1, V1to1.3-inpainting

Tags: nudity, Character, anime model, Base Model, anime

Hassaku aims to be a model with a bright, clear anime style. The model's focus is NSFW images, but with a high emphasis on good-looking SFW images as well. My Discord is for everything related to anime models and art. You can support me on my Patreon, and if you are interested, we also have a collaboration of multiple AI artists named Sinful Sketch Society that you can support!

My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit (old hentai model)

Supporters:

Thanks to my supporters Riyu, SETI, Jelly, Alessandro and Kodokuna on Patreon!

You can support me on my Patreon, where you can get my other models and early access to Hassaku versions.

_____________________________________________________

Using the model:

Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {} use (), since stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt, and "worst quality" and "low quality" in the negative.

My negative prompt is (low quality, worst quality:1.4), with monochrome, signature, text or logo added when needed.
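
Below is a minimal sketch of prompts that follow these recommendations, written in stable-diffusion-webui (A1111) syntax; the character tags are placeholders, only the quality tags come from the notes above.

```python
# Prompt sketch in stable-diffusion-webui syntax (the webui parses () emphasis
# and (tag:weight) weighting; plain diffusers does not).
positive_prompt = (
    "masterpiece, best quality, "        # quality tags recommended above
    "1girl, silver hair, red eyes, "     # hypothetical Danbooru-style content tags
    "(detailed face:1.1)"                # () / (tag:weight) emphasis syntax
)
negative_prompt = (
    "(low quality, worst quality:1.4), " # baseline negative from above
    "monochrome, signature, text, logo"  # extras, added only when needed
)
```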

Use clip skip 1 or 2. Clip skip 2 is better for private parts, img2img and prompt following. Clip skip 1 is visually better, I assume because the model has more time and freedom there. I use clip skip 2.
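
Clip skip is a webui setting. If you run the checkpoint through diffusers instead, a rough equivalent is the clip_skip argument of the pipeline call; this is a sketch assuming a recent diffusers version and an assumed local file name, where clip_skip=1 uses the penultimate CLIP layer and roughly corresponds to the webui's "Clip skip: 2".

```python
# Sketch: "clip skip 2" outside the webui, via diffusers (recent version assumed).
# "hassaku.safetensors" is an assumed local path to the downloaded checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "masterpiece, best quality, 1girl, red eyes",
    negative_prompt="low quality, worst quality",
    clip_skip=1,               # penultimate CLIP layer ~= webui "Clip skip: 2"
    num_inference_steps=25,
).images[0]
image.save("hassaku_clip2.png")
```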

Don't use face restore, and don't use underscores (_): type red eyes, not red_eyes.
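
If you build prompts from Danbooru tag lists programmatically, a tiny illustrative helper like this can strip the underscores:

```python
# Illustrative helper: turn Danbooru-style tags into the spaced form this model expects.
def normalize_tags(tags: list[str]) -> str:
    return ", ".join(tag.replace("_", " ").strip() for tag in tags)

print(normalize_tags(["1girl", "red_eyes", "long_hair"]))
# -> "1girl, red eyes, long hair"
```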

Don't go to really high resolutions. Every model, Hassaku included, gets lost in the vastness of big images and has a much higher chance to create, for example, a second ????.
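
One common workaround (my suggestion, not something the author prescribes) is the usual two-pass approach: generate near SD 1.5's native resolution first, then upscale with img2img, which is essentially what the webui's hires fix does. A sketch with diffusers, with all paths and sizes being assumptions:

```python
# Two-pass sketch: base generation at a modest size, then img2img upscaling.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16   # assumed local checkpoint
).to("cuda")

prompt = "masterpiece, best quality, 1girl, red eyes"
base = pipe(prompt, width=512, height=768).images[0]   # near-native SD 1.5 size

# Reuse the loaded components for an img2img pass instead of sampling at 1024x1536 directly.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
upscaled = img2img(
    prompt=prompt,
    image=base.resize((1024, 1536)),   # simple 2x resize before refinement
    strength=0.5,                      # keep the composition, add detail
).images[0]
upscaled.save("hassaku_hires.png")
```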

_____________________________________________________

LoRAs:

Every LoRA that is built to work on anyV3 or the orangeMixes works on Hassaku too. Some can be found here, here or on Civitai by lottalewds, Trauter, Your_computer, ekune or lykon.
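
If you use diffusers rather than the webui, such a LoRA can be loaded on top of the checkpoint roughly like this (a sketch; the LoRA file name is a placeholder):

```python
# Sketch: stacking an SD 1.5 LoRA on Hassaku with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16   # assumed local checkpoint
).to("cuda")

# "some_character_lora.safetensors" is a hypothetical LoRA file in the current directory.
pipe.load_lora_weights(".", weight_name="some_character_lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)   # optional: bake it in at 80% strength

image = pipe("masterpiece, best quality, 1girl, red eyes").images[0]
image.save("hassaku_with_lora.png")
```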

_____________________________________________________

Black result fix (VAE bug in the web UI): add --no-half-vae to your command line arguments.
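
That flag is specific to stable-diffusion-webui. If you hit the same black/NaN images in diffusers, a rough equivalent (my assumption, not the author's instruction) is to keep the VAE out of half precision and decode in float32:

```python
# Sketch: run the UNet in fp16 but decode the latents with an fp32 VAE,
# which avoids the half-precision NaNs that show up as black images.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16   # assumed local checkpoint
).to("cuda")
pipe.vae.to(torch.float32)   # VAE stays in full precision

latents = pipe("masterpiece, best quality, 1girl", output_type="latent").images
latents = latents.to(torch.float32) / pipe.vae.config.scaling_factor
decoded = pipe.vae.decode(latents).sample
image = pipe.image_processor.postprocess(decoded, output_type="pil")[0]
image.save("hassaku.png")
```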

For the example images I use an Eta noise seed delta of 31337 or 0, with a clip skip of 2. Model quality was mostly verified with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).
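
For reference, here is one way those two samplers map onto diffusers schedulers, as a minimal sketch (my mapping, assuming a recent diffusers and the torchsde package for the SDE scheduler); ENSD is a webui-only setting and is not reproduced here.

```python
# Sketch: the two samplers above expressed as diffusers schedulers.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors", torch_dtype=torch.float16   # assumed local checkpoint
).to("cuda")

# Fast option the author prefers: DDIM.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Alternative: DPM++ SDE with Karras sigmas (needs the torchsde package).
# pipe.scheduler = DPMSolverSDEScheduler.from_config(
#     pipe.scheduler.config, use_karras_sigmas=True
# )

image = pipe(
    "masterpiece, best quality, 1girl, red eyes",
    negative_prompt="low quality, worst quality",
    num_inference_steps=25,
    clip_skip=1,   # ~= webui clip skip 2, as used for the example images
).images[0]
image.save("hassaku_ddim.png")
```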

Creator: Ikena
Type: Checkpoint
Published: 2023-06-29
Base model: SD 1.5