Hassaku (hentai model)

Versions: v1.3, V1.2, V1.1, V1m, V1, V1to1.3-inpainting

Hassaku aims to be a model with a bright, clear anime style. The model focuses on NSFW images, but also places a high emphasis on good-looking SFW images. My Discord covers everything related to anime models and art. You can support me on my Patreon, and if you are interested, we also have a collaboration of multiple AI artists named Sinful Sketch Society that you can support!

My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit (old hentai model)

Supporters:

Thanks to my supporters Riyu, SETI, Jelly, Alessandro and Kodokuna on my Patreon!

You can support me on my Patreon, where you can get my other models and early access to Hassaku versions.

_____________________________________________________

Using the model:

Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {} use (), since stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt and "worst quality" and "low quality" in the negative prompt.

My negative prompt is: (low quality, worst quality:1.4), with monochrome, signature, text or logo added when needed.
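
A minimal text-to-image sketch with the diffusers library, assuming a locally downloaded checkpoint saved as hassaku.safetensors (the file name, tags, resolution and step count here are illustrative placeholders, not settings from the author). Note that the (tag:1.4) attention-weighting syntax is a stable-diffusion-webui feature, so the negative prompt below uses plain tags:

```python
# Minimal sketch: load the checkpoint file and render one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors",      # hypothetical local path to the downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Danbooru-style tags with spaces, e.g. "red eyes" rather than "red_eyes".
prompt = "masterpiece, best quality, 1girl, red eyes, bright anime style"
# (low quality, worst quality:1.4) weighting is webui syntax; plain tags here.
negative_prompt = "low quality, worst quality, monochrome, signature, text, logo"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=512, height=768,      # SD 1.5-friendly resolution
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("hassaku_sample.png")
```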

Use a clip skip of 1 or 2. Clip skip 2 is better for private parts, img2img and prompt following. Clip skip 1 is visually better, I assume because the model has more time and freedom there. I use clip skip 2.
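
In recent diffusers versions the same setting is exposed as the clip_skip argument of the pipeline call. A rough sketch, reusing the pipe object from the snippet above; diffusers counts skipped layers, so clip_skip=1 is commonly taken to correspond to the web UI's clip skip 2:

```python
# Rough sketch: clip_skip=1 uses the penultimate CLIP layer,
# which is what the web UI calls clip skip 2.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    clip_skip=1,
    num_inference_steps=28,
).images[0]
```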

Don't use face restoration, and don't use underscores: type red eyes, not red_eyes.

Don't go to really high resolutions. Every model, Hassaku included, gets lost in the vastness of big images and has a much higher chance to create, for example, a second ????.
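
If you want larger images anyway, one common workaround (a sketch of a generic two-pass approach, not something prescribed by the author) is to render at a base SD 1.5 resolution and then upscale with an img2img pass, reusing the components already loaded in pipe:

```python
# Sketch of a "hires fix"-style two-pass workflow: base render, upscale, img2img refine.
from diffusers import StableDiffusionImg2ImgPipeline

base = pipe(prompt, negative_prompt=negative_prompt, width=512, height=768).images[0]

img2img = StableDiffusionImg2ImgPipeline(**pipe.components)  # reuse the loaded weights
upscaled = base.resize((768, 1152))                          # simple 1.5x upscale

refined = img2img(
    prompt,
    negative_prompt=negative_prompt,
    image=upscaled,
    strength=0.5,   # low enough to keep the composition, high enough to add detail
).images[0]
refined.save("hassaku_hires.png")
```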

_____________________________________________________

LoRAs:

Every LoRA that is built to work on anyV3 or the OrangeMix models works on Hassaku too. Some can be found here, here, or on Civitai by lottalewds, Trauter, Your_computer, ekune or lykon.
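
A sketch of attaching such a LoRA in diffusers; the directory, file name and scale are hypothetical placeholders:

```python
# Sketch: load an SD 1.5 LoRA into the Hassaku pipeline and apply it at reduced strength.
pipe.load_lora_weights("./loras", weight_name="some_character_lora.safetensors")  # hypothetical file

image = pipe(
    "masterpiece, best quality, 1girl, red eyes",
    negative_prompt="low quality, worst quality",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
```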

_____________________________________________________

Black result fix (VAE bug in the web UI): use --no-half-vae in your command-line arguments.
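
Outside the web UI, a rough diffusers-side equivalent (a sketch, assuming you can spare the extra VRAM) is to keep the model out of half precision so the VAE cannot overflow:

```python
# Sketch: loading in float32 avoids the fp16 VAE producing NaNs / black images,
# at the cost of roughly double the VRAM.
import torch
from diffusers import StableDiffusionPipeline

pipe_fp32 = StableDiffusionPipeline.from_single_file(
    "hassaku.safetensors",      # hypothetical local path
    torch_dtype=torch.float32,
).to("cuda")
```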

I used an Eta noise seed delta of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly verified with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).
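
Eta noise seed delta (ENSD) is a stable-diffusion-webui setting with no direct diffusers equivalent, but the samplers map to schedulers. A sketch of switching them on the pipeline from the first snippet:

```python
# Sketch: swap schedulers on the already-loaded pipeline.
from diffusers import DDIMScheduler, DPMSolverSDEScheduler

# DDIM
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# DPM++ SDE Karras (needs the torchsde package installed)
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```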

Creator: Ikena

Notice
2023-06-29: Publish Model
2024-10-08: Update Model Info

Model Details
Type: Checkpoint
Publish Time: 2023-06-29
Base Model: SD 1.5

License Scope
Creative License Scope: Online Image Generation, Merge, Allow Downloads
Commercial License Scope: Sale or Commercial Use of Generated Images, Resale of Models or Their Sale After Merging