Hassaku (?????? model)

Versions: v1.3, V1.2, V1.1, V1m, V1, V1to1.3-inpainting

Tags: #character, #anime, #anime model, #base model, #nude

Hassaku aims to be a model with a bright, clear anime style. The model's focus is NSFW images, but with a high emphasis on good-looking SFW images as well. My Discord covers everything related to anime models and art. You can support me on my Patreon, and if you are interested, we also have a collaboration of multiple AI artists named Sinful Sketch Society that you can support!

My models: sudachi (flat 2D), koji (2D), yuzu (light semi-realistic), grapefruit (old ?????? model)

Supporters:

Thanks to my supporters Riyu, SETI, Jelly, Alessandro, and Kodokuna on my Patreon!

You can support me on my Patreon, where you can get my other models and early access to Hassaku versions.

_____________________________________________________

Using the model:

Use mostly Danbooru tags. No extra VAE is needed. For better prompting, use this LINK or LINK, but instead of {} use (), since stable-diffusion-webui uses () for emphasis. Use "masterpiece" and "best quality" in the positive prompt, and "worst quality" and "low quality" in the negative prompt.

My negative prompt is: (low quality, worst quality:1.4), with monochrome, signature, text, or logo added when needed.
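
As a concrete illustration, a prompt pair following this advice could look like the example below (the character tags are placeholders I chose, not the author's recommendation):

```
Positive: masterpiece, best quality, 1girl, silver hair, red eyes, upper body, looking at viewer
Negative: (low quality, worst quality:1.4), monochrome, signature, text, logo
```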

Use a clip skip of 1 or 2. Clip skip 2 is better for private parts, img2img, and prompt following. Clip skip 1 is visually better, I assume because the model has more time and freedom there. I use clip skip 2.

Don't use face restore, and don't use underscores (_): type red eyes, not red_eyes.

Don't go to really high resolutions. Every model, Hassaku included, gets lost in the vastness of large images and has a much higher chance to create, for example, a second ????.
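
For readers using the diffusers library instead of stable-diffusion-webui, a minimal sketch following the advice above (quality tags, negative prompt, clip skip, modest resolution) might look like this. The checkpoint file name is a placeholder, diffusers is not the author's tool, and its clip_skip convention is offset by one from the webui setting, so treat this as an approximation:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path to the downloaded Hassaku checkpoint (not an official file name).
pipe = StableDiffusionPipeline.from_single_file(
    "hassaku_v13.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="masterpiece, best quality, 1girl, red eyes, short hair, looking at viewer",
    # diffusers does not parse webui-style (...:1.4) emphasis, so plain tags are used here
    negative_prompt="low quality, worst quality, monochrome, signature, text, logo",
    width=512,
    height=768,                # stay close to SD 1.5's native size, as advised above
    num_inference_steps=28,
    guidance_scale=7.0,
    clip_skip=1,               # roughly webui "clip skip 2" (diffusers counts skipped layers)
).images[0]
image.save("hassaku_sample.png")
```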

_____________________________________________________

Loras:

Every LoRA that is built to work on anyV3 or the OrangeMixes works on Hassaku too. Some can be found here, here, or on Civitai by lottalewds, Trauter, Your_computer, ekune, or lykon.
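
For diffusers users, loading such a LoRA on top of Hassaku might look like the sketch below; the checkpoint and LoRA file names and the strength value are placeholders, not specific recommendations:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku_v13.safetensors", torch_dtype=torch.float16   # placeholder checkpoint path
).to("cuda")

# Placeholder LoRA file; any LoRA trained for SD 1.5 anime bases should load the same way.
pipe.load_lora_weights(".", weight_name="some_character_lora.safetensors")

image = pipe(
    "masterpiece, best quality, 1girl, short hair",
    negative_prompt="low quality, worst quality",
    cross_attention_kwargs={"scale": 0.8},   # LoRA strength, similar in spirit to <lora:...:0.8> in webui
).images[0]
image.save("hassaku_lora_sample.png")
```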

_____________________________________________________

Black result fix (VAE bug in the web UI): use --no-half-vae in your command-line arguments.
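
The --no-half-vae flag is specific to stable-diffusion-webui. For diffusers users, a rough equivalent (my assumption, not the author's instruction) is to fall back to full precision when fp16 runs return black images:

```python
import torch
from diffusers import StableDiffusionPipeline

# If fp16 generation returns black images, reload the pipeline in full precision.
# This trades VRAM for the same effect --no-half-vae aims for in the webui.
pipe = StableDiffusionPipeline.from_single_file(
    "hassaku_v13.safetensors",     # placeholder checkpoint path
    torch_dtype=torch.float32,
).to("cuda")
```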

I use an Eta noise seed delta (ENSD) of 31337 or 0, with a clip skip of 2, for the example images. Model quality was mostly verified with the DDIM and DPM++ SDE Karras samplers. I love DDIM the most (because it is the fastest).
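
For reference, in diffusers the comparable samplers are selected by swapping the pipeline's scheduler; the mapping below is an approximation on my part, and ENSD is a webui-specific setting with no direct diffusers counterpart:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "hassaku_v13.safetensors", torch_dtype=torch.float16   # placeholder checkpoint path
).to("cuda")

# DDIM, the author's favorite (fast):
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Rough counterpart of webui's "DPM++ SDE Karras" (not an exact match):
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)
```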

Creator: Ikena
Model details
Type: Checkpoint
Published: 2023-06-29
Base model: SD 1.5

Permissions
Creative use: online live streaming, merging, and downloading are allowed.
Commercial use: generated images may be sold or used for commercial purposes; resale of the model, or sale after merging, is allowed.