Details
Recommendations
Astigmatism -0.6
Astigmatism +0.6
Astigmatism -0.5b
Astigmatism +0.5b
Astigmatism +0.5
Astigmatism -0.5
Astigmatism 0.4b
Astigmatism 0.4a
Astigmatism +0.3
Astigmatism -0.2
Astigmatism 0.2
That Special Face
Astigmatic Correction 0.1
Unsettling 2.1 - Baked
Unsettling 2.0
Astigmatism (formerly 'Semantic Shift')

#Style
#adherence

17-01-2025 START

Ok, so the newest Astigmatism positive, +0.6, is here. It's really good, but as with all things, I recommend blending it with 0.5 to attenuate overfitting and truly get the best results possible. I'll look at a LoRA merge later and see if I can put together an easy package with an "optimal" Astigmatism at this stage.

Hope you all enjoy. I'm working on a really large negative for 0.6, but I need more Buzz, so it will take a little while to train. Rest assured, it is on the way, and I think it will be quite a big jump.

17-01-2025 END





---

As of 0.5b, I recommend just playing with +0.5b and/or -0.5b.

When using the negative, be sure to crank CFG up to start, as this is the main advantage it affords you.

In small amounts it can also increase creativity, but broadly, +0.5b is the powerhouse, despite having a much smaller dataset.
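To make "crank CFG up" concrete: classifier-free guidance extrapolates from the unconditional noise prediction toward the conditional one, and the guidance scale sets how far. A minimal sketch with toy numbers (pure-Python stand-ins for real UNet outputs, not the actual pipeline):

```python
def cfg_combine(uncond, cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by guidance_scale."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

# Toy noise predictions (stand-ins for the UNet's two outputs).
uncond = [0.0, 0.0, 0.0]
cond = [1.0, -1.0, 0.5]

print(cfg_combine(uncond, cond, 3.0))  # mild guidance
print(cfg_combine(uncond, cond, 9.0))  # "cranked" CFG pushes further along cond - uncond
```

In a diffusers-style pipeline, this corresponds to raising the `guidance_scale` argument of the pipeline call.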

Below is what I wrote previously for anything pre-0.5b:
---------------------
I recommend the following mix for anyone starting out (I will release some sort of merged LoRA sometime in the next week that will require less VRAM than loading four LoRAs, lol):

Astigmatism +0.5
Astigmatism -0.5
Astigmatism +0.4b
Astigmatism -0.2

The +'s at 0.33 each
The -'s at -0.33 each
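Conceptually, a mix like this just adds a weighted sum of each LoRA's weight delta to the base model's weights. A minimal numeric sketch of that blend (toy 1-D vectors and hypothetical adapter names, not the real weight matrices):

```python
# Toy base weight vector and per-LoRA delta vectors (stand-ins for
# real weight matrices; the adapter names are hypothetical).
base = [1.0, 1.0, 1.0, 1.0]
deltas = {
    "astigmatism_plus_0.5":  [0.3, 0.0, -0.3, 0.0],
    "astigmatism_minus_0.5": [0.0, 0.3, 0.0, -0.3],
    "astigmatism_plus_0.4b": [0.3, -0.3, 0.0, 0.0],
    "astigmatism_minus_0.2": [0.0, 0.0, 0.3, 0.3],
}
# The recommended mix: positives at +0.33, negatives at -0.33.
scales = {
    "astigmatism_plus_0.5":  0.33,
    "astigmatism_minus_0.5": -0.33,
    "astigmatism_plus_0.4b": 0.33,
    "astigmatism_minus_0.2": -0.33,
}

def blend(base, deltas, scales):
    """Effective weights = base + sum over LoRAs of scale * delta."""
    out = list(base)
    for name, delta in deltas.items():
        s = scales[name]
        out = [o + s * d for o, d in zip(out, delta)]
    return out

print(blend(base, deltas, scales))
```

In a diffusers-style workflow, the equivalent is loading each file with `load_lora_weights(..., adapter_name=...)` and then something like `pipe.set_adapters([...], adapter_weights=[0.33, -0.33, 0.33, -0.33])`; the negative weights subtract that LoRA's delta.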

This has to do with overfitting in the training process, and with errors on my part. Rather than address those errors directly (which I cannot do with limited resources, as it would require many iterations of the LoRAs that I cannot afford, in order to test and find the optimal setups), blending mitigates overfitting and generally improves performance, as you can see from the plethora of merged checkpoints on Civitai, including ones that simply merge newer versions of a model into the older version.

Basically, an older version may "understand" something better than a newer version, and vice versa, but as long as your versions are MOSTLY improved, the merge process will over time lead to the model becoming a better generalizer. This particular LoRA, which directly targets the generalization and capabilities of the model, is no exception.

Love yall, and this community.


If anyone who has the resources wants to collaborate on further training, please contact me. I have had a great deal of success in improving prompt adherence, and I suspect this can be massively expanded with a solid community effort.


Carefully examine the weights used to learn how to adjust this LoRA. Think of it like adjusting the focus of a lens you are looking through. Every prompt and checkpoint combination will have different needs, but ultimately, most of them can be dialed in such that adherence begins to work within a range where it wasn't working previously.

I suppose I will have to do a video on the "why" behind this soon, as my ADHD and time constraints put writing it up, the way I want to, beyond my current capacity. But a video I probably can do, although it will be... chaotic.




This model is based on work I did on my "Unsettling" LoRA. It uses some of the images generated there, along with subsequent images made with the LoRA progeny of those, as well as the techniques I experimented with.

Basically, the goal of this LoRA is to "semantically shift" SDXL such that terms with a set meaning are entirely changed in an internally consistent manner. I used a technique to do this partially in the Unsettling LoRA (although it was overtrained), and became intrigued by the idea that "good" prompts remain "good," albeit on a different axis, even if a model's internal understanding of them "shifts." In other words: a unique and interesting prompt can create unique and interesting images across multiple new themes if you play with the brain of the model in a directed way.

How did I do this?

I found areas of overtraining within SDXL (Mona Lisa, Pillars of Creation, etc.) and targeted them, redirecting them to new images. As I suspected, this had ripple effects on the way the entire model perceives the concepts connected to the modified images, and these effects are quite substantial.


UPDATE

Since this started, the purpose of this LoRA has changed substantially: it is now basically about improving SDXL's overall prompt adherence and win rate, using very small training datasets that target the areas of overfitting in the model and teach it to generalize them.

A side effect of this is that it is a lot easier to produce images at arbitrary resolutions.



Announcements
2024-04-01
Model posted
2025-01-19
Model information updated
Model details
Type
LORA
Publication time
2025-01-19
Base model
SDXL 1.0
Version introduction

Expanded the training dataset by 50%, and used several training methods, with a preference-based merge as the final LoRA.

License Scope
Model Source: civitai

1. The rights to reposted models belong to their original creators.

2. Original creators who wish to claim a model should contact SeaArt AI staff through official channels.

Creation License Scope
Online image generation
Allow merging
Allow downloads
Commercial license
Generated images may be sold or used for commercial purposes
The model may be resold or sold after merging