Version 1 was trained on 512x512 images that were scaled down from HUGE source images, but garbage in, garbage out: the model lacked detail. Version 2 is an all-new model trained on 1024x1024 chunks of the original images instead. Most of the dataset images were HUGE (over 5000 pixels per side, and some over 8000!), so I cut them up into bite (byte?)-size chunks so no detail was lost, and I trained at the largest resolution I could (1024x1024). This was a huge improvement over version 1, which used scaled-down versions; the details are much crisper now.
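For anyone curious, here's a minimal sketch of that chunking step in Python, assuming Pillow is installed. The tile size matches the 1024x1024 mentioned above, but the function name, file naming, and output layout are illustrative, not my exact pipeline:

```python
from pathlib import Path
from PIL import Image

TILE = 1024  # chunk size in pixels, matching the training resolution

def tile_image(path, out_dir):
    """Cut a large image into non-overlapping TILE x TILE chunks.

    Leftover edge pixels that don't fill a full tile are dropped.
    """
    img = Image.open(path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    w, h = img.size
    stem = Path(path).stem
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            chunk = img.crop((left, top, left + TILE, top + TILE))
            chunk.save(out / f"{stem}_{left}_{top}.png")

# Example: tile_image("big_source.png", "chunks/")
```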






