Model is (save as float16, but only at the last step; for the same results — float32 sometimes gives dissimilar results):
[Add Difference] Stable Diffusion 1.5 + (Anything 3.0 - Stable Diffusion 1.4) [M=1] -> mergeStep1
[Add Difference] mergeStep1 + (Zeipher_F222 - Stable Diffusion 1.5) [M=1] -> mergeStep2
[Weighted Sum] mergeStep2 + TrinArt2 [M=0.25] -> Final Model
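For anyone wondering what the two merge modes actually compute: a minimal sketch of the per-weight math, using toy lists instead of real checkpoint tensors (the model names in the comments are just stand-ins, not actual loaded weights):

```python
def add_difference(a, b, c, m=1.0):
    # Add Difference: A + M * (B - C), element-wise per weight.
    # Injects whatever B learned relative to C into A.
    return [x + m * (y - z) for x, y, z in zip(a, b, c)]

def weighted_sum(a, b, m=0.25):
    # Weighted Sum: (1 - M) * A + M * B, plain linear interpolation.
    return [(1.0 - m) * x + m * y for x, y in zip(a, b)]

# Toy one-tensor "checkpoints" standing in for the real models
sd15, anything3, sd14 = [1.0, 2.0], [1.5, 2.5], [0.5, 1.5]
step1 = add_difference(sd15, anything3, sd14, m=1.0)  # -> [2.0, 3.0]
```

With M=1 the full difference is added, which is why the subtracted base model matters so much: subtract the wrong base and the leftover delta between the two bases gets baked into the merge too.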
The model you have is really good. I tried to make it myself using the instructions in the model name, but I don't get the same hash; it does produce similar results, though. Was wondering if you can give some more info on it 🤔
Maybe I did mess up somewhere, since you're not the first person getting a different hash. If you want, some anons in the thread wanted it, so it's on mega here: `file/nZtz0LZL#ExSHp7icsZedxOH_yRUOKAliPGfKRsWiOYHqULZy9Yo`
Legend. Thanks :).
I dunno if saving it as float16 changes the hash, but probably? So that may be what it is. I assume that's why it's only 2GB vs most of my other models, which are 3.9.
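It does: casting weights to half precision halves the stored bytes, so any hash computed over those bytes changes too. A toy sanity check with the stdlib (this is not the web UI's actual hashing scheme, just an illustration of the principle):

```python
import hashlib
import struct

# 1000 toy weights in [-1, 1), packed as float32 and as float16
weights = [i / 500.0 - 1.0 for i in range(1000)]
bytes32 = struct.pack(f"<{len(weights)}f", *weights)  # 4 bytes per weight
bytes16 = struct.pack(f"<{len(weights)}e", *weights)  # 2 bytes per weight

print(len(bytes32), len(bytes16))  # 4000 2000 -> half the file size
print(hashlib.sha256(bytes32).hexdigest()[:8])
print(hashlib.sha256(bytes16).hexdigest()[:8])  # different bytes, different hash
```

Same idea at checkpoint scale: ~3.9GB of float32 weights becomes ~2GB in float16, and the hash shown in the UI no longer matches the float32 original.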
It's Berry's mix with F222 instead of F111. Edit: At least I guess so, same models included. Edit 2: Just tested it to be sure, and it seems I am correct: result "BerryMix14.ckpt [2e41a25c]". Worth noting that F222 is actually based on SD 1.5, so subtracting that in the "NAI + (F111 - SD1.4)" step instead of SD 1.4 would probably lead to better (but different) results.
This. I think there was a point where I fucked up and made one of the steps with clip skip 2. Don't know if it makes any difference, but since it works fine, I didn't bother to fix it.
That's it: just a Berrymix with SD 1.4 and F222. Follow the steps normally, just swap F111 for F222.
I'm having a hard time recreating this prompt, do you generate at 1024x1920 or upscale there? Any other ideas for settings or variables I might be missing?
I'm generating at 512x960 and then upscaling. I used CLIP skip 1, no hypernetwork, and latent upscaling. Also check that you don't have "Use old emphasis implementation" selected. Nothing else comes to mind. I checked the prompt just in case, and it gives me the exact same image.