I'm using the same model, the hash matches, everything is the same, but the drawing style and quality are very different; the ones I get are more realistic. I don't understand it. You didn't do the AbyssOrangeMix2_hard merge, did you?
You have to use LoRAs like pronebone and zankuro_e6 to get these; I also used a Hinata Hyuuga LoRA.
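For anyone unfamiliar with how that works: in the AUTOMATIC1111 webui (with the built-in LoRA support) you call a LoRA from the prompt with <lora:filename:weight>, where the filename matches the file you dropped into models/Lora. Something like <lora:zankuro_e6:0.6> <lora:hinata_hyuuga:0.8> added to the prompt; the exact filenames and weights here are just placeholders, match them to whatever you downloaded and tune to taste.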
Judging by the desaturated colors you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.
If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.
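A couple of practical notes in case it helps: in the AUTOMATIC1111 webui you can make that flag permanent by adding it to the COMMANDLINE_ARGS line in webui-user.bat (or webui-user.sh), e.g. set COMMANDLINE_ARGS=--no-half-vae, and the VAE itself is selected under Settings > Stable Diffusion > SD VAE. If you ever script generation outside the webui with the diffusers library instead, attaching an external VAE looks roughly like the sketch below. The local checkpoint filename, the VAE repo and the prompt are only examples, swap in whatever you actually use:

import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# Example local checkpoint file (any SD 1.x .safetensors should work here).
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors", torch_dtype=torch.float16
)

# Swap in an external VAE (example repo; point this at the anime VAE you prefer).
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

pipe.to("cuda")  # assumes an NVIDIA GPU is available
image = pipe("masterpiece, best quality, 1girl, cherry blossoms").images[0]
image.save("out.png")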
This image was actually generated by EyeAI; that person only reduced the quality and changed the position she is sitting in. EyeAI talked about this on Twitter too.
The model is a mix of Latte and OrangeCocoaMix, 50/50 at weighted sum 0.5. There is also a third model in the mix, but it's just for generating some degenerate stuff; I'm pretty sure it's not needed for these visuals.
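For anyone curious what the checkpoint merger is doing under the hood: "weighted sum" is just a per-tensor linear blend of the two state dicts, A*(1-M) + B*M. Here's a rough standalone sketch of the same idea; the filenames are made up, and it assumes both checkpoints are .safetensors with matching keys:

import torch
from safetensors.torch import load_file, save_file

ALPHA = 0.5  # weighted-sum multiplier: 0.0 = pure model A, 1.0 = pure model B

# Hypothetical filenames -- point these at your actual checkpoints.
a = load_file("Latte.safetensors")
b = load_file("OrangeCocoaMix.safetensors")

merged = {}
for key, tensor_a in a.items():
    if key in b and b[key].shape == tensor_a.shape:
        # Linear interpolation per tensor, done in float32 to limit fp16 rounding error.
        blended = (1.0 - ALPHA) * tensor_a.float() + ALPHA * b[key].float()
        merged[key] = blended.to(tensor_a.dtype)
    else:
        # Keys missing from B (or with mismatched shapes) are carried over from A.
        merged[key] = tensor_a

save_file(merged, "Latte_OrangeCocoa_0.5.safetensors")

The webui version handles a few more details (key filtering, the add-difference mode), but the core of a 50/50 mix is essentially this.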
If I were walking through a forest and came across this scene, I would probably think they were going to eat my organs and offer my body to an Eldritch god.
When you mention denoising strength, is that referring to the upscale setting or something else?
It's the one in highres fix. I'm over 500 commits behind on the webui so it might work differently now. (New versions changed the api which broke my autoprompt scripts and I'm too lazy to fix it.)
Ah, thanks - I'm not even too sure what that setting does and haven't used it 👌
It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7 I think) to 0.6 to reduce the amount of mouths and nipples popping in random places (might be placebo though).
Good to know, thanks. Seems to just be built into the generation, I suppose.
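For anyone who ends up scripting this: if the webui is launched with the --api flag, the txt2img endpoint accepts the highres fix fields directly, and denoising_strength there is the same knob being discussed above. Field names have moved around between webui versions, so check the /docs page on your own install; the values below are just examples, not a recommendation:

import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # default local address

payload = {
    "prompt": "masterpiece, best quality, 1girl, forest shrine, lanterns",
    "negative_prompt": "lowres, bad anatomy",
    "steps": 28,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    # Highres fix settings; this denoising_strength is the one from the discussion above.
    "enable_hr": True,
    "hr_scale": 2,
    "denoising_strength": 0.6,
}

r = requests.post(URL, json=payload)
r.raise_for_status()

# The API returns the generated images as base64-encoded strings.
for i, img_b64 in enumerate(r.json()["images"]):
    with open(f"hires_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))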
bro used 80 different models to generate this masterpiece
I was experimenting with a mass 'anime styled' model mix. My prompt at the time had a lot of weight on specific things, which shifted how the model responded and got me looking at how it handles certain subjects (architecture, food, etc.). I've done a few tests since with the random merge, and this is one of the results 👌
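('Weight' here meaning the webui's attention syntax, e.g. (ornate shrine:1.3) to push a token harder, [token] or (token:0.8) to pull it back, with each plain ( ) pair being roughly a 1.1x bump. The tokens and values are just examples, not from the actual prompt.)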