I'm using the same model (the hash matches and all my other settings are identical), but the drawing style and quality are very different; my results come out more realistic. I don't understand. You didn't do the AbyssOrangeMix2_hard merge, did you?
You have to use LoRAs like pronebone and zankuro_e6 to get results like these; I also used a Hinata Hyuuga LoRA.
Judging by the desaturated colors, you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it; it will make the colors a lot more vibrant.
If you choose to use the VAE from the Anything model, you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.
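For reference, here's roughly how that looks in the AUTOMATIC1111 WebUI launcher config. This is a sketch, not the only way to do it: the flag names assume a recent WebUI version, and the VAE path is a placeholder you'd replace with wherever you put the downloaded file.

```shell
# webui-user.sh (Linux/macOS; on Windows, set COMMANDLINE_ARGS in webui-user.bat instead)
# --no-half-vae keeps the VAE in full precision to avoid the occasional black image,
# --vae-path points the WebUI at the VAE file (path here is just an example).
export COMMANDLINE_ARGS="--no-half-vae --vae-path models/VAE/anything.vae.pt"
./webui.sh
```

You can also pick the VAE from the WebUI settings (Settings > Stable Diffusion > SD VAE) instead of passing a path on the command line.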
This image was actually generated by EyeAI; that person only reduced the quality and changed the position she is sitting in. EyeAI talked about this on Twitter too.
The model is a mix of Latte and OrangeCocoaMix, 50/50 (weighted sum at 0.5). There is also a third model in the mix, but it's just for generating some degenerate stuff; I'm pretty sure it's not needed for visuals like these.
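In case anyone wants to reproduce a weighted-sum merge outside the WebUI's checkpoint merger tab, the math is just a per-parameter linear blend: out = (1 - alpha) * A + alpha * B, with alpha = 0.5 giving the 50/50 mix above. A minimal sketch (real checkpoints hold torch tensors keyed by layer name; plain floats are used here so the idea is easy to see, and the function name is mine, not a WebUI API):

```python
def weighted_sum_merge(sd_a, sd_b, alpha=0.5):
    """Blend two model state dicts: out[k] = (1 - alpha) * A[k] + alpha * B[k].

    Only keys present in both checkpoints are merged, which is how
    mismatched extra keys are typically skipped. alpha = 0.0 returns A,
    alpha = 1.0 returns B, alpha = 0.5 is an even 50/50 mix.
    """
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k]
            for k in sd_a if k in sd_b}

# Toy example with scalar "weights" standing in for tensors:
merged = weighted_sum_merge({"w": 0.0, "extra": 9.0}, {"w": 1.0}, alpha=0.5)
# merged == {"w": 0.5}
```

With actual checkpoints you'd load the two state dicts (e.g. via torch.load or safetensors), run the same dict comprehension over the tensors, and save the result back out.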
If I were walking through a forest and came across this scene, I would probably think they were going to eat my organs and offer my body to an Eldritch god.