AIBooru

Comments

Saskweise said:

I am using AbyssOrangeMix2_hard

I use the same model, the hash is identical, everything is the same, but the drawing style and quality are very different; my outputs come out much more realistic.
I don't understand. Did you make your own AbyssOrangeMix2_hard merge?

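An aside on "the hash is the same": comparing full SHA-256 digests of the two files rules out any file-level difference, so if the digests match, the gap has to come from settings, LoRAs, or the VAE. A minimal sketch; the path is a placeholder.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the file so multi-GB checkpoints don't need to fit in RAM.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder path; run this on both copies and compare the output.
    print(sha256_of("models/AbyssOrangeMix2_hard.safetensors"))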

    Kukar said:

    I use the same model, the hash is identical, everything is the same, but the drawing style and quality are very different; my outputs come out much more realistic.
    I don't understand. Did you make your own AbyssOrangeMix2_hard merge?

    You have to use LoRAs like pronebone and zankuro_e6 to get these. I also used a Hinata Hyuuga LoRA.

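The LoRA stacking described above can be reproduced outside the WebUI as well. A rough sketch using the diffusers library, assuming it is installed with PEFT support; the checkpoint and LoRA file names are placeholders (in the WebUI itself this is the <lora:name:weight> prompt syntax).

    import torch
    from diffusers import StableDiffusionPipeline

    # Placeholder checkpoint path.
    pipe = StableDiffusionPipeline.from_single_file(
        "models/AbyssOrangeMix2_hard.safetensors", torch_dtype=torch.float16
    ).to("cuda")

    # Stack several LoRAs and weight them individually; file names are placeholders.
    pipe.load_lora_weights("loras", weight_name="zankuro_e6.safetensors", adapter_name="zankuro")
    pipe.load_lora_weights("loras", weight_name="pronebone.safetensors", adapter_name="pronebone")
    pipe.set_adapters(["zankuro", "pronebone"], adapter_weights=[0.8, 0.6])

    image = pipe("1girl, hyuuga hinata, masterpiece", num_inference_steps=28).images[0]
    image.save("lora_test.png")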

    sheev_the_senate said:

    Judging by the desaturated colors, you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.

    If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.

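For reference, the same fix outside the WebUI: load a standalone VAE and hand it to the pipeline. A hedged sketch with diffusers; the checkpoint path is a placeholder, while sd-vae-ft-mse is a real, commonly used VAE.

    import torch
    from diffusers import AutoencoderKL, StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        "models/AbyssOrangeMix2_hard.safetensors",  # placeholder path
        torch_dtype=torch.float16,
    ).to("cuda")

    # Swap in a standalone VAE for less washed-out colors.
    pipe.vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("vibrant anime landscape", num_inference_steps=28).images[0]
    image.save("with_vae.png")

The --no-half-vae flag addresses a separate problem: some VAEs (Anything's included, per the comment above) overflow in half precision and decode to black frames, and the flag keeps just the VAE in float32 while the rest of the model stays in fp16. The rough diffusers analogue is loading the AutoencoderKL with torch_dtype=torch.float32.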

    sheev_the_senate said:

    Judging by the desaturated colors, you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.

    If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.

    thx thx thx


    Ocean3 said:

    When you mention denoising strength, is that referring to the upscale setting or something else?

    It's the one in highres fix. I'm over 500 commits behind on the webui, so it might work differently now. (New versions changed the API, which broke my autoprompt scripts, and I'm too lazy to fix them.)


    antlers_anon said:

    It's the one in highres fix. I'm over 500 commits behind on the webui, so it might work differently now. (New versions changed the API, which broke my autoprompt scripts, and I'm too lazy to fix them.)

    Ah, thanks. I'm not even sure what that setting does and haven't used it 👌


    Ocean3 said:

    Ah, thanks. I'm not even sure what that setting does and haven't used it 👌

    It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7, I think) to 0.6 to reduce the number of mouths and nipples popping up in random places (might be placebo, though).

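To unpack the setting being discussed: highres fix is essentially txt2img at a low resolution, an upscale, then an img2img pass, and the denoising strength controls how much that second pass is allowed to repaint. A rough sketch of the same two-pass idea in diffusers; the checkpoint path is a placeholder.

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline

    txt2img = StableDiffusionPipeline.from_single_file(
        "models/AbyssOrangeMix2_hard.safetensors", torch_dtype=torch.float16  # placeholder
    ).to("cuda")

    prompt = "1girl, detailed background"

    # Pass 1: generate at the model's native resolution.
    low = txt2img(prompt, width=512, height=512).images[0]

    # Pass 2: upscale, then repaint with img2img. strength plays the role of the
    # highres-fix denoising strength: higher values repaint more aggressively
    # (stray mouths and anatomy), lower values like 0.6 stay closer to pass 1.
    img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")
    final = img2img(prompt, image=low.resize((1024, 1024)), strength=0.6).images[0]
    final.save("hires_fix.png")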

    antlers_anon said:

    It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7, I think) to 0.6 to reduce the number of mouths and nipples popping up in random places (might be placebo, though).

    Good to know, thanks. It seems to just be built into the generation, I suppose.


    ForeskinThief said:

    bro used 80 different models to generate this masterpiece

    I was experimenting with a mass 'anime-styled' model mix. My prompt at the time put a lot of weight on specific things, which shifted how the model responded and made me notice how it handles certain subjects (architecture, food, etc.). I've done a few tests since with the random merge, and this is one of the results 👌

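For the curious, a "model mix" is typically just a weighted average of checkpoint tensors, which is what the WebUI's checkpoint merger does pairwise; chaining it across many models gives the kind of mass merge described above. A minimal two-model sketch; the paths and the ratio are placeholders.

    import torch
    from safetensors.torch import load_file, save_file

    a = load_file("models/model_a.safetensors")  # placeholder paths
    b = load_file("models/model_b.safetensors")
    alpha = 0.5  # interpolation ratio: 0.0 = pure A, 1.0 = pure B

    # Weighted-sum merge over the tensors the two checkpoints share.
    merged = {
        k: ((1.0 - alpha) * a[k].float() + alpha * b[k].float()).half()
        for k in a.keys() & b.keys()
    }
    save_file(merged, "models/merged.safetensors")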

    sheev_the_senate said:

    Judging by the filename, the prompt was something like "peaceful landscape cinematic early morning flat grassy".

    I'm guessing grassy field or something. I'm going to stop being lazy with picture links from now on and download with metadata instead.

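On downloading with metadata: the WebUI embeds the full generation parameters (prompt, seed, sampler, and so on) in a PNG text chunk, so they can be read back from an original file. A small sketch with Pillow; the path is a placeholder, and sites that re-encode images strip this chunk, which is likely why only the filename was left to go on here.

    from PIL import Image

    img = Image.open("downloads/original.png")  # placeholder path
    # AUTOMATIC1111's WebUI stores generation settings under the "parameters" key.
    print(img.info.get("parameters", "no embedded metadata (file was re-encoded?)"))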