It is possible that this illustration was created using img2img. AIs like NAI have the option to insert an image (a sketch or not) and then the AI redraws over it. It is very useful for generating new faces, or when you want a very specific pose.
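For anyone who wants to try the same idea outside of NAI, here is a rough img2img sketch using the diffusers library. The checkpoint name, prompt, and settings are placeholders, not what was actually used for this image:

```python
# Minimal img2img sketch (diffusers); checkpoint/prompt/strength are assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

# strength controls how much of the sketch survives: low values keep the
# original pose/composition, high values let the model repaint more freely.
result = pipe(
    prompt="1girl, detailed face, soft lighting",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("redrawn.png")
```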
Your new LoRA is amazing, it didn't even change my style! Thanks again!
Hey, I'm going crazy trying to get something close to your image. Even using the data you provide, I can't get anything similar. I'm also using zankuro_e6, concept_pronebone and pyramithrapneuma, but no matter what values I assign, my images look nothing like yours. Can you give me some clues?
Also, thanks for your amazing work!
Hi. I am not using zankuro_e6 in this image. Maybe that is why you cannot get something similar. Are you using AOM2?
I prefer to use the UI-driven Stable Diffusion; the online ones are often way too limiting for me. I do appreciate the recommendation, though.
Any idea where I can find the furry version of NovelAI?
You can find it on Novelai.net. I have a paid subscription, but I think it's worth it. On the image generation screen there should be an option to choose between three different diffusion options. All you have to do is choose the furry model and you're in business 👍 Feel free to use my images to get started
Is this a merged model? And if so, which models did you use?
This is purely the NovelAI furry model. I used PLMS sampling, which produces some damn good results after you tweak the settings. I enhanced the raw output and this is what I got (I'm pretty new at this, so hopefully that made sense lol)
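If you are reproducing this in diffusers rather than the webui, switching samplers looks roughly like the sketch below; "PLMS" roughly corresponds to PNDMScheduler there, and the checkpoint is a placeholder, not the actual NovelAI furry model:

```python
# Hedged sketch of swapping the sampler in diffusers.
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap the scheduler while keeping the rest of its config (timesteps, betas, ...).
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)

image = pipe("anthro fox, forest, detailed fur", num_inference_steps=50).images[0]
image.save("plms_test.png")
```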
Really appreciate it, you're my god! Can't wait to see your next work.
Fantastic imagery. What is the model, exactly? I've never heard of AnyDream.
It's just a mix I made myself. Click the AnyDream tag to get info on how to make and use it. This image in particular was a variant using a 50/50 mix of Dreamlike and SimpMaker3k1.
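For anyone curious what a 50/50 weighted-sum merge actually does, here is a rough sketch of the underlying math (the webui's Checkpoint Merger tab automates this); the file names are placeholders, not the author's actual files:

```python
# Minimal weighted-sum checkpoint merge sketch; filenames are assumptions.
import torch
from safetensors.torch import load_file, save_file

alpha = 0.5  # 0.5 = equal blend, as in the 50/50 mix above
a = load_file("dreamlike.safetensors")
b = load_file("simpmaker3k1.safetensors")

merged = {}
for key, tensor_a in a.items():
    if key in b and b[key].shape == tensor_a.shape:
        # Weighted sum: (1 - alpha) * A + alpha * B, applied per tensor.
        merged[key] = (1 - alpha) * tensor_a + alpha * b[key]
    else:
        merged[key] = tensor_a  # keep A's weights where the models differ

save_file(merged, "anydream_50_50.safetensors")
```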
The best "from behind" I've ever seen. Also, I'm curious about the second LoRA, "epoch-000016", that you used. What is it used for?
It's a futa/newhalf LoRA I trained. Unfortunately, it was pretty overfitted and couldn't do much aside from "from behind" tags, so I got rid of it. I trained a new LoRA that is overall more consistent with other poses as well, and can still do "from behind" tags. Here's the folder: https://mega.nz/folder/XMgjVDja#hSYAT_EsNSokys7oAIKm2A
I use the model, the hash is the same, everything is the same, but the drawing style and quality are very different; the images I get are more realistic and I don't understand why. You didn't do the AbyssOrangeMix2_hard merge yourself, did you?
You have to use LoRAs like pronebone and zankuro_e6 in order to get these; I also used a Hinata Hyuuga LoRA.
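Outside the webui, stacking LoRAs looks roughly like this in diffusers (needs a recent build with PEFT installed). The paths and weight values below are placeholders standing in for LoRAs like pronebone or zankuro_e6, not links to the actual files:

```python
# Hedged sketch of loading and weighting two LoRAs with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each call loads one LoRA; adapter names let you control them separately.
pipe.load_lora_weights("loras", weight_name="pronebone.safetensors", adapter_name="pose")
pipe.load_lora_weights("loras", weight_name="style.safetensors", adapter_name="style")
pipe.set_adapters(["pose", "style"], adapter_weights=[0.8, 0.6])

image = pipe("1girl, prone bone, from behind", num_inference_steps=28).images[0]
image.save("lora_test.png")
```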
Judging by the desaturated colors you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.
If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.
This image was actually generated by EyeAI; that person only reduced the quality and changed the position she is sitting in. EyeAI talked about this on Twitter too.
The model is a mix of Latte and OrangeCocoaMix, 50/50 at weighted sum 0.5. There is also a third model in the mix, but it's just for generating some degenerate stuff; I'm pretty sure it's not needed for visuals like this.
If I were walking through a forest and came across this scene, I would probably think they were going to eat my organs and offer my body to an Eldritch god.
It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7 I think) to 0.6 to reduce the amount of mouths and nipples popping in random places (might be placebo though).
Good to know, thanks. Seems to just be built into the generation, I suppose.
Ah, thanks - I'm not even too sure what that setting does and haven't used it 👌
bro used 80 different models to generate this masterpiece
Was experimenting with a mass "anime styled" model mix. I noticed my prompt had a lot of weight on specific things at the time, which shifted how the model responded and made me notice how it handles certain things (architecture, food, etc.). I've done a few tests since with the random merge and this is one of the results 👌
When you mention denoising strength, is that referring to the upscale setting or something else?
It's the one in highres fix. I'm over 500 commits behind on the webui, so it might work differently now. (New versions changed the API, which broke my autoprompt scripts, and I'm too lazy to fix it.)
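For context, this is roughly what the highres-fix pass does: a low-res txt2img pass, an upscale, then an img2img pass where the denoising strength decides how much the second pass may repaint. A hedged diffusers approximation (checkpoint, prompt, and the naive resize are all placeholders, the webui uses latent or GAN upscalers instead):

```python
# Rough two-pass "highres fix" sketch; not the webui's exact implementation.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
low_res = base("cozy ramen shop, night, rain", height=512, width=512).images[0]

# Naive upscale of the first-pass image.
upscaled = low_res.resize((1024, 1024))

# Reuse the same components for the refinement pass.
refine = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
final = refine(
    prompt="cozy ramen shop, night, rain",
    negative_prompt="extra mouths, extra nipples, lowres",
    image=upscaled,
    strength=0.6,  # the "denoising strength" discussed above (default ~0.7)
).images[0]
final.save("hires_fix_sketch.png")
```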
@mellohi I fixed your tags, it seems like you've pasted the prompt into the tag field by accident. Please add self_upload to the tags if you generated this image. Also, you'll get better colors if you add a custom .vae (looks like you're missing one, but I might be wrong).
Judging by the desaturated colors you don't have a VAE (variational autoencoder) loaded. I've had the same issue in the past too. You should download one of those and try generating with it selected - the colors look much better that way. vae-ft-mse-840000-ema-pruned is a decent choice, but for anime style images the one from Anything-v3 or NovelAI would probably work better. If you choose either of those two latter ones you'll need to launch automatic-1111 with the --no-half-vae argument to avoid occasional completely black images.
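If you are using diffusers instead of the webui, swapping in a standalone VAE looks roughly like this; the checkpoint and prompt are placeholders, and the repo id below hosts the vae-ft-mse-840000-ema-pruned weights mentioned above:

```python
# Hedged sketch of using a separate VAE with diffusers.
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# Load the ft-MSE VAE separately and hand it to the pipeline.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# If you get occasional black images, load the VAE with torch_dtype=torch.float32
# instead -- that is roughly what the webui's --no-half-vae flag accomplishes.
image = pipe("1girl, vibrant colors, detailed background").images[0]
image.save("with_custom_vae.png")
```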
Thank you for sharing your experience! Now I know how it works 🙂
Hey, this is awesome man! I'd like to try to make some too. Do you have a tutorial or anything that can teach me how? I could send it to you after.
Hi mate, thanks :) I don't really know; you should go on the Unstable Diffusion Discord and start talking with people, that's how I learned. I'm basically doing img2img batches with a base video I'm also making. You can PM me on Discord: Sambalek#8026, or TikTok: @Proteinique
Model used: 70% AbyssOrangeMix2_hard, 30% Bastard_v2_LiveAction
Is there any link to this Bastard model? Or any suggestions on what tag name would this model get? For the other model tag, you can tag your posts with abyss_orange_mix.
This is the oldest post I could find of it. They say they got it from a discord server which, if true, would make the genuine source difficult to track down.
It's a bit strange; it's as if Cream's head was put on the body of a different character. That aside, I don't see why this image would be considered low quality, so it's approved.
Oh come on, this was made before NovelAI went public, and long before better alternatives to NovelAI were released. It's normal that the hands are not that good.
The interesting part is that this picture is pretty because I set up Stable Diffusion wrong.
The desaturation was partially caused by "auto" in the VAE settings. After I set it to nai.vae, it became more saturated (still good though: https://files.catbox.moe/850cnu.png ). You can try disabling the VAE (set it to "auto" or "none" in the settings) for more desaturation.
More negprompts to the god of negprompts! Also, I didn't know you could just describe the image; I thought you needed to just list tags and hope the model would understand what you want.
The model used was a 50% merge of Protogen x3.4 and Anything-3.0. Believe it or not, the "aroused" and "horny" keywords are there for the facial expression. "Dreamlikeart" was included by mistake when I reused an old prompt from a different model (the previous model I used was a 50% merge of Dreamlike and Anything-v3).
Just to be clear: no, it's not Kagome from Inuyasha, and if you watched the show you'd know. Stable Diffusion is not great at re-creating established characters (which indeed sucks).