Stable Diffusion (AutismMix SDXL + a lot of LoRAs + many hours of inpainting) + Post-processing
As we all know, getting AI to draw hands can be very challenging, especially when those hands are holding an object. And since I can't draw, all this time I had to use "Inpaint Sketch" to at least somehow show the AI what kind of hands I needed. The problem is that sometimes this method can take several hours, and the result is... well... yeah :/
So, I've been wanting to try something for quite some time: what if I just use 3D hands and then blend them into the picture with "inpaint"? Well, I finally tried it, and as expected, this method turned out to be much faster and more precise. I suppose any sane person in my place would use Blender or something like that, but for now I'm too lazy to learn Blender, so I just used Unity (because I'm a Unity developer).
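If anyone wants to do the same blending step in code instead of a web UI, here's a rough sketch of the idea with the diffusers library: composite the transparent hand render from Unity over the picture, build a mask from its alpha channel, and inpaint that region at moderate strength so the pose survives but the style matches the rest of the image. All file names, the checkpoint path, the prompt, and the strength value below are just placeholders, not my actual settings.

```python
# Rough sketch of the "paste 3D hands, then blend with inpaint" idea using the
# diffusers library. Paths, prompt, and strength are placeholders, not real settings.
import torch
from PIL import Image, ImageFilter
from diffusers import StableDiffusionXLInpaintPipeline

# 1) Composite the hand render (exported from Unity as a transparent PNG at the
#    same resolution as the picture) on top of the generated image.
base = Image.open("picture.png").convert("RGBA")
hands = Image.open("unity_hands_render.png").convert("RGBA")
base.alpha_composite(hands)
composite = base.convert("RGB")

# 2) Build the inpaint mask from the hand render's alpha channel, dilated a bit
#    so the seams around the hands get repainted too.
mask = hands.split()[-1].point(lambda a: 255 if a > 0 else 0)
mask = mask.filter(ImageFilter.MaxFilter(15))

# 3) Inpaint the masked region at moderate strength: low enough that the model
#    keeps the pose of the 3D hands, high enough that it redraws them in the
#    picture's style.
pipe = StableDiffusionXLInpaintPipeline.from_single_file(
    "autismmix_sdxl.safetensors",  # placeholder path to an SDXL checkpoint
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="detailed hands holding an object",  # placeholder prompt
    image=composite,
    mask_image=mask,
    strength=0.5,               # placeholder; tune to taste
    num_inference_steps=30,
).images[0]
result.save("blended.png")
```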
Time for a new haircut :)
Workflow: https://mega.nz/folder/3zZnTDaA#2Q2P833EUOwi0cUsLYwJ-g