
Prompt: [masterpiece, best quality::0.6], uncensored, simple background, 1girl, solo, heterochromia, blue eyes, pink eyes, white pupils, white hair, ([blue hair | light blue hair]:0.75), (multicolored hair:0.75), messy hair, long hair, bangs, hair between eyes, ahoge, [dog ears | fox ears], dog girl, dog tail, animal ear fluff, skin fang, blush, thinking, question mark ahoge, shortstack, curvy, large breasts, nipples, sweat, shiny skin, tail wagging, looking at viewer, upper body, head tilt, hand on own chin, <lora:Thomasz-A4s1a-Illu01:1>, <lora:Tylwing-A4s1c-Illu01:0.75>

Negative prompt: worst quality, lowres, artist name, watermark, censored, bar censor, mosaic censoring
Anonymous 06/13/25(Fri)00:21:46 No.8625229
00011-6998978833c.png (2.29 MB, 1248x1824)
>>8624863
seconding >>8625207, i'm interested to know how you are going about your finetuning; i've got some questions too
1) do you have a custom training script or are you using an existing one?
2) what training config have you set up for your finetune, and are there any particular factors that made you settle on those hyperparameters?
3) in terms of data preparation, is the prep for finetuning different from what you'd do for a lora? do you do anything special with the dataset?
4) i too am using a 3090 >>8625025; how much vram are you hitting when running a finetune at your current batch size? (sketch below shows how i'd measure it)
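for 4), here's a minimal sketch of how i'd check peak vram around a training step; plain pytorch, not tied to any particular trainer, and report_peak_vram is just a helper name i made up:

import torch

def report_peak_vram(tag: str = "") -> None:
    # peak bytes actually allocated vs reserved by the caching allocator
    alloc = torch.cuda.max_memory_allocated() / 1024**3
    reserved = torch.cuda.max_memory_reserved() / 1024**3
    print(f"[{tag}] peak allocated: {alloc:.2f} GiB, reserved: {reserved:.2f} GiB")

# reset the counters, run one step, then report; reserved is closer to
# what nvidia-smi shows (which also counts cuda context overhead)
torch.cuda.reset_peak_memory_stats()
# ... forward + backward + optimizer.step() goes here ...
report_peak_vram("one step")

would be curious to see reserved for one step at your batch size vs the 24gb ceiling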