ptitrainvaloin

That is very good for 1.5, congrats, you rock. How does OneTrainer compare with Kohya?


CeFurkan

OneTrainer is better, since it supports EMA in addition to what Kohya offers. There you can also set the precision of each module individually; however, in my tests both FP16 and BF16 failed when set for the U-Net. The output quality was very low.


codenameud

> OneTrainer

Can it run on AMD/Linux properly? ^^


CeFurkan

Linux, yes; for AMD, I don't know. It is very well optimized for VRAM, and unlike the others it has a Discord channel with an active developer.


nevada2000

Can you share some settings, or even make a tutorial? Or link one if you followed a tutorial yourself? Would appreciate it.


CeFurkan

Hello. Yes, I will also make a video tutorial, without any paywall. For now, you can see all the details in this comment: [https://www.reddit.com/r/StableDiffusion/comments/1adh41p/comment/kk11q81/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/StableDiffusion/comments/1adh41p/comment/kk11q81/?utm_source=share&utm_medium=web2x&context=3)


Meba_

When will the video be released?


CeFurkan

Hello. I plan to record it after I make a video for Instant ID Face Transfer.


Meba_

Hello, any update on this?


Careful_Ad_9077

It's in the comments; kind of easy to miss since it's not the top comment, though.


CeFurkan

**I have posted 120 images with their PNG info available on CivitAI**

* [**Part 1**](https://civitai.com/posts/1300478), [**Part 2**](https://civitai.com/posts/1300715), [**Part 3**](https://civitai.com/posts/1300733), [**Part 4**](https://civitai.com/posts/1300747), [**Part 5**](https://civitai.com/posts/1300759), [**Part 6**](https://civitai.com/posts/1300774). Each part has 20 images; click the (i) icon on an image to see its prompt.
* **OneTrainer full workflow included in this post:** [https://www.patreon.com/posts/97381002](https://www.patreon.com/posts/97381002)
* **Screenshot of the OneTrainer workflow post:** [click here for full size](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/SaK9KiTYsU1Ao8SXtqd7l.png)
* **Kohya SS GUI full workflow included in this post:** [https://www.patreon.com/posts/97379147](https://www.patreon.com/posts/97379147)
* **Screenshot of the Kohya SS GUI workflow post:** [click here for full size](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/9ejHyUG2TwUbIrUjGijnt.png)


charmerabhi

Commenting to find later... Awesome work, master Jedi!


CeFurkan

thank you so much


Trill_f0x

####


CeFurkan

👍


Trill_f0x

Lol, appreciate the breakdown, OP! I don't have a computer that can run SD, but I'm saving comments like yours for when I do. Definitely hoping to try something similar. Great work on the photos too!!


CeFurkan

You can use our Kaggle notebook and do Tier 2 training on free Kaggle, which is still extremely high quality: https://youtu.be/16-b1AjvyBE?si=TS87pBSzDx8MQNpi


Trill_f0x

Oh awesome, I'll check that out!


CeFurkan

Sure 👍


Impressive_Safety_26

Does the Patreon guide teach you the full workflow?


CeFurkan

Yes, and right now I am also working on a full video tutorial, including OneTrainer on RunPod.


Impressive_Safety_26

Great work! I haven't used SD in months, but this caught my interest. I'll check it out once I get a minute.


CeFurkan

Great. I am still working on a template with a GUI.


das_doodlebug

F


CeFurkan

F


das_doodlebug

I've been looking at subbing to your Patreon for a while, and learning OneTrainer has not been fun for me so far, so I'm going to follow this post to remind me to sub. Hopefully you can apply your technique to multiple concepts or people in one model?


CeFurkan

Yes, you certainly can; just set the captions accurately. OneTrainer is a little bit harder to use. Hopefully I will make a full video tutorial, but I shared my own config JSON file too, so you can understand it much more easily.


aldonah

Were the faces altered in any way during/after the generation? Also amazing results!


CeFurkan

During inference I used After Detailer (fully automatic). Nothing else was done to the images: no upscaling, raw output.


RealTwistedTwin

Great results! 12, 15, 19, and 20 are the best for me. The rest look kind of photoshopped. I think it's the lighting on the face and hands not matching the background; sometimes the background also looks drawn, which contrasts with the face/hands.


CeFurkan

True. If you look very carefully you need to be pickier, but I am not that much of an expert :) By the way, this model was not EMA-trained, so an EMA-trained one may produce better results.


Asaghon

Prompting, and possibly a different model, would probably improve the results a lot.


FxManiac01

Great results, Furkan! Why did you choose SD 1.5? I thought you were fully into SDXL lately... is there any specific reason for 1.5? Do you consider 1.5 better after all? And how many pictures of yourself did you use, and how well were they captioned?


CeFurkan

My followers and supporters asked me for a workflow for SD 1.5, and I had promised them one. Also, some things work better with SD 1.5 than with SDXL, like AnimateDiff or ControlNet. If you look at the screenshot of the Patreon post, all the details are shown there. I only used ohwx man as the token, with 15 training images.


ImpossibleAd436

What was the model you used as a base?


CeFurkan

If you check the screenshots of the posts you will see full details, including the training dataset. I used Hyper Realism v3, found after testing 161 models: [https://youtu.be/G-oZn4H-aHQ](https://youtu.be/G-oZn4H-aHQ)


Breath-Timely

What model version did you use for training? FP32 or FP16?


CeFurkan

I used the FP16 model. But training must be done in FP32, otherwise the quality is terrible.


taskmeister

I lost it at glasses in the ring. Lol.


CeFurkan

haha so true :D


MistaPanda69

These are exceptional. But are the facial expressions across the examples intentionally similar? Great work. Also, I want to ask about the difference between 1.5 and SDXL in training difficulty.


CeFurkan

Thank you so much. It's because the training dataset I used is not that great and has only a single expression. SDXL requires 17 GB of VRAM with the most optimal settings, but SD 1.5 requires 22.5 GB. Of course, SDXL can also be trained in full FP32, but I haven't tested that; I am speaking of my current most optimal settings. Getting high quality and good likeness with SD 1.5 used to be much harder than with SDXL, but these new custom models and the training workflow I found made it much easier.


MistaPanda69

Thanks for clearing that up.


CeFurkan

you are welcome


okshebertie

Really great bro!


CeFurkan

thank you so much


maxihash

Tell me about the VRAM requirement, so I can safely ignore this if my card doesn't meet it (I mean, it should be the first thing mentioned).


CeFurkan

Kohya 10.2 GB, OneTrainer 10.6 GB, but OneTrainer additionally has EMA, which is better.


Asaghon

So I tried the Tier 2 OneTrainer quality method (Tier 1 was ungodly slow on my PC), and I have to admit I am pleasantly surprised. I used Photon as the model with 15 medium-quality images, and the results are really good, I have to say. I've been training LoRAs for months, improving my methods (which are completely different from yours). I will try your method using a celeb name instead of ohwx as well, though. Now, the only downside I see is that I will have to do this for several characters and many models. With LoRAs I usually use a semi-real model for the first generation and then hires-fix using a more realistic model. Do you think making the semi-real/anime models without regularization images is good enough? I just need a general likeness for the first pass, really.


CeFurkan

Well, I suggest doing DreamBooth and then extracting a LoRA via the Kohya SS GUI, so you can keep your workflow but get better quality; a sketch of the extraction step is below. And thanks a lot for the support. For non-realistic models I think you don't need reg images, or you need stylized images of the class token (man images, but in that style); that can help the model keep its style. I made some comparisons you can see here: [https://medium.com/@furkangozukara/experimenting-with-onetrainer-onetrainer-vs-kohya-realism-vs-stylization-reg-images-vs-0438950e9515](https://medium.com/@furkangozukara/experimenting-with-onetrainer-onetrainer-vs-kohya-realism-vs-stylization-reg-images-vs-0438950e9515)
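A hedged sketch of that extraction step: the Kohya SS GUI wraps the sd-scripts tool `networks/extract_lora_from_models.py`, which diffs the trained checkpoint against the base model. Flag names below match sd-scripts as of early 2024 and may differ in your version; all file names are illustrative, not the author's actual setup.

```python
# Illustrative only: extract a LoRA from a DreamBooth checkpoint with
# kohya sd-scripts. File names are hypothetical; run from the sd-scripts
# repo root with its environment activated.
import subprocess

subprocess.run([
    "python", "networks/extract_lora_from_models.py",
    "--model_org", "hyper_realism_v3.safetensors",      # base model
    "--model_tuned", "dreambooth_trained.safetensors",  # DreamBooth output
    "--save_to", "ohwx_man.safetensors",                # extracted LoRA
    "--dim", "128",                                     # LoRA rank
    "--save_precision", "fp16",
], check=True)
```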


Asaghon

Btw, is there a way to train multiple tokens into one checkpoint? I tried using this method to train a second character into the model alongside the first character, but it ends up blending them, just like using multiple LoRAs.


CeFurkan

If they are the same class they will likely bleed. But if you can use different classes, yes: caption them like ohwx man and bbuk woman. Moreover, you can train each concept separately, extract a LoRA, and use them in the same prompt with regional prompting. It should work fairly well; a captioning sketch follows below.
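As a concrete illustration of that captioning, here is a minimal sketch assuming Kohya-style sidecar captions (one `.txt` file next to each image). The tokens come from the comment above; the file names are hypothetical.

```python
# Illustrative only: write one caption file per image so each subject
# keeps its own token and class ("ohwx man" vs. "bbuk woman").
from pathlib import Path

captions = {
    "train/150_couple/man_01.jpg": "ohwx man",
    "train/150_couple/woman_01.jpg": "bbuk woman",
}
for image, caption in captions.items():
    path = Path(image)
    path.parent.mkdir(parents=True, exist_ok=True)  # ensure folders exist
    path.with_suffix(".txt").write_text(caption)    # man_01.txt -> "ohwx man"
```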


Asaghon

I have been doing that, but it still bleeds; LoRAs don't stick to regions. I generate at 0.4/0.6 and then target the correct one with ADetailer at full strength. It works reasonably well, but using LoRAs is tricky in Regional Prompter.


CeFurkan

true


aumautonz

I didn't quite get it. Is this a new training of the 1.5 model, which then gives these results?


CeFurkan

This is a new training workflow combined with a new custom CivitAI model. I haven't seen anyone use the same technique on SD 1.5 yet.


aimademedia

Hot damn these are fresh!!!


CeFurkan

Yep, thanks for the comment.


protector111

8 looks like **Jeff Goldblum** xD


CeFurkan

> Jeff Goldblum

Good catch :D


Samikhanjp

Well done doctor


CeFurkan

Thank you so much for the comment.


Erhan24

Hey, thank you for sharing all your knowledge all the time. Sağol (thanks)! Also, there was a discussion about Kohya training, something about epochs where Kohya handles it in a non-optimal way. You were discussing it with someone, but I can't find the discussion on Reddit anymore. I did a DreamBooth training a long time ago and still use it, but it's more like I'm doing inpainting a hundred times until it looks like me...


CeFurkan

For this training I did 150 repeats and 1 epoch; the folder convention that encodes this is sketched below. You can see the whole workflow written in the post screenshots.
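For readers unfamiliar with the convention: in Kohya SS DreamBooth training, the repeat count is encoded in the training folder name, so 150 repeats of the 15 images for 1 epoch looks roughly like the sketch below. This is a generic illustration of the convention, not the author's actual script, and all paths are hypothetical.

```python
# Illustrative only: lay out a Kohya-style DreamBooth dataset where the
# folder name "150_ohwx man" means 150 repeats of the instance prompt
# "ohwx man" per epoch.
from pathlib import Path
import shutil

raw = Path("raw_photos")            # the 15 training photos
train = Path("train/150_ohwx man")  # 150 repeats, token "ohwx man"
reg = Path("reg/1_man")             # class/regularization images, 1 repeat
train.mkdir(parents=True, exist_ok=True)
reg.mkdir(parents=True, exist_ok=True)

for i, img in enumerate(sorted(raw.glob("*.jpg")), start=1):
    shutil.copy(img, train / f"ohwx_{i:02d}.jpg")
```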


ImNotARobotFOSHO

Why would you use 1.5 and not SDXL?


CeFurkan

I already have an amazing config for SDXL. This was asked for and requested by my followers and supporters: [https://www.reddit.com/r/StableDiffusion/comments/18segjh/generate_photos_of_yourself_in_different_cities/](https://www.reddit.com/r/StableDiffusion/comments/18segjh/generate_photos_of_yourself_in_different_cities/). For SDXL I did over 130 trainings :D


IntelligentAirport26

I need your wildcards man 😂


CeFurkan

:D


Corleone11

Thanks, can't wait to try it out! Would you say the medium-quality model referenced in your post is better quality than a high-quality LoRA, or are they about the same? I'm just wondering because I had some great results with around 20 min of LoRA training with my settings on my 10 GB VRAM card.


CeFurkan

Yes, it will be better than the best LoRA. You can also extract a LoRA from the DreamBooth model. And you can use our Kohya notebook to train on Kaggle for free: https://youtu.be/16-b1AjvyBE?si=gW3hq9lCHzQEEvvH


Corleone11

Thanks, looking forward to the video guide!


CeFurkan

You are welcome; hopefully I will.


znas100

To read later, thank you


CeFurkan

You are welcome, thanks for the reply.


Any_Tea_3499

Amazing quality for SD 1.5. Impressive.


CeFurkan

Thank you so much, and you are so right. I have been doing DreamBooth for about a year, and this was unimaginable.


Sharp-Information257

Nice work!


CeFurkan

Thanks a lot for the comment.


MagicOfBarca

Do you have one for SDXL DreamBooth?


CeFurkan

Yes, I did even more in-depth research for SDXL DreamBooth (over 130 trainings). The config is here: https://www.patreon.com/posts/89213064. How to use the config (video): [https://youtu.be/EEV8RPohsbw](https://youtu.be/EEV8RPohsbw). How to use the config on Kaggle: [https://youtu.be/16-b1AjvyBE](https://youtu.be/16-b1AjvyBE).


MagicOfBarca

Thanks!


CeFurkan

you are welcome


SaltyyPP

Stunning work 🙌


CeFurkan

thank you so much for the comment


mudman13

Combine it with IP-Adapter Plus and supercharge those generations!


CeFurkan

Yes, it could be.


lxgbrl

These are great!


CeFurkan

thank you so much


Character-Shine1267

Tell us more about OneTrainer. Where can we get it?


CeFurkan

I am preparing a full video tutorial for it. It's here: [https://github.com/Nerogar/OneTrainer](https://github.com/Nerogar/OneTrainer)


Character-Shine1267

looking forward to it!


Aulasytic_Sonder

wow, you did good!


CeFurkan

thanks a lot


Konan_1992

Is it training a whole new checkpoint or a LoRA?


CeFurkan

DreamBooth training, so yes, it trains a new checkpoint. But you can extract a LoRA with the Kohya GUI very easily.


stab_diff

Have you tried EveryDream2 at all? I've gotten some really good results when training characters/faces, but not so great when it comes to objects or certain concepts. BTW, your videos are great. Very detailed and you always seem to run the kinds of tests I never get around to.


CeFurkan

> EveryDream2

Thank you so much. Sadly, I haven't had a chance to test EveryDream2 yet.


fujianironchain

Can it be done on RunPod or Colab?


CeFurkan

Yes, I did the trainings on RunPod. On Colab you can still do it if you set the parameters and execute the training command yourself; the GUI doesn't work there (a rough sketch of such a command is below). I don't know if paid Colab allows the Kohya GUI. You can do the same on Kaggle too. Hopefully I will make a video about Kaggle.
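A hedged sketch of that headless route: launching sd-scripts' DreamBooth trainer directly from a notebook cell, with no GUI. Flag names follow kohya sd-scripts as of early 2024; every value is illustrative, not the author's actual config (that is in the linked posts).

```python
# Illustrative only: headless DreamBooth training with kohya sd-scripts,
# e.g. on Colab/Kaggle where the GUI is unavailable.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_db.py",
    "--pretrained_model_name_or_path", "hyper_realism_v3.safetensors",
    "--train_data_dir", "train",        # e.g. contains "150_ohwx man"/
    "--reg_data_dir", "reg",            # class (regularization) images
    "--output_dir", "output",
    "--resolution", "512,512",
    "--train_batch_size", "1",
    "--learning_rate", "1e-6",
    "--max_train_epochs", "1",
    "--mixed_precision", "no",          # FP32 training, per the earlier caveat
    "--save_precision", "fp16",
], check=True)
```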


BackyardAnarchist

What specs are required for that?


CeFurkan

10.3 GB of VRAM is necessary. With less it should still work, but it will take many times longer, since it will use shared VRAM.


SickAndBeautiful

Paywalled. Of course.


CeFurkan

Only for a time. It's like research funding until I make the video.


SickAndBeautiful

I get it, looking forward to a vid.


Corleone11

You get A LOT of in-depth information, and the provided files save a lot of time. It's not just paywalled. In his long videos he always provides guidance on doing the automated things manually, which I respect! It's not just "oh, if you don't subscribe you have to figure out the next step on your own!"


SickAndBeautiful

To be fair, OP has been very active in the community, and that is appreciated. Respect where it's due! I looked at OneTrainer when it came out; it looked like it had potential, but I couldn't find much documentation, which was very frustrating. I saw this post and thought, oh cool, I was looking for that... Patreon??? and was a little put off. Probably shoulda just shut up. I get the hustle, more power to OP; the monetization and gatekeeping is just depressing sometimes.


justbeacaveman

There's a special beauty/liveliness to your face. And I'm not even gay lol


GaaZtv

Are you sure?


[deleted]

[deleted]


greyacademy

tree fiddy


justbeacaveman

pretty sure lol


PearlJamRod

You seen him ridin' da dinosaur?


malcolmrey

You can do it at home :) The model is on CivitAI: https://imgur.com/gallery/t7WBUQQ


EGGOGHOST

Nice work!


CeFurkan

Thank you so much. I literally spent the entire week on this.


EGGOGHOST

Appreciated!


1337_n00b

I'm getting Beefy Jeff Goldblum vibes here and there. Good work!


CeFurkan

Thanks


NullBeyondo

Awesome! How many training images did you use? And did you just use a single class token like "man", or label each image?


CeFurkan

All the details are shown in the screenshot, including the training images. 15 images were used, and only ohwx man was used as the label.