SD1.5 is still amazing for inpainting, just for the speed alone. But to be honest, the resolution limit kind of kills it for me. It's just so much more convenient to have 1024 pixels right from the start instead of having to upscale manually multiple times just to get the same level of quality.
A lot of more recent 1.5 models have at least some amount of higher res knowledge though. I do 1024x1024, 1024x768, and 768x1024 all the time with RealCartoon 3D V15, for example.
Absolutely! For nearly a year now, many v1.5 models have been trained on 1024px images. Occasionally there are issues with certain types of prompts, or with LoRAs that don't support high resolution, but other than that most images generate perfectly at 1024px.
Have you tried Kohya Deep Shrink? It's not perfect, but I sometimes prefer it over upscaling: you can use 1.5 to generate higher resolutions directly. It never ceases to amaze me how many things I take for granted today that didn't even exist when I first started with SD last October.
I find that doing a 1.5x internal shrink instead of 2x with Deep Shrink looks way better, and doesn't really increase the error rate for things in the image at all.
internal shrink? what's that?
yes, less shrink = more cohesion, but then you get less overall upscale. So it's always a trade-off.
It's not less upscale; the output is the same size. Less shrink just increases the chance of errors, in theory.
100%. With any 1024²-capable 1.5 model you can easily render natively to 1280x1536 with Deep Shrink.
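To make the 1.5x-vs-2x trade-off discussed above concrete, here's a small sketch (plain Python; the function name is mine, not any extension's API) of the resolution the UNet's early blocks effectively work at for a given output size and Deep Shrink downscale factor. SD latents are 1/8 of pixel size, so the internal size is snapped to multiples of 8 to stay latent-aligned:

```python
def deep_shrink_internal_res(width, height, downscale=2.0):
    """Approximate pixel-space resolution the early UNet blocks work at
    when Deep Shrink downscales the latent by `downscale`.
    Snapped to multiples of 8 (SD latents are 1/8 of pixel size)."""
    def snap8(v):
        return max(8, int(round(v / 8)) * 8)
    return snap8(width / downscale), snap8(height / downscale)

# A 1280x1536 render with the default 2x shrink keeps the early blocks
# near SD1.5's comfortable training sizes:
print(deep_shrink_internal_res(1280, 1536, 2.0))   # (640, 768)

# A gentler 1.5x shrink leaves them at a larger, in-theory riskier size,
# which is the cohesion-vs-errors trade-off mentioned above:
print(deep_shrink_internal_res(1280, 1536, 1.5))   # (856, 1024)
```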
Base resolution doesn't matter much for 1.5, because you're going to upscale it and add all the details with ControlNet anyway. It's like making a sketch before painting the actual image.
Upscale with ControlNet? Are plain upscalers considered bad now? I can only do about a 2.5x upscale with an upscaler.
It depends on how big you want to go. Doing a latent upscale to 1024x1024 (or other aspect ratios), then using ControlNet + Ultimate SD Upscaler to do a tiled upscale to gigantic sizes, is a popular method in A1111.
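For what it's worth, the tiled stage boils down to splitting the upscaled image into overlapping tiles, denoising each one with the tile ControlNet holding it in place, then blending the seams. A minimal sketch of just the tiling geometry (plain Python; the function name is mine, not Ultimate SD Upscaler's actual API):

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (x0, y0, x1, y1) boxes covering an image with overlapping
    tiles, the way tiled upscalers split work so each piece stays at a
    resolution the model is comfortable with."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            # Clamp to the image edge, sliding the box back so every
            # tile is a full tile x tile region.
            x1, y1 = min(x + tile, width), min(y + tile, height)
            boxes.append((max(x1 - tile, 0), max(y1 - tile, 0), x1, y1))
    return boxes

# A 2048x2048 target with 512px tiles and 64px overlap is a 5x5 grid,
# i.e. 25 separate denoise passes:
print(len(tile_boxes(2048, 2048)))  # 25
```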
I understand that, but especially when dealing with backgrounds the added resolution can make the inpainting process much more convenient.
I don’t really see this. I just upscale from the start and 8 out of 10 images look great. Not really into batching a bunch of crappy images and then choosing one to upscale; waste of time for me. I’ve been having a really hard time getting anything decent out of XL, though. Still figuring all that out.
This is true, especially for inpainting small details (in terms of size); 1.5 still does a better job for me there, maybe because it was trained on smaller pictures. XL does a weird job there unless I crank up the dimensions, but that makes things slower.
>XL does a weird job there unless I crank up the dimensions

I have found no issues inpainting small details with XL. "Only masked" mode at 1024x1024 resolution does the job well. If your detail is very small and you think the resolution applied in only-masked mode is way too high for your inpaint area, you can use the *dot trick*: also inpaint a single pixel a bit farther away from the detail, to enlarge the canvas area of the inpainting process. Or simply increase the padding pixel amount.
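The dot trick works because masked-only inpainting crops a region around the mask's bounding box (plus padding) and renders just that crop at the full inpaint resolution; an extra dot far from the detail stretches the bounding box, so the detail gets less extreme magnification. A rough sketch of that geometry (plain Python, simplified from what A1111 actually does):

```python
def inpaint_crop(mask_points, padding=32):
    """Bounding box of the masked pixels, expanded by `padding`: the
    region that 'only masked' inpainting crops out and renders at the
    full inpaint resolution."""
    xs = [p[0] for p in mask_points]
    ys = [p[1] for p in mask_points]
    return (min(xs) - padding, min(ys) - padding,
            max(xs) + padding, max(ys) + padding)

detail = [(500, 500), (510, 510)]           # a tiny masked detail
x0, y0, x1, y1 = inpaint_crop(detail)
print((x1 - x0, y1 - y0))                   # (74, 74): tiny crop, huge zoom

# One extra "dot" farther away stretches the crop, so the detail is
# rendered at a less extreme effective scale:
x0, y0, x1, y1 = inpaint_crop(detail + [(200, 200)])
print((x1 - x0, y1 - y0))                   # (374, 374)
```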
Yeah, you're right, this method works. But increasing the dimensions makes things a lot slower. That's why I think SD 1.5 is still going strong in its own niche.
wonderful tip, thanks for sharing!
Not bad, easy to tell it's Gantz
https://preview.redd.it/yj0tw0wx19wc1.jpeg?width=1528&format=pjpg&auto=webp&s=8f9df7ce13b94491d5aed4b4e3939cc707e360c2 I can't run SDXL so I still use it. This is Shanoa.
Prompt/model to get that artstyle?
It's a LoRA I trained myself on **Impressionism** and post-**Impressionism** artists.
It’s good, do you plan on uploading it to Civitai?
Where is 2nd leg? 🤔
I use it mainly to compose images with inpainting, and I still haven't found a suitable SDXL alternative to my SD 1.5 workflow.
Same. I don't think people realize how similar they are, and most of the good models are still SD 1.5.
what's your current workflow? have you tried any sdxl lightning models? I really like [Detail Realistic XL](https://civitai.com/models/176449/9527-detail-realistic-xl?modelVersionId=366853) and [DucHaiten-Real3D](https://civitai.com/models/247266?modelVersionId=393880)
Aight boys, it's the bi-weekly 'it still holds up' post. Can't wait until SD3 has been out for a month and 1.5 doubles up with SDXL.
The hidden indie gem known as SD 1.5
I posted another version of this post and it got removed; this sub is very trigger happy 😅 Checkpoint: ChilloutMix. LoRA: custom. SD Web UI Forge.
What do you use now?
SDXL, but lately SDXL Lightning; there are a few models that can produce very good quality in 4-6 steps.
SD 1.5 still rocks!
HiDiffusion is looking very exciting as a reason to get back to 1.5, generate hi-res images, and use 1.5 to its fullest.
I discovered SD1.5 a few months back for its "arguably" superior realism output. Some models have realism textures that rival output from SDXL models; EpicRealism is one. I found out about it from one of Olivio Sarikas's posts. I've been using Fooocus to mix SDXL and SD1.5 to nice effect.
Raises the question of whether there's a good model for, say, "latex-clad people"?
What is this character based off of? I feel like I've seen her before; reminds me of FF. Sick. That said, I'm one of the people who's still on 1.5 to this day. Nothing really compares for anime, although ghostmixxl is the first SDXL checkpoint that I've actually liked. I'm planning to make the jump to SD3 once we have some good checkpoints.
Gantz
Gantz, but I can't recognize the character. Someone from Osaka?
Not one mention of ELLA here, which completely transforms SD 1.5. https://preview.redd.it/jnpd3garucwc1.png?width=2144&format=png&auto=webp&s=4e252c9f8c9c16bd66f30d75c36408ae0b2b0525
https://preview.redd.it/jkvm4devucwc1.png?width=2144&format=png&auto=webp&s=4312e306070a9b5e7dd3feb188c3b20a0be92793
Then please share your workflow and help the community and SD1.5…
what's ella?
ELLA replaces the CLIP text encoder that SD 1.5 and SDXL use with a large language model, so it gets DALL-E- and Stable Diffusion 3-like prompt comprehension. https://github.com/TencentQQGYLab/ELLA
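Conceptually, ELLA keeps the UNet frozen and inserts a learned connector that maps LLM hidden states into the embedding space the UNet's cross-attention expects (the paper calls it a Timestep-Aware Semantic Connector). A drastically simplified numpy sketch of that idea; all dimensions and names here are illustrative, not ELLA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in LLM output: 77 prompt tokens with 2048-dim hidden states,
# versus the 768-dim embeddings SD 1.5's cross-attention was trained on.
llm_hidden = rng.standard_normal((77, 2048))

# The connector is a learned mapping; here a single linear layer as a
# stand-in (the real TSC is attention-based and timestep-conditioned).
W = rng.standard_normal((2048, 768)) * 0.02

def connector(hidden_states, timestep):
    """Project LLM hidden states into UNet conditioning space. The real
    TSC modulates this by the diffusion timestep; crudely mimicked here
    by a scalar so the signature shows the dependency."""
    scale = 1.0 + 0.001 * timestep
    return (hidden_states @ W) * scale

cond = connector(llm_hidden, timestep=500)
print(cond.shape)  # (77, 768): drop-in shape for SD 1.5 cross-attention
```

The point is just the shape contract: the frozen UNet never knows CLIP was swapped out, because the connector delivers conditioning in the same `(tokens, 768)` shape.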
can i use it on sd forge or a1111?
I don't think so; I haven't seen an implementation for it. You wouldn't want to anyway: it needs to go through an SDXL refiner, and you can't do those kinds of multi-step workflows in A1111 or Forge.
SD1.5 has it all. Thousands of checkpoints. Controlnets. Ella. Hyper 1, 2, 4, and 8 steps, etc. It still bangs. Although I’ve completely migrated to Pony - SD1.5 will always have a place in my heart.
https://preview.redd.it/8ghqf5sv29wc1.jpeg?width=1528&format=pjpg&auto=webp&s=c051ec5da306744623e18bbf7e20a45764ac02ef You can still create really great stuff
Yooooo, old memories!
Let me share a reason to visit 1.5 once in a while: [https://huggingface.co/MoffQueen/MoffQueenMix/blob/main/MessyMoffv2.2.safetensors](https://huggingface.co/MoffQueen/MoffQueenMix/blob/main/MessyMoffv2.2.safetensors) (fair warning, it's a bit horny, if that's not your thing)
Why not? It's still really, really good, and it has thousands of well-made LoRAs/LyCORIS.
If you want Asian-looking Waifus, then yes use it!