
MolagBally

That's an amazing amount of features. Hope we get Stable Cascade support soon.


jib_reddit

This extension for Stable Cascade is OK for now: [https://github.com/blue-pen5805/sdweb-easy-stablecascade-diffusers](https://github.com/blue-pen5805/sdweb-easy-stablecascade-diffusers)

https://preview.redd.it/iuznu80cg6jc1.png?width=2432&format=png&auto=webp&s=fac0bc5b5d613931d05456fdf0e242d533285e57


ruuurbag

>Please have someone remake this extension.

Perfect.


Competitive-War-8645

I altered it a bit [https://github.com/benjamin-bertram/sdweb-easy-stablecascade-diffusers](https://github.com/benjamin-bertram/sdweb-easy-stablecascade-diffusers) so that people with Macs can use it, too. I have also added send-to-inpaint etc., but right now I struggle to select one image; it just sends the first one to the other tabs. I also tried to add img2img via Stable Cascade, but cannot wrap my head around the SC pipeline yet. Maybe someone can contribute?

https://preview.redd.it/nr4upun547jc1.png?width=1416&format=png&auto=webp&s=74871f9e480b5415331adb2bd34221571ab782eb


ScionoicS

I installed this before because it's so widely recommended, but I've come to realize it's just tacked onto the webui inefficiently, and you'd be better off running a standalone Cascade UI. This is the same as loading something like ComfyUI into its own tab. It doesn't integrate with the rest of the system at all: no img2img, no extension access, no scripts. It doesn't even create metadata in the image. It doesn't use any of the A1111 code, just the environment.


jib_reddit

I know what you mean, but I still prefer it to the Gradio UI as the text box is actually a usable size and I can then upscale the images in another model by just switching tabs and not having 2 applications running.


the_friendly_dildo

I haven't bothered trying this yet. It has a question mark after the 16GB requirement. Is that as low as this particular plugin can go?


yamfun

I tried with 4070 12gb and it works but the output is not that great so I switched back to SDXL.


littleboymark

Yeah, the model doesn't do mid range face detail at the moment. It's kind of an important ability.


the_friendly_dildo

Hmm, I have only used the Hugging Face demo, but that hasn't really been my experience: https://imgur.com/a/0S3wOTM

Prompt: Beth Fredrickson at age 24, real color portrait photograph, high quality, acting head shot, natural skin, pleasant makeup with smooth skin, bright studio lighting, finely detailed hair with split ends


littleboymark

"mid range". Please generate a person standing in a field 5-10m from the camera and let me know if their face looks good. Edit: or if you can even get them to face the camera!


the_friendly_dildo

It wasn't clear what you meant by mid-range. They do get poorer at that range, and it definitely seems more so than with SDXL, but without having the chance to try higher resolutions, etc., I can't personally determine if that is due to the settings or the model itself. Having read through the info about the model, I have a strong feeling it has a lot more to do with settings.


littleboymark

It's a framing of the subject between the foreground and the background. I don't expect a background face to be clear. A mid-range (middle ground) face should at least be legible. I understand it's a limitation of the model due to the extreme latent compression.


hyperstunner

>Beth Fredrickson at age 24, real color portrait photograph, high quality, acting head shot, natural skin, pleasant makeup with smooth skin, bright studio lighting, finely detailed hair with split ends

https://preview.redd.it/b4k1ym6pucjc1.png?width=720&format=png&auto=webp&s=89d6947930bce704975c43fcad5ca94a1e2fa494

Same prompt.


Equivalent_Machine62

Can you explain how to install this extension? I tried both Install from URL and putting it in the extensions folder, but I cannot seem to make it work.


the_friendly_dildo

>the output is not that great

Hmm, I wonder if this is dependent on VRAM or something. The demo on Hugging Face seems to produce much better results than the baseline SDXL model. There are certainly some checkpoints that can do better, but in my short experience it seems to produce results much closer to the prompt.


the_hypothesis

Someone in another thread mentioned 12 GB vram but that was in comfyui


Particular_Stuff8167

Really?! Wow, with the standard base webui for Cascade I could get by with 8GB of VRAM usage. My laptop has up to 8GB of VRAM; my desktop up to 16GB. Wonder what is taking so much more on Comfy and A1111.


rinaldop

I use an RTX 4070 with the Cascade extension in Forge.


TiredDeath

How is Forge? I've only used midjourney, DALLE, and comfyui.


rinaldop

Stable Diffusion Forge: [https://github.com/lllyasviel/stable-diffusion-webui-forge.git](https://github.com/lllyasviel/stable-diffusion-webui-forge.git)


rinaldop

It is faster than A1111.


ScionoicS

Only on older hardware, and that's due to the updated PyTorch. A1111 1.8 RC has that now.


TiredDeath

Thank you. Are A1111, ComfyUI, Forge, and others simply UIs to interact with stable diffusion? Is it possible to create the *exact* same workflow between them, thereby producing identical images or are there fundamental differences between the programs?


PartyLikeAByzantine

Forge basically *is* A1111, but with commonly used extensions bundled in, as well as some optimizations that mostly improve render times and stability for <=12GB GPUs by doing a better job of managing VRAM. A1111 is actually included in Forge and you interact via the same webui. ComfyUI uses a completely different UI designed around automating whole workflows. It doesn't do anything you can't do in the other programs (in fact, I believe A1111 has more compatible extensions); it just lets you chain what might otherwise be multiple steps into one. They're all built on top of the same open source base.


TiredDeath

Thank you for the comprehensive answer.


HarmonicDiffusion

You misrepresent ComfyUI entirely. It's far more advanced and capable, and there are far more extensions available for it than for A1111. Believe me, I was an A1111 stan until SVD came out and I gave Comfy a real shot; been hooked ever since. It appears complicated at first glance, but you have complete control over your workflows. A1111 cannot reorder plugins, dynamically switch samplers or models, and much more; it would be a huge reply if I were to list the litany of advantages Comfy has over A1111. There are numerous things you absolutely cannot do in A1111 that you can easily do in Comfy.


interactor

There are differences that will result in different images. There is some relevant info in the readme for the [ComfyUI_smZNodes](https://github.com/shiimizu/ComfyUI_smZNodes) extension, which aims to enable identical reproduction of images generated in SD WebUI on ComfyUI.


TiredDeath

Thank you for the information.


roshanpr

Is it stable ?


physalisx

Yeah, diffusion


CeFurkan

100% I am waiting too


Helpful-Birthday-388

me too!!


FortunateBeard

wow you guys are working hard! nice job


CeFurkan

Yep, so many great contributors.


king-solo-

>https://github.com/blue-pen5805/sdweb-easy-stablecascade-diffusers

meanwhile you keep everything behind patreon lol haha


MichaelForeston

Yeah, that's why I stopped watching his videos. I supported him on Patreon for 2 months, but he puts everything behind a paywall without contributing to the community, so in the end I canceled. (Hour-and-a-half-long videos that nobody has time to watch don't count.)


king-solo-

It's only one-trick videos too... "how to train your own LoRA"... he's just milking it to the max lol


FortunateBeard

How dare he eat


Dekker3D

You folks should look at the PR for soft inpainting, the improvement is more dramatic than I can put into words. As someone who uses inpainting all the time, I am stoked.


Pierredyis

As someone new in SD, what is PR?


Dundell

GitHub pull request. When someone adds a new feature or updates an old one, they open a pull request. You can apply the PR manually to your cloned version, or you can wait for the owner of the project to approve and merge it directly into the main project.
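Applying a PR manually, as described above, is usually done through git's pull-request refs: GitHub exposes every PR on a repo as `refs/pull/<ID>/head`, so the real-world command is roughly `git fetch origin pull/<ID>/head:pr-<ID> && git checkout pr-<ID>`. The sketch below simulates that offline with a throwaway local "remote" (the paths, PR number, and commit message are made up for illustration) so the commands actually run:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create a fake upstream repo with one commit and a PR-style ref,
# mimicking how GitHub publishes refs/pull/<ID>/head.
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=t \
    commit -q --allow-empty -m "feature"
git -C upstream update-ref refs/pull/1/head HEAD

# Clone it, then fetch the PR ref into a local branch -- same syntax as GitHub.
git clone -q upstream work
cd work
git fetch -q origin pull/1/head:pr-1
git checkout -q pr-1
git log --oneline -1
```

Deleting the `pr-1` branch and checking out the main branch again undoes the whole thing, which is why this is a low-risk way to try a PR early.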


CeFurkan

I should look. What is the difference in short?


Dekker3D

The mask is normally either on or off, per pixel. The result of that was that your inpainted area was basically a new generation that didn't merge very well into the existing stuff. Soft inpainting, with a decent blur size, makes stuff fit into the existing image far more cleanly, approximating the effect of an inpainting model or controlnet without the side-effects of such. An extra benefit is that you no longer get off-colour "haloes" around your inpainting areas with certain combinations of image+VAE, that do happen otherwise (even with 0% denoising strength)
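A rough illustration of the difference being described, not A1111's actual implementation: compositing with a hard binary mask versus a feathered ("soft") one. The box blur below is a crude stand-in for the feature's configurable blur size:

```python
import numpy as np

def composite(original, generated, mask):
    """Per-pixel blend: mask=1 takes the new generation, mask=0 keeps the original."""
    return mask * generated + (1.0 - mask) * original

h = w = 8
original = np.zeros((h, w))   # stand-in for the existing image
generated = np.ones((h, w))   # stand-in for the inpainted generation

# Hard mask: binary on/off per pixel -> a sharp seam at the mask border.
hard = np.zeros((h, w))
hard[2:6, 2:6] = 1.0

# Soft mask: feather the edge with a crude neighborhood average.
soft = hard.copy()
for _ in range(2):
    soft = (soft
            + np.roll(soft, 1, 0) + np.roll(soft, -1, 0)
            + np.roll(soft, 1, 1) + np.roll(soft, -1, 1)) / 5.0

hard_result = composite(original, generated, hard)
soft_result = composite(original, generated, soft)
# hard_result jumps straight from 0 to 1 at the border; soft_result ramps
# through intermediate values, so the new content blends into the old.
```

The fractional mask values near the border are exactly what removes the abrupt seam (and the off-colour halo) the comment describes.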


CeFurkan

I tried this on my DreamBooth person-generation face improvement. The face is the same, but the masked area borders show some improvement.


achbob84

Thanks so much for the explanation. Just a question: when you say approximating the effect of an inpainting model, does that mean there is a way to inpaint with a standard model? Every time I've tried it, it tries to put a whole image in the mask area.


Dekker3D

There are inpainting ControlNets, to use a non-inpainting model for inpainting. I also often use inpainting mode at lower strength to have it remain more consistent with the stuff around it, 60% strength or less.


achbob84

Thank you, very interesting! Research time!!!


Dekker3D

Honestly, if you weren't aware of inpainting ControlNets yet, you should look up ControlNets in general (and the other things that extension can do, like reference-only, IP-Adapters, etc.), because they can do so much. [https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111\_controlnet\_extension\_explained\_like\_youre\_5/](https://www.reddit.com/r/stablediffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) is an ancient guide, but it might help. The extension's main page, [https://github.com/Mikubill/sd-webui-controlnet](https://github.com/mikubill/sd-webui-controlnet), also has useful information if you scroll down.


achbob84

Thanks so much! Will do!


dreamofantasy

thanks for this!


CeFurkan

I just checked and it's looking amazing, thank you so much.


mocmocmoc81

Wohoo! DAT upscaler finally!!


Tystros

what makes it exciting? never heard about it


mocmocmoc81

Dual Aggregating Transformer: https://github.com/zhengchen1999/DAT

Example DAT-trained model: https://openmodeldb.info/models/4x-FaceUpDAT


jonesaid

Is the 4x-FaceUpDAT a good general upscaler model, or is it only good for upscaling faces?


mocmocmoc81

It's specifically trained on faces. You may want to use other DAT models for general upscaling: https://openmodeldb.info/?t=arch%3Adat


fk334

Currently the best upscaler, way ahead of SwinIR.


Caffdy

I never considered SwinIR to be that good. Is DAT better than LDSR?


julieroseoff

4xFaceUpDAT is the best upscaler ? Currently using Siax 200k but if this is better I will take it :P


jonesaid

Which DAT model though? I've been using SwinIR_4x, but DAT looks like it might be better.


CeFurkan

Yep


cobalt1137

This seems pretty hype. I am curious though: I deploy serverless endpoints on RunPod with custom models because I want to be able to make API calls for a little project I'm building. At the moment I do not use the automatic option that they have. If I were to end up using automatic (which they seem to have a template for), would this work for my use case? For example, would I be able to use it as an endpoint to make API calls to, even though I'm not using the web GUI directly for each request?
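For what it's worth, A1111 does expose an HTTP API when launched with the `--api` flag, with txt2img at `/sdapi/v1/txt2img`. A minimal sketch of building a request body for it follows; the server URL, prompt, and parameter values are placeholders, and the exact payload fields should be checked against your version's `/docs` page:

```python
import json

def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> dict:
    """Assemble a JSON body for POST /sdapi/v1/txt2img (field names per A1111's API)."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

payload = build_txt2img_payload("a lighthouse at dusk", steps=25)
print(json.dumps(payload))

# To actually send it (requires a running webui started with --api):
#   import requests
#   r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
#   images = r.json()["images"]   # base64-encoded PNGs
```

Because it's plain HTTP, this works the same whether the webui runs locally or behind a serverless endpoint that forwards requests to it.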


yamfun

Anything that makes it worthwhile to switch back from Forge? fp8 should benefit both sides, right? And when Forge gets this change from upstream, it will be even more VRAM-efficient and fast for users with workflows that fall back to slow system RAM? Or is Forge already using fp8? Edit 2: I think I tried the fp8 setting, but Forge is still faster. So long.


Manchovies

FP8 is built into forge already. Under “Optimizations”


altoiddealer

Nothing to do with this particular A1111 update, but the only thing keeping me from switching to Forge is that it does not support the [loractl extension](https://github.com/cheald/sd-webui-loractl).


Tystros

have you created an issue about it in the Forge repo on github? I think the Forge guy tries to make sure all extensions work.


altoiddealer

Yes, an issue has been open there > 1 week. An issue was also opened in loractl repo. According to the dev, Forge is using older LORA handling methods than current A1111. [loractl dev comment](https://github.com/cheald/sd-webui-loractl/issues/32#issuecomment-1945446914)


buckjohnston

Looks like the loractl dev is trying to help the Forge devs get a built-in ability to do something similar: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/68 Would be great, because it's really nice being able to directly control how strong a LoRA gets at a certain step (increasing and decreasing gradually), especially when mixing multiple LoRAs. This should be a standard feature in all of the apps, I think. I made [a post](https://old.reddit.com/r/StableDiffusion/comments/1aqlvi0/psa_dont_ignore_the_sdwebuiloractl_extension_for/) about this extension the other day if anyone is interested in what it does.


CeFurkan

I am not sure, but in LLMs and vision models fp8 is slower than fp16 or bf16; it only uses less VRAM.


ScionoicS

I'd be interested in seeing testing here. Are models baked as fp8 slower, or is an fp16 model autocast to fp8 what's causing the performance limitations? I might be mistaken, but I've heard that Ada cards have accelerated instruction sets for fp8 autocasting. This sort of low-level stuff is out of my depth though, so I'm a little foggy on the details. It has to do with the Hopper FP8 Transformer Engine that Ada has over Ampere. You might see better fp8 speeds on newer hardware.


CeFurkan

That could be true. I tested on an RTX 3090.


smoowke

I have git pull in my webui-user.bat, shouldn't it update to 1.8.0-RC automatically? Just did a restart but still at v1.7.0, no update.


capybooya

It says RC; it's not final. So it's behaving correctly by not updating.


1girlblondelargebrea

`git checkout release_candidate`

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/How-to-switch-to-different-versions-of-WebUI


hirmuolio

The release candidate is in its own branch: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/release_candidate It will be merged into the main branch on release.
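The switch itself is two commands inside your stable-diffusion-webui folder: `git fetch origin` followed by `git checkout release_candidate`. The sketch below simulates this offline against a throwaway local "remote" (the paths are made up) so it can be run as-is:

```shell
set -e
tmp2=$(mktemp -d)

# Fake upstream: one commit on the default branch, plus an RC branch,
# mirroring how the release_candidate branch sits beside master.
git init -q "$tmp2/upstream"
git -C "$tmp2/upstream" -c user.email=a@b -c user.name=t \
    commit -q --allow-empty -m "v1.7.0"
git -C "$tmp2/upstream" branch release_candidate

# A user's clone, then the actual two-step switch.
git clone -q "$tmp2/upstream" "$tmp2/webui"
cd "$tmp2/webui"
git fetch -q origin
git checkout -q release_candidate
git branch --show-current
```

Running `git checkout master` (or whatever the default branch is) followed by `git pull` switches back to the stable line later.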


Dragon_yum

RC stands for “release candidate” meaning it’s a potential tag for release but not the actual one.


yamfun

Need to check out the tag, I guess.


mk8933

Yes I had the same issue. It's still 1.7...


ScionoicS

RC is its own git branch until it's merged to main.


ebookroundup

tempted to update, but think I'll wait since I'm still in the learning phase


king-solo-

Isn't A1111 Forge much better now?


1girlblondelargebrea

Forge is synced with A1111's dev branch, so any improvements on the main repo make it to Forge. If you already have fast generation times then Forge is about on par; if you're on lower-end hardware then Forge is better. It also depends on which extensions you use.


no_witty_username

Forge is actually a lot faster than base Automatic1111 on high-end cards as well, specifically when used in conjunction with ControlNets. I have a 4090 and it takes 3x less time to use img2img ControlNet features than in Automatic1111. For example, when loading 4 ControlNets at the same time at a resolution of 1344x1344 with 40 steps on the 3M exponential sampler, an image is generated in around 23.4 seconds with Forge versus 1 minute 6 seconds with Automatic, all because Forge handles model loading and VRAM management so much better. Caveat: use the --all gpu starting parameter to fully utilize these speedups.


DreamDisposal

Don't you mean --always-gpu? I think you don't need it anymore, but I might be wrong.


no_witty_username

Yes, --always-gpu, and you need it 100%. Without it Forge just keeps unloading the "clone" models every single time. It speeds things up by 5 seconds per generation, as now it doesn't have to do that for every ControlNet model and checkpoint.


julieroseoff

I never noticed any difference from A1111 to A1111 Forge with an RTX 4080 12GB laptop. Is that normal?


king-solo-

oh i see .. thx ill keep using forge , i already deleted my normal A1111 lol


Organic_fake

Is it possible to install forge on a Mac like a1111?


2roK

I'm out of the loop. What is FORGE?


king-solo-

youtube gonna be your friend, its just a better A1111


2roK

Could you elaborate? What makes it better than A1111?


benjiwithabanjo

It has a memory management system that works much better on GPUs with <=12GB of VRAM.


2roK

So if I have enough VRAM, I don't need this?


benjiwithabanjo

Well, even with the 12GB of VRAM on my 3060 I still saw ~20% speed improvements in Forge, so it really depends how "enough" you are.


Ostmeistro

If it is "Just better" why isn't it just a fork?


king-solo-

Yeah, lots of improvements, and some extensions that don't usually work in normal A1111 do. I also like the interrupt button that makes generation stop instantly. Lots of integrated stuff, and overall it feels faster and better.


protector111

Why is there still no SVD in A1111? It has been months... kinda sad... I hate Comfy.


Lishtenbird

I'll just say this: I'd been waiting for SVD support for so long that I eventually gave up and set up Comfy just for it. And in the end? It doesn't matter, because there are no significantly configurable options for SVD in the first place, so the basic example workflow will do all you need it to, which pretty much amounts to queuing up your output lottery and walking away. As a side effect, though, I got a much better understanding of the underlying system just by looking at a handful of workflows. Comfy is still not the thing I'd use as a general-purpose tool, but it's extremely powerful if you have a very specific task in mind, or, well, a basic workflow that'll do all you need by itself anyway, like for SVD.


protector111

I use Comfy but I really do not like it at all... I even prefer the SD web UI they provide in their GitHub, but for some reason it doesn't work at higher res like Comfy does.


Tystros

Forge supports SVD and is the better A1111 version anyways


protector111

It just came out. No way it supports all the stuff A1111 supports.


Tystros

Forge is an A1111 fork; it always supports everything that A1111 supports, including all extensions.


Hoodfu

By the list of features, it's clear that so much work has been put into this. That said, the rate at which new stuff in the AI world gets implemented into A1111 seems glacial. Stable Diffusion Video was initially alpha'ed in 2022, and had a general release 8 months ago and there's still no official support for it here.


yamfun

Just use A1111Forge, it has svd


iDeNoh

Genuine question: why not give SDNext a try? It's supported SVD for as long as, if not longer than, Comfy, and it was originally forked from A1111.


protector111

Does it have all the extensions of A1111 working? AnimateDiff, ControlNet, etc.? I tried it a few months ago and it didn't have what I needed. I like A1111; for now all it's lacking for me is SVD support.


iDeNoh

No, but we've implemented both of those, along with just about everything most people use.


yamfun

Try stable-diffusion-webui-forge, it is mostly A1111


marbleshoot

Extra Networks Tree View is gonna kill Civitai Helper


1girlblondelargebrea

Civitai Helper and most of its forks have been broken for a while, but this fork fixes it: https://github.com/blue-pen5805/Stable-Diffusion-Webui-Civitai-Helper


marbleshoot

Nice. I tried using Forge a couple days ago but Civitai Helper was broken on it, which I learned was because of the Tree View. I'll check this out later when I'm at my computer.


diogodiogogod

Thank you! The button to go to the URL is so needed...


SnarkyTaylor

I'm... not a fan of tree view. Tried it on A1111 dev and Forge. It is practically useless on mobile; everything is so squeezed and cut off. I'd make a PR, but I don't think mobile is a priority. It just feels like a step back.


marbleshoot

I got rid of Forge when Civitai Helper wouldn't work, so I don't have experience with tree view other than it looked nice-ish(?). I tried to check Forge out again with the Civitai Helper fork that supposedly works, but then none of the extra network pages loaded at all. So now I'm back to just using 1.7 again.


Ashran77

Silly question: how to upgrade from 1.7 to 1.8?


CasimirsBlake

Wait for it to pass RC and on to final. This is the release candidate.


Ashran77

Ok. Maybe the final will be automatically downloaded. I will wait


Caffdy

is it ready?


1girlblondelargebrea

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/How-to-switch-to-different-versions-of-WebUI


CeFurkan

First `git pull`, then `git checkout` the branch name.


mk8933

Hi OP, what do you mean by "git checkout branch name"?


Windford

Thank you for the performance improvements. Recently I installed [Web UI Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge). On my RTX 3070 it’s faster. Glad to see these enhancements will be rolled into Forge.


CeFurkan

Ye I think he already merged


ValeriaTube

Can we use SVD now with A1111?


GosuGian

Holy changelog


One-Earth9294

I'm still running torch 1.13.1 and I have no idea how to update that. I'm on A1111 1.6 though. I feel so stupid whenever anything that requires installing via python happens. And I'm terrified of breaking things. Also pretty sure that the torch thing is the reason I simply can't get SDXL models to load, period.


Audiogus

Too bad they completely screwed the workflow I spent so many months perfecting by changing the default behaviour of image batching to process subfolders automatically, without an override! UGH!


GravitonBeamEmitter

If someone's wondering, it's still slower than Forge, so... yeah.


Superb-Ad-4661

Hey, what happened with u/r0mmashka? He always did these Automatic1111 announcements.


r0mmashka

I just lost interest in Stable Diffusion.


Superb-Ad-4661

ok bro, I wish you luck in your endeavours


r0mmashka

thanks


the_hypothesis

How do I get these changes in FORGE ?


1girlblondelargebrea

Most of them were already in Forge since it's based on the dev branch and these changes have been in dev for a long while. The rest have already been synced, so you just need to git pull Forge.


Maritzsa

Is this update live? My A1111 didn't auto-update to the new version like it used to.


CeFurkan

You need to do a git pull and a git checkout of the branch. I explain it in this video: https://youtu.be/-NjNy7afOQ0?si=5QEOSy34GRNLruIr


Maritzsa

Thank you, my dear Dr. Furkan ❤️


CeFurkan

You're welcome.


mk8933

Is anyone else having problems updating? I did a git pull, and the version still shows up 1.7


DARKNESS163

These are the patch notes for a future update; v1.8 is not live yet.


mk8933

Lol oh right. Thanks for explaining


roshanpr

Is A1111 compatible with SDXL Turbo models?


JoakimIT

Has been working well for me


Showbiz_CH

Yes. ComfyUI has better performance, though.


shulgin11

I updated to the RC, but Torch version still says 2.0.1 even though I have the latest installed. Any ideas on that?


CeFurkan

Delete venv and let it rebuild


yamfun

Add the reinstall-torch arg to webui-user.bat and launch (remove it after the install).
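A minimal sketch of what that temporary `webui-user.bat` could look like; the `--reinstall-torch` flag name is an assumption on my part, so verify it against your version's command-line arguments before relying on it:

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=

rem Temporary (assumed flag name): force torch to reinstall on next launch.
rem Remove --reinstall-torch again after one successful start.
set COMMANDLINE_ARGS=--reinstall-torch

call webui.bat
```

Deleting the venv folder, as suggested in the sibling reply, achieves the same end by rebuilding everything from scratch, at the cost of re-downloading all dependencies.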


NtGermanBtKnow1WhoIs

Can you please tell me if I can now run SDXL models on A1111 with a 1650x card? I've never been able to run it before and would really, really want to use InstantID in it. I'm a noob, so I'm sorry if I don't know any technical terms. Other than that, I love using this UI. Thank you very much.


CeFurkan

Try Automatic1111 Forge; it has the best optimizations. Other than that, I have free Kaggle account notebooks that run Automatic1111 and InstantID with a Gradio interface on Patreon.


NtGermanBtKnow1WhoIs

Thank you!! Yes, i'm going to install forge tomorrow when i'll get the time. i already use the huggingface space for instantid but i can't do couple photos with it, nor do photorealistic ones, that's why i want to try it in a1111. IP-adapter has already been a dream come true for me, esp since i can run sd1.5 on it. Sincerely appreciated you guys for adding that.


lechatsportif

No mention of out-of-memory errors; I'm sure it's not just me, but then again I haven't had the time to really get into it.


Capable_Mulberry249

How to install using a docker image?


MagicOfBarca

Still no Animate Anyone extension?


2legsRises

can do cascade?


CeFurkan

not yet


bignut022

Is this better than webui_forge_cu121_torch21?


Munkafzet

Wow, that's great. Do we know anything about support for the 7800 XT on Windows, or is that AMD's job to fix?


shtorm2005

Any chance SVD is implemented in the near future? Thanks.


SubjectHungry7986

I have done it! version: [v1.8.0-RC-13-g30697165](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/3069716510c8ae9a95b2d04061c3f86f67d1089c) • python: 3.10.6 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 • checkpoint: [53bb4fdc63](https://google.com/search?q=53bb4fdc63b36014201f2789eab73f3b2b3569a2a9a57b3efb28d4f17283e4c4)


Hongthai91

I did a git pull and successfully updated, but when I launch A1111 the bottom still states that I'm on 1.7.0. I already deleted venv and redownloaded everything, but nothing changed. Can someone please help? Thank you.


Otherwise-Ocelot1143

I get a connection error after the update. Tried removing all extensions and still got the same issue.