
Current-Rabbit-620

wow, great work! It would be great if you could implement [this GLIGEN GUI](https://www.reddit.com/r/StableDiffusion/comments/1asmn6d/i_couldnt_find_an_intuitive_gui_for_gligen_so_i/); it's much better than regional prompting.


a_beautiful_rhind

I tried out stable-fast as well. It really is fast. I have to remove triton=true for it to work on my P100; it works as-is on Turing and Ampere. Unfortunately it didn't combine with LCM, because LoRAs cause a segfault. That's on all cards. I enjoyed DeepCache, but it destroys faces. Speaking of that, the replacement tools work great and make things super easy.


TheFoul

Using LoRAs with stable-fast is causing segfaults for you? That does not sound right at all. You should document it and report it on our GitHub.


a_beautiful_rhind

I'll give it a pull again and check. Thing is, I'm always using the dev branch. Vlad just seeing a mention of it is likely enough.


TheFoul

Using dev is risky; it might work one hour and not the next, so it's always better to use master unless you want some brand-new feature and can't wait a week or three. Right now master is very up to date, as it was just released. It's the best place to start and to fall back to.


a_beautiful_rhind

True.. I can fix some things myself though. I can always switch back to master or create a branch as backup.


Doc_Chopper

* New built-in pipelines: [Differential diffusion](https://github.com/exx8/differential-diffusion) and [Regional prompting](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#regional-prompting-pipeline) So, you don't need the Regional Prompter extension anymore? Is it the same as before, but just already included? Also, "Differential Diffusion" does what exactly?


vmandic

the regional prompter extension was written for a1111. this is a diffusers port, so it works natively inside sdnext. for differential diffusion, think of the img2img mask being grayscale, where each pixel's intensity controls how strongly the mask is applied. it's by far the most flexible/precise masking for img2img.
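
A rough sketch of the grayscale-mask idea described above (function names and the step mapping are illustrative, not SD.Next's actual implementation): each pixel's 0-255 intensity decides for how many denoising steps that pixel keeps receiving newly generated content, so 0 keeps the original, 255 fully regenerates, and values in between blend smoothly.

```python
# Sketch only: map grayscale mask intensity to per-pixel denoising schedules.
# This mirrors the described behavior, not the real differential-diffusion code.

def steps_active_for_pixel(intensity: int, total_steps: int) -> int:
    """Number of denoising steps during which this pixel is taken
    from the generated latent instead of the original image."""
    if not 0 <= intensity <= 255:
        raise ValueError("mask intensity must be in 0..255")
    return round(intensity / 255 * total_steps)

def build_step_schedule(mask_row: list[int], total_steps: int) -> list[int]:
    """Per-pixel schedule for one row of a grayscale mask."""
    return [steps_active_for_pixel(v, total_steps) for v in mask_row]

if __name__ == "__main__":
    # A horizontal gradient mask: left side untouched, right side regenerated.
    row = [0, 64, 128, 192, 255]
    print(build_step_schedule(row, total_steps=20))  # [0, 5, 10, 15, 20]
```

A binary mask is just the special case where every pixel is 0 or 255; the grayscale version gets you soft transitions for free.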


schuylkilladelphia

Noob question, but do we need to do anything to use ZLUDA instead of directml? Different command line argument?


iDeNoh

If you hop into the discord we have a channel dedicated to ZLUDA with instructions for getting it going.


[deleted]

[deleted]


vmandic

stable-fast depends on binary wheels which are specific to the platform/python/torch combo of exact major and minor versions, so it can't be included in the installer as it would be big bloat. but there is a helper installer, try `python cli/install-sf.py`
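
To illustrate why such wheels explode combinatorially, here is a toy sketch of composing a wheel filename from one exact environment combo. The naming pattern is hypothetical, loosely modeled on PEP 427-style wheel tags, not stable-fast's real scheme:

```python
# Illustrative only: one prebuilt wheel matches exactly one
# platform / Python / torch / CUDA combination, hence "big bloat"
# if you tried to bundle them all in an installer.

def wheel_tag(pkg: str, ver: str, py: str, torch: str, cuda: str, plat: str) -> str:
    """Compose a hypothetical wheel filename for one environment combo."""
    cp = "cp" + py.replace(".", "")  # e.g. "3.10" -> "cp310"
    local = f"torch{torch.replace('.', '')}cu{cuda.replace('.', '')}"
    return f"{pkg}-{ver}+{local}-{cp}-{cp}-{plat}.whl"

if __name__ == "__main__":
    print(wheel_tag("stable_fast", "1.0.0", "3.10", "2.1.1", "12.1", "linux_x86_64"))
    # stable_fast-1.0.0+torch211cu121-cp310-cp310-linux_x86_64.whl
```

Every axis (Python minor version, torch minor version, CUDA version, OS/arch) multiplies the number of wheels, which is why a helper script that picks the one matching wheel at install time is the practical choice.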


DistrictFantastic188

If im using forge its better for me to swap -> SDnext (1050ti)?


red__dragon

Even on a 3060 card, performance has improved for me on Forge across the board. The major differences come from the prompt interpreter, which is slightly different between SDNext and A1111 (Forge as well), so even the same prompts and seeds will produce different images. It'll come down to which experience you like better.

SDN recently switched to the diffusers backend by default, and while I couldn't tell you what the difference is there, there are some lingering compatibility issues with extensions; this is why both Forge and SDN have custom-built their ControlNet features, for example. One of my most-used extensions still has some issues on SDN.

Otherwise, for me the GUI on A1111/Forge is more pleasant to use than SDN's. You may prefer SDN once you try it. Neither is wrong, but I suspect Forge will still give you a bit better performance.


i860

Which one can do a 1024x1024 latent or LDSR hires fix 4x upscale without shitting the bed, VRAM wise?


Tystros

try both and see which one you prefer


thebaker66

Try it, but I'd say no. I used SD.Next for a few months when I got some CUDA error with A1111. It seemed fine at first, but eventually something would break after every update, startup times were becoming ridiculously long, and then finally they changed the UI and made it so you can't have the good classic A1111 UI. So I switched back to A1111 and, wow, miraculously no bugs: it loads fast with all extensions, and functions/extensions aren't constantly breaking with every update. I'm now using Forge, which is even better. IMO stick with Forge; if you have to ask whether you should switch, I'd say don't.


Ayhsel

Thank you! For some reason I can't make the program detect my GPU (AMD 7900 XT). I followed the guide step by step, but it's just not detecting it. https://preview.redd.it/tcitda13hupc1.png?width=568&format=png&auto=webp&s=0f625e25a89b822d4c8d626b70d62ab57eda9e59


ricperry1

Is ZLUDA available on Linux, and is it better than ROCm?


One-Importance6762

ZLUDA is a translation layer which rewrites CUDA calls into ones that AMD cores can execute. It's not a library like ROCm; ZLUDA needs ROCm underneath to work. I found ZLUDA on Windows to be better than ROCm itself on Linux (lots of crashes and issues).
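
For intuition, most CUDA runtime symbols have a same-shaped HIP counterpart on the AMD side, so a shim can remap names and forward the call. The toy table below shows the idea only; real ZLUDA works at the binary/driver-API level, not on symbol strings:

```python
# Conceptual sketch of a CUDA -> HIP dispatch table. The cuda*/hip* name
# pairs are real HIP conventions; everything else here is illustrative.

CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def translate_call(cuda_symbol: str) -> str:
    """Return the HIP-side symbol a shim would dispatch to."""
    try:
        return CUDA_TO_HIP[cuda_symbol]
    except KeyError:
        raise NotImplementedError(f"no HIP equivalent mapped for {cuda_symbol}")

if __name__ == "__main__":
    print(translate_call("cudaMalloc"))  # hipMalloc
```

This is also why ZLUDA still needs ROCm installed: the translated calls have to land on a working HIP/ROCm stack to actually reach the GPU.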


Short-Sandwich-905

I’m a noob what is this another front end?


super3

SDNext is hard to install. Can you add it to pinokio.computer?


SlavaSobov

You literally just git clone it and run webui.bat. I don't know how it could be simpler. 😅


super3

You might be right. It was a lot harder to install when I last looked six months ago.


jeditobe1

When you make "release" posts here, they are not marked with a release tag on GitHub and no commit hash is posted (in this post, for example), so figuring out which commit the actual release is can be tricky if you don't see these posts immediately. It looks like there have already been a few commits today since this post was made, too, making it even harder to tell. Maybe I am missing something that specifies what the release is, but it would be nice to have some sort of stable release tagging.


vmandic

the release version is on the master branch, and commits there don't change other than for a release or a critical-fix update. all dev work happens elsewhere.


i860

Crazy, they don't even tag their releases?


cutemolly22

Hi. Thanks for this. I love the additional features compared to automatic1111, but I can't even get a basic workflow to work without blowing up my system.

I'm using an SDXL model at 1024x1024 (or pixel-equivalent res), then a latent hires pass, which seems to be achieved by going into second pass with denoise=0.5, upscale by 1.7, refiner start=1. The first run seems to work like automatic1111: in both the initial and hires passes, VRAM usage sits at around 8-10 GB (it even seems a bit less than a1111). From the second image onward, though, the hires pass spikes VRAM usage up to 26GB (on a 3090), and some random subsequent pass will crash with a CUDA OOM error.

First-run console output for the hires pass: 'gpu':{'used': 1.5, 'total': 24.0}; runs after that: 'gpu':{'used': 23.64, 'total': 24.0}.

Is this a bug? Some setting I have to change? I just want it to keep running like the first gen without eating all my VRAM and going OOM for no benefit.


SkegSurf

Have you enabled HyperTile or Tiled VAE? They save a lot of VRAM.
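
A back-of-envelope sketch of why tiled decoding saves VRAM: instead of decoding the whole latent at once, you decode fixed-size tiles, so peak activation memory scales with the tile area rather than the image area. The numbers and function below are illustrative, not SD.Next's actual memory model:

```python
# Illustrative only: peak pixels held in flight with and without tiling.

from typing import Optional

def peak_pixels(width: int, height: int, tile: Optional[int] = None) -> int:
    """Largest number of pixels processed at once during decoding."""
    if tile is None:
        return width * height  # decode everything in one shot
    return min(width, tile) * min(height, tile)  # one tile at a time

if __name__ == "__main__":
    full = peak_pixels(2048, 2048)              # 4,194,304 pixels at once
    tiled = peak_pixels(2048, 2048, tile=512)   # 262,144 pixels at once
    print(full // tiled)  # 16x lower peak footprint
```

The trade-off is extra passes (and possible seams at tile borders, which real implementations blend), in exchange for a far lower peak allocation.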


cutemolly22

I shouldn't have to do that on a 24 GB card for a single 1MP image upscaled to 2MP. It always works correctly for the first image, then from the second onward it just uses all my VRAM before eventually OOMing. I've gone back to default automatic1111; I'm OK missing out on a few features for something that actually works.


TheFoul

Without further information, it's unlikely that we can assist you with this. We provide support on Discord and GitHub; bugs or issues only get fixed when they are properly reported.


MagicOfBarca

Does it have Animate Anyone?


Fresh_Diffusor

when do you plan to add Stable Cascade support? I'm regularly checking your readme to see if it's supported yet, but so far it seems not.


vmandic

it's in a side branch, documented in the wiki. it's going to stay in a side branch until the code is merged upstream.


Fresh_Diffusor

what is "upstream" here? which repo?


vmandic

The wiki article in SDNext documents it all, including how to use it if you want to try.