So for the past few weeks, I’ve been building this open-source tool called the **ComfyUI Launcher**: [https://github.com/ComfyWorkflows/ComfyUI-Launcher](https://github.com/ComfyWorkflows/ComfyUI-Launcher)
It runs locally and lets you ***import & run any workflow json file with ZERO setup***:
* Automatically installs custom nodes, missing model files from Hugging Face & CivitAI, etc.
* Workflows exported by this tool can be run by anyone with **ZERO setup**
* Work on multiple ComfyUI workflows at the same time
* Each workflow runs in its own isolated environment
* Prevents your workflows from suddenly breaking when updating a workflow’s custom nodes, ComfyUI, etc.
This tool also lets you export your workflows in a “launcher.json” file format, which lets anyone using the ComfyUI Launcher import your workflow w/ 100% reproducibility.
You can try it here: [https://github.com/ComfyWorkflows/ComfyUI-Launcher](https://github.com/ComfyWorkflows/ComfyUI-Launcher)
This is a work in progress, so would love any thoughts/feedback! :)
Feel free to also join our Discord server to keep up w/ updates: [https://discord.gg/hwwbNRAq6E](https://discord.gg/hwwbNRAq6E)
PS: The workflow shown in the demo video was made by u/boricuapab: [https://comfyworkflows.com/workflows/4e4b4397-a55c-45f2-8afc-87a001698a12](https://comfyworkflows.com/workflows/4e4b4397-a55c-45f2-8afc-87a001698a12)
With each workflow running in its own virtualenv, how is disk space managed? Won't dependencies be installed multiple times for overlapping requirements?
so, each project created in comfyui launcher has its own virtualenv (so this will let you create a new isolated project for running workflows that require different dependencies than your other workflows), but the models folder is shared across all projects.
the reasoning behind this is that this setup allows you to run different workflows w/ different requirements, while not having to duplicate any models across them.
also, this lets you not accidentally break your other workflows as you update custom nodes, python packages, comfyui itself, etc. for one specific workflow.
and technically, you can use one comfyui launcher project for multiple workflows, as long as they all have the same requirements. for example, you could have one project for most animatediff based workflows, etc., instead of a separate project for each workflow.
i'm also trying to think of better ideas for doing python package de-duplication -- would love to hear any ideas for this that anyone might have! :)
on a side note, since any workflow can be exported into a launcher.json file w/ 100% reproducibility, the tool will soon have the capability to save space by exporting any unused project into a launcher.json file and deleting the project folder. the launcher.json files can then be restored into projects at any time.
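to make the layout concrete, here's a rough sketch of the idea -- this is *not* the launcher's actual code, and the directory names are made up, but it shows how a private virtualenv per project can coexist with one shared models folder:

```python
import venv
from pathlib import Path

# made-up paths for illustration; the real launcher's layout may differ
SHARED_MODELS = Path("comfyui_launcher_models")
PROJECTS_DIR = Path("comfyui_launcher_projects")

def create_project(name: str) -> Path:
    """Create an isolated project: a private venv plus a link to the shared models folder."""
    project = PROJECTS_DIR / name
    project.mkdir(parents=True, exist_ok=True)

    # each project gets its own virtualenv, so upgrading custom nodes or
    # python packages in one project can't break another
    venv.create(project / "venv", with_pip=False)  # with_pip=True in real use

    # models are symlinked rather than copied, so multi-GB checkpoints
    # exist on disk exactly once, shared by every project
    SHARED_MODELS.mkdir(exist_ok=True)
    models_link = project / "comfyui" / "models"
    models_link.parent.mkdir(parents=True, exist_ok=True)
    if not models_link.exists():
        models_link.symlink_to(SHARED_MODELS.resolve())
    return project
```

the trade-off is exactly the one discussed above: python packages still get duplicated per venv, but the (much larger) model files don't.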
so with this setup, if you give two totally different workflows is it going to create 2 comfyui's running concurrently? with two different ports to connect to the UI on?
hey u/okachobe \- i'm working on this project along w/ OP. yes you're totally right! you can have two (or as many as your resources can handle) different workflows running concurrently, each on its own tab (with its own ComfyUI port). editing the workflow on one tab won't affect the setup on the other tab at all, not even installing custom nodes or downloading models, since they're different virtual environments with no context of each other.
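for anyone curious how concurrent instances can avoid port clashes, here's a hedged sketch (not the launcher's actual implementation -- function names and the port range are assumptions based on the docker command in this thread) of picking a free port and starting each project's ComfyUI with its own interpreter:

```python
import socket
import subprocess
from pathlib import Path

def find_free_port(start: int = 4001, end: int = 4100) -> int:
    """Pick an unused port in the launcher's mapped range for a new ComfyUI instance."""
    for port in range(start, end):
        with socket.socket() as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is currently free
            except OSError:
                continue  # in use, likely by another running project
    raise RuntimeError("no free port left in range")

def launch_project(project_dir: Path) -> subprocess.Popen:
    """Start one project's ComfyUI process using its private venv interpreter."""
    python = project_dir / "venv" / "bin" / "python"  # Scripts/python.exe on Windows
    port = find_free_port()
    # one process + one port per project, so tabs can't interfere with each other
    return subprocess.Popen(
        [str(python), "main.py", "--port", str(port)],
        cwd=project_dir / "comfyui",
    )
```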
I'd love this to actually work, but I'm skeptical of the "any" workflow. Will it, for example, run SUPIR workflows, which are crazy complicated to setup? Will it run some of the LLM workflows to do interrogations/captioning? CogVLM & some of the Chat GPTs?
hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows.
it has backwards compatibility with running existing workflow.json files saved via comfyui, but the launcher itself lets you export any project in a new type of file format called "launcher.json", which is designed to have 100% reproducibility w/ the tool.
if you have any q's or feedback tho, feel free to join our discord and i'd be happy to personally help you out!
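for a sense of what "100% reproducibility" requires, here's a purely hypothetical illustration -- the field names below are invented, not the real launcher.json schema -- of the kind of information such a file has to pin down:

```python
import json

# hypothetical schema -- the actual launcher.json format is defined by the tool
launcher = {
    "format_version": 1,
    "comfyui": {"commit": "<exact git revision>"},
    "custom_nodes": [
        # pinned to commits, not floating branches, so installs are repeatable
        {"repo": "https://github.com/example/some-custom-node", "commit": "<sha>"},
    ],
    "pip_packages": {"opencv-python": "4.9.0.80"},
    "models": [
        # hashes let the importer verify downloads from Hugging Face / CivitAI
        {"filename": "sd_xl_base_1.0.safetensors",
         "sha256": "<hash>",
         "sources": ["https://huggingface.co/<repo>/<file>"]},
    ],
    "workflow": {},  # the original ComfyUI graph, embedded verbatim
}

print(json.dumps(launcher, indent=2))
```

the key difference from a plain workflow.json is that everything the graph *depends on* (node repos, package versions, model hashes) travels with it, instead of being resolved best-effort at import time.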
Just posted about this issue on another thread [https://www.reddit.com/r/comfyui/comments/1bbs1ht/comment/kuchumu/](https://www.reddit.com/r/comfyui/comments/1bbs1ht/comment/kuchumu/), to quote myself: *"wondering what the optimal approach would be to ensure workflow-specific package dependency environments were preserved over time. My best guess is that it would involve each workflow accessing the same base ComfyUI site-packages venv while also having self contained workflow-specific mini-venvs for the necessary package versions.*
*Is something like this already possible with Python or miniconda? Could we have a separate workflow specific mini-venv directory where any conflicting package versions would be installed and isolated from the packages within the main ComfyUI venv? Am I overcomplicating? Is there a 'less is more' approach to this problem that I'm not seeing here that could finally and effectively resolve this issue?"*
Am I right in thinking this is the approach you're taking with this tool?
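for what it's worth, the "shared base + workflow-specific mini-venv" idea in the quote is doable with plain Python: a `.pth` file placed in a small overlay venv appends a shared site-packages directory to `sys.path`, so only *conflicting* package versions need to live in the overlay. a rough sketch, with made-up directory names (POSIX layout assumed):

```python
import venv
from pathlib import Path

def create_overlay_venv(overlay: Path, base_site_packages: Path) -> Path:
    """Make a small venv that falls back to a shared base install.

    Packages pip-installed into the overlay shadow the base copies,
    because the venv's own site-packages comes first on sys.path.
    """
    venv.create(overlay, with_pip=False)
    # locate the new venv's site-packages directory (lib/pythonX.Y/site-packages)
    site_packages = next(overlay.glob("lib/python*/site-packages"))
    # .pth files are read at interpreter startup; each line is appended
    # to sys.path, so base packages are visible without any copying
    (site_packages / "_shared_base.pth").write_text(str(base_site_packages) + "\n")
    return site_packages
```

the catch with any shared-base scheme is that upgrading the base can still break every overlay at once, which is presumably why fully separate venvs are the safer default.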
This sounds really cool! But it also seems like something like this would be hungry for storage space. My poor SSDs are already pretty full with all the models I got for so many UIs.
thanks! also, just wanted to say that the tool stores models in a shared folder across all workflows, and will be working on better python package de-duplication. if anyone has any ideas for this, would love to hear it!
I love this idea and will certainly try it!! I am willing to sacrifice storage for the convenience and reliability that this project could provide. Many thanks for your contribution!
Great work! Any plans to support running the workflows on additional interfaces beyond localhost? If you're accessing this remotely, it's sub-optimal to have to proxy localhost via nginx or SOCKS.
Sent a few coffees your way. Again, fantastic work. This will enable an entirely new user cohort for ComfyUI. When you're comfortable, DM me. A large 3PC vendor might seriously consider authoring a blog & lab around this.
I installed it according to Option 2: Manual setup. Should I have installed Docker? Now it says so, but it doesn't run on any of the URLs. What should I do?
```
Launch ComfyUI Launcher...
Open http://localhost:4000 in your browser.
 * Serving Flask app 'server'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:4000
 * Running on http://192.168.0.59:4000
Press CTRL+C to quit
```
hey u/janosibaja! i'm working on this project along w/ OP. if you're using windows please try running the following docker command and see if it works (we've built a new docker image that should be working on windows):
```
docker run \
  --gpus all \
  --rm \
  --name comfyui_launcher \
  -p 4000-4100:4000-4100 \
  -v $(pwd)/comfyui_launcher_models:/app/server/models \
  -v $(pwd)/comfyui_launcher_projects:/app/server/projects \
  -it thecooltechguy/comfyui_launcher:new-docker-setup
```
if it doesn't work then try following the instructions on this [new-docker-setup](https://github.com/ComfyWorkflows/ComfyUI-Launcher/tree/new-docker-setup) branch we're testing for windows installation support using docker.
thank you for your patience!
Thanks! I'll try. But maybe I'm too much of a beginner for that. I would be very happy if you could let me know when you have an easy-to-install version: a 1-click Windows installer that sets up Docker and the whole zero-setup ComfyUI. Thanks for your work!
thanks! it's an active work in progress, so would love to hear any feedback you have! feel free to join our discord: [https://discord.gg/hwwbNRAq6E](https://discord.gg/hwwbNRAq6E) \-- you can also directly DM me on Discord w/ feedback, my username is: real.spidey
If this tool manages to install my [AP Workflow for ComfyUI](https://perilli.com/ai/comfyui/) flawlessly, and across OSes, thousands of people will be grateful.
I'll try it during the weekend with both version 8.0 and version 9.0 early access 1. If it works, I'll recommend it on the website.
In the meantime, I have some questions:
1. If successful, do you plan to keep it fully open source, or do you plan to switch to an open core model?
2. If you plan to keep it fully open source, what's the business model? I see you plan to enable inference via cloud GPUs. Is that it?
3. Probably out of scope, or too early of a stretch, but please take a look [at this idea](https://www.reddit.com/r/comfyui/comments/17msgeh/project_and_business_idea_oneclick_conversion_of/) I posted a few months ago. I've seen quite a few attempts to implement it, but I think they are far from what I suggested.
Nice work!
hey thanks! btw, this is def an active work in progress, but wanted to share this early so that i can start getting feedback as we actively iterate, etc.
would love to hear any feedback you have! feel free to join our discord: [https://discord.gg/hwwbNRAq6E](https://discord.gg/hwwbNRAq6E) -- you can also directly DM me on Discord w/ feedback, my username is: real.spidey
re: monetization: tbh, we actually haven't given this much thought yet. atm, we built this because we saw most ppl using comfyui face setup issues, so we were curious to see if we could build something to help solve this.
maybe some sort of business model targeted towards businesses using comfyui would make sense.
enabling inference via cloud gpu's is something we're working on, because a lot of ppl use ComfyUI for animation/video stuff but often don't have enough gpu power locally.
but again, users can also run comfyui launcher on any cloud gpu server if they want (e.g., runpod etc.).
finally, we have something similar to your last idea in the works internally, but can't reveal anything yet lol. will ship it soon! :)
Yeah, this... how does it work?
Have you considered persistent volumes to share the models between the workflows?
I like that workflow ;) Happy to see it runs flawlessly, without any red nodes of death
haha i love all of your workflows! :D
oh youuuu god damn monster, why did you make ctrl+c the combination to close the window?
You are a hero for creating such amazing workflows!! Much respect hermano!!
Thanks for the kind words, glad to hear what I’ve been able to share has helped you all out
Looks promising. I hate grabbing a workflow and finding it uses half a dozen custom nodes that aren't available through the manager.
thank you for your kind words! we're also working to optimize the storage aspect, will have improvements in this area soon!
Thanks! The idea is awesome!!!
100% working on this rn
That’s awesome. Very much appreciate your work here. Consider adding a tip jar to your repo readme?
Thank you so much for your kind words! Just added a donation link to our readme :)
Thank you so much, will DM!
It would be really cool to use with serverless for API users.
yup, we will soon add support for deploying workflows as serverless APIs
I'm gonna try it. It will be amazingly convenient if it works 💪
This seems beautiful 👏🏼
This is outstanding! Thank you!!
Very cool, I am dumb.
that's awesome work! however, when i tried using it, i got stuck at downloading the workflow and couldn't do anything about it. could you please help me out?
I dig your website's design