I can tell you that ComfyUI renders 1024x1024 in SDXL at faster speeds than A1111 does with hires fix 2x (for SD 1.5). If you're using ComfyUI you can right-click on a Load Image node and select "Open in MaskEditor" to draw an inpainting mask. T2I-Adapter is an efficient plug-and-play model that provides extra guidance to pre-trained text-to-image models while freezing the original large text-to-image models. I recommend you do not use the same text encoders as 1.5. Adds 'Reload Node (ttN)' to the node right-click context menu. SDXL VAE. I miss my fast 1.5. Those are two different models. Download the ComfyUI SDXL Node script. sdxl-0.9-usage: this repo is a tutorial intended to help beginners use the newly released model stable-diffusion-xl-0.9, and Stable Diffusion 1.5. ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. It is actually (in my opinion) the best working pixel-art LoRA you can get for free! Just some faces still have issues.

import os

source_folder_path = '/content/ComfyUI/output'  # replace with the actual path to the folder in the runtime environment
destination_folder_path = f'/content/drive/MyDrive/{output_folder_name}'  # replace with the desired destination path in your Google Drive; assumes output_folder_name was set earlier in the notebook
# Create the destination folder in Google Drive if it doesn't exist
os.makedirs(destination_folder_path, exist_ok=True)

I wonder if it would be possible to train an unconditional refiner that works on RGB images directly instead of latent images. For example, see this: SDXL Base + SD 1.5 Refiner. Works amazingly. With SDXL I often have the most accurate results with ancestral samplers. With resolution 1080x720 and specific samplers/schedulers, I managed to get a good balance and good image quality; the first image with the base model alone is not very high quality. The issue with the refiner is simply Stability's OpenCLIP model. The 0_comfyui_colab notebook will open. With Vlad releasing hopefully tomorrow, I'll just wait on the SD.Next release. 20:57 How to use LoRAs with SDXL.
Stable Diffusion TensorRT installation tutorial: watch it and save the price of a graphics card! Fooocus complete edition 2. ComfyUI is a powerful modular graphic interface for Stable Diffusion models that allows you to create complex workflows using nodes. (I am unable to upload the full-sized image.) Use at your own risk. My bet is that both models being loaded at the same time on 8GB VRAM causes this problem. For me it's just very inconsistent. For upscaling your images: some workflows don't include them, other workflows require them. Having previously covered how to use SDXL with StableDiffusionWebUI and ComfyUI, let's now explore SDXL 1.0. SDXL 0.9 tutorial (better than Midjourney AI): Stability AI recently released SDXL 0.9, with usable demo interfaces for ComfyUI to use the models (see below)! After testing, it is also useful on SDXL 1.0. For example: 896x1152 or 1536x640 are good resolutions. The test was done in ComfyUI with a fairly simple workflow to not overcomplicate things. u/Entrypointjip: Discover the Ultimate Workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools. Everything works great except for LCM + AnimateDiff Loader. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. These are examples demonstrating how to do img2img. I've switched from A1111 to ComfyUI for SDXL; a 1024x1024 base + refiner takes around 2m. Stable Diffusion XL comes with a Base model / checkpoint plus a Refiner. The refiner, though, is only good at refining noise from an original image still left over from creation, and will give you a blurry result if you try. I also have a 3070; the base model generation is always at about 1-1.5. AnimateDiff for ComfyUI. A 6.6B parameter refiner model, making it one of the largest open image generators today.
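The resolutions mentioned above (896x1152, 1536x640) share a property: both sides are divisible by 64 and the pixel count stays near 1024x1024. As a rough illustration (the helper function and the divisible-by-64 snapping are my own sketch, not part of any official tool), you can derive such a resolution from an aspect ratio:

```python
def sdxl_resolution(aspect_ratio, target_pixels=1024 * 1024, multiple=64):
    """Return (width, height) near target_pixels with both sides divisible by `multiple`."""
    # Ideal width for the requested aspect ratio at the target pixel count.
    ideal_w = (target_pixels * aspect_ratio) ** 0.5
    # Snap both sides to the nearest multiple of 64.
    width = max(multiple, round(ideal_w / multiple) * multiple)
    height = max(multiple, round(ideal_w / aspect_ratio / multiple) * multiple)
    return width, height

print(sdxl_resolution(1.0))        # (1024, 1024)
print(sdxl_resolution(896 / 1152)) # (896, 1152), a portrait ratio
```

Note that because of the snapping, extreme aspect ratios will drift slightly from the exact target pixel count.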
Mostly it is corrupted if your non-refiner works fine. Installation: make the following changes. In the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0. So I have optimized the UI for SDXL by removing the refiner model. Start with something simple, but where it will be obvious that it's working. See section 2.5 of the report on SDXL. SDXL works "fine" with just the base model, taking around 2m30s to create a 1024x1024 image. He puts out marvelous ComfyUI stuff, but with a paid Patreon and YouTube plan. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted and the sampling continuity is broken. And I'm running the dev branch with the latest updates. Run the update .bat to update and/or install all of your needed dependencies. set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention. I describe my idea in one of the posts, and Apprehensive_Sky892 showed me it's already working in ComfyUI. 5-38 secs for SDXL 1.0. Some of the added features include: how to use SDXL 0.9. I feel like we are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve. This was the base for my workflow. I've created these images using ComfyUI. Yet another week and new tools have come out, so one must play and experiment with them. A 3.5B parameter base model and a 6.6B parameter refiner. Always use the latest version of the workflow JSON file with the latest version of the custom nodes! Yes, it's normal; don't use the refiner with a LoRA. For some workflow examples and to see what ComfyUI can do, you can check out: ComfyUI Examples. Installing ComfyUI. Features. SDXL 1.0 with refiner. But suddenly the SDXL model got leaked, so no more sleep. Model description: this is a model that can be used to generate and modify images based on text prompts.
The Prompt Group in the top-left contains the Prompt and Negative Prompt as String Nodes, each connected to the Base and Refiner samplers respectively. The Image Size controls in the middle-left set the image dimensions; 1024 x 1024 is correct. The Checkpoints in the bottom-left are the SDXL base, SDXL Refiner, and VAE. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far showing the difference between the preliminary, base, and refiner setup. Search for "post processing" and you will find these custom nodes; click on Install and, when prompted, close the browser and restart ComfyUI. The SDXL 1.0 base model. This aligns the node(s) to the set ComfyUI grid spacing size and moves the node in the direction of the arrow key by the grid spacing value. 0.9, the latest Stable Diffusion model. (Save the output as .latent to avoid this.) Do the opposite and disable the nodes for the base model and enable the refiner model nodes. I recommend trying to keep the same fractional relationship, so 13/7 should keep it good. It is totally ready for use with SDXL base and refiner built into txt2img and img2img batch. This is a simple preset for using the SDXL base with the SDXL refiner model and correct SDXL text encoders. Download both from CivitAI and move them to your ComfyUI/models/checkpoints folder. The base model seems to be tuned to start from nothing, then to get an image. Drag the SD 1.5 refiner tutorial workflow into your ComfyUI browser and the workflow is loaded. There are two ways to use the refiner. But it separates LoRA into another workflow (and it's not based on SDXL either). Well, SDXL has a refiner; I'm sure you're asking right about now: how do we get that implemented? Although SDXL works fine without the refiner (as demonstrated above), you really do need to use the refiner model to get the full benefit. By default, AP Workflow 6.0 is configured to generate images with the SDXL 1.0 base model. With Tiled VAE (I'm using the one that comes with the multidiffusion-upscaler extension) on, you should be able to generate 1920x1080 with the base model, both in txt2img and img2img.
SDXL 0.9: the refiner has been trained to denoise small noise levels of high-quality data and as such is not expected to work as a text-to-image model. 5 min read. SDXL 1.0 ComfyUI workflow with nodes, using the SDXL base & refiner models. In this tutorial, join me as we dive into the fascinating world. 20:43 How to use the SDXL refiner as the base model. The base 0.9 works fine, but when I try to add in the stable-diffusion-xl-refiner-0.9 for the refiner it fails. This is pretty new, so there might be better ways to do this; however, this works well, and we can stack LoRA and LyCORIS easily, then generate our text prompt at 1024x1024 and allow Remacri to double it. Thanks for your work; I'm well into A1111 but new to ComfyUI. Is there any chance you will create an img2img workflow? This notebook is open with private outputs. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. I am using SDXL + refiner with a 3070 8GB. After about three minutes a Cloudflare link appears, and the model and VAE downloads finish. The refiner, though, is only good at refining noise still left over from creation in an original image, and will give you a blurry result if you try to add more. In subpack_nodes. The refiner is entirely optional and could be used equally well to refine images from sources other than the SDXL base model. So set the GPU and run the cell. So in this workflow each of them will run on your input image. 0.9 safetensors + LoRA workflow + refiner. ComfyUI got attention recently because the developer works for StabilityAI and was able to be the first to get SDXL running. ComfyUI is a new user interface. It fully supports the latest Stable Diffusion models including SDXL 1.0.
The two-model setup that SDXL uses has the base model good at generating original images from 100% noise, and the refiner good at adding detail to an already mostly denoised image. By default the workflow is configured to generate images with the SDXL 1.0 base model. You can type in text tokens, but it won't work as well. Reload ComfyUI. They compare the results of the Automatic1111 web UI and ComfyUI for SDXL, highlighting the benefits of the former. What's new in 3.x? The SDXL Prompt Styler is a versatile custom node within ComfyUI that streamlines the prompt styling process. Hotshot-XL is a motion module used with SDXL that can make amazing animations. SDXL 1.0 ComfyUI workflows from beginner to advanced. x for ComfyUI. SDXL 1.0 introduces denoising_start and denoising_end options, giving you more control over the denoising process. (…this workflow, or any other upcoming tool support for that matter) using the prompt? Is this just a keyword appended to the prompt? You can use any SDXL checkpoint model for the Base and Refiner models. Run update-v3. Usually, on the first run (just after the model was loaded) the refiner takes longer. The refiner seems to consume quite a lot of VRAM. SDXL 1.0 workflow. SDXL uses natural language prompts. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. SDXL includes a refiner model specialized in denoising low-noise-stage images to generate higher-quality images from the base model. Navigate to your installation folder. 0.9 with updated checkpoints, nothing fancy, no upscales, just straight refining from latent. SDXL 1.0 (26 July 2023)! Time to test it out using a no-code GUI called ComfyUI! I've been working with connectors in 3D programs for shader creation, and the sheer (unnecessary) complexity of the networks you could (mistakenly) create for marginal (i.e. useless) gains still haunts me to this day. It takes around 18-20 sec for me using xformers and A1111 with a 3070 8GB and 16 GB RAM. The refiner_v1.0 published on the site below. 24:47 Where is the ComfyUI support channel? Aug 20, 2023. Hello FollowFox Community!
Welcome to part of the ComfyUI series, where we started from an empty canvas, and step by step, we are building up. I've had some success using SDXL base as my initial image generator and then going entirely 1.5 after that. SDXL 1.0 workflow. Detailed install instructions can be found here: link to the readme file on GitHub. It might come in handy as a reference. In this quick episode we do a simple workflow where we upload an image into our SDXL graph inside of ComfyUI and add additional noise to produce an altered image. Re-download the latest version of the VAE and put it in your models/vae folder. SDXL has 2 text encoders on its base, and a specialty text encoder on its refiner. I don't want it to get to the point where people are just making models that are designed around looking good at displaying faces. The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9. The solution to that is ComfyUI, which could be viewed as a programming method as much as it is a front end. Take your SD 1.5 comfy JSON and import it; the sd_1-5_to_sdxl_1-0.json file is easily loadable into the ComfyUI environment. The base model was trained on the full range of denoising strengths while the refiner was specialized on "high-quality, high resolution data" and denoising at low noise levels. ComfyUI seems to work with the stable-diffusion-xl-base-0.9. It is a Latent Diffusion Model that uses a pretrained text encoder (OpenCLIP-ViT/G). "0.9"? What is the model and where do I get it? The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Updated Searge-SDXL workflows for ComfyUI, Workflows v1.x. There are several options on how you can use the SDXL model: how to install SDXL 1.0. The SDXL 1.0 base model used in conjunction with the SDXL 1.0 refiner.
While the normal text encoders are not "bad", you can get better results using the special encoders. According to the official documentation, SDXL needs the base and refiner models used together to achieve the best effect, and the best tool supporting this kind of multi-model chaining is ComfyUI. The most widely used WebUI (the Qiuye one-click package is based on WebUI) can only load one model at a time; to achieve the same effect, you first generate with the base model via txt2img, then run the result through the refiner model via img2img. The .py script downloaded the YOLO models for person, hand, and face. A detailed look at the stable SDXL ComfyUI workflow, the internal AI art tool I use at Stability: next, we need to load our SDXL base model (change its color). Once our base model is loaded, we also need to load a refiner, but we will deal with that later, no rush. In addition, we need to do some processing on the CLIP output from SDXL. SDXL generations work so much better in ComfyUI than in Automatic1111, because it supports using the Base and Refiner models together in the initial generation. Maybe all of this doesn't matter, but I like equations. The goal is to build up knowledge, understanding of this tool, and intuition on SDXL pipelines. This workflow uses similar concepts to my iterative one, with multi-model image generation consistent with the official approach for SDXL 0.9. StabilityAI have released Control-LoRA for SDXL, which are low-rank-parameter fine-tuned ControlNets for SDXL. Get the .safetensors files, including sd_xl_refiner_1.0.safetensors. Lecture 18: How to Use Stable Diffusion, SDXL, ControlNet, and LoRAs for free without a GPU, on Kaggle, like Google Colab. If you do. Hi there. SDXL 1.0 checkpoint models beyond the base and refiner stages. Per the announcement, SDXL 1.0: when you define the total number of diffusion steps you want the system to perform, the workflow will automatically allocate a certain number of those steps to each model, according to the refiner_start parameter.
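The step allocation described here is simple arithmetic. A minimal sketch (the function name is mine; refiner_start is assumed to be the fraction of total steps handled by the base model, as in AP Workflow's setting):

```python
def split_steps(total_steps, refiner_start):
    """Split a diffusion run: the base model handles the first `base_steps`
    steps, the refiner handles the remainder."""
    base_steps = int(total_steps * refiner_start)
    refiner_steps = total_steps - base_steps
    return base_steps, refiner_steps

base, refiner = split_steps(30, 0.8)
print(base, refiner)  # 24 6
```

So with 30 total steps and refiner_start at 0.8, the base model runs 24 steps and the refiner finishes the remaining 6.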
I've been trying to use the SDXL refiner, both in my own workflows and in ones I've copied from others. After an entire weekend reviewing the material… You could add a latent upscale in the middle of the process, then an image downscale after. Sytan SDXL ComfyUI. 🧨 Diffusers. This uses more steps, has less coherence, and also skips several important factors in between. So I want to place the latent hires-fix upscale before the… It isn't a script, but a workflow (which is generally in .json format). 🧨 Diffusers: here's the guide to running SDXL with ComfyUI. Hello! A lot has changed since I first announced ComfyUI-CoreMLSuite. Width: 896; Height: 1152; CFG Scale: 7; Steps: 30; Sampler: DPM++ 2M Karras; Prompt: as above. SDXL 1.0 with both the base and refiner checkpoints. Natural language prompts. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. 34 seconds (4m). Step 6: Using the SDXL Refiner. Step 1: Install ComfyUI. SD 1.5 models. If you haven't installed it yet, you can find it here. SDXL aspect ratio selection. My research organization received access to SDXL. Colab Notebook ⚡. Fine-tuned SDXL (or just the SDXL Base): all images are generated just with the SDXL Base model or a fine-tuned SDXL model that requires no Refiner. This GUI provides a highly customizable, node-based interface, allowing users to intuitively place the building blocks of Stable Diffusion. GTM ComfyUI workflows including SDXL and SD1.5. I hope someone finds it useful. But if I run the Base model (creating some images with it) without activating that extension, or simply forget to select the Refiner model, and LATER activate it, it very likely gets OOM (out of memory) when generating images. Then this is the tutorial you were looking for.
I mean, it's also possible to use it like that, but the proper intended way to use the refiner is a two-step text-to-image process. I've successfully downloaded the 2 main files, the safetensors and then sdxl_base_pruned_no-ema. My advice: have a go and try it out with ComfyUI. It's unsupported, but it's likely to be the first UI that works with SDXL when it fully drops on the 18th. If you have the SDXL 1.0 models… It supports SDXL and the SDXL Refiner. The SD 1.5 model works as a refiner. A detailed description can be found on the project repository site, here: GitHub link. Restart ComfyUI. x for ComfyUI; Table of Contents; Version 4.x. The result is a hybrid SDXL+SD1.5. The field of artificial intelligence has witnessed remarkable advancements in recent years, and one area that continues to impress is text-to-image generation. SDXL 1.0 settings. You will need a powerful Nvidia GPU or Google Colab to generate pictures with ComfyUI. SDXL-OneClick-ComfyUI (SDXL 1.0) is configured to generate images with the SDXL 1.0 base model. No, for ComfyUI: it isn't made specifically for SDXL. sdxl-0.9. Click "Load" in ComfyUI and select the SDXL-ULTIMATE-WORKFLOW. Basically, it starts generating the image with the Base model and finishes it off with the Refiner model. You can use the base model by itself, but for additional detail you should move to the second model. Img2Img. Step 3: Download the SDXL control models. I think this is the best balanced one I've found. It is not AnimateDiff but a different structure entirely; however, Kosinkadink, who makes the AnimateDiff ComfyUI nodes, got it working, and I worked with one of the creators to figure out the right settings to get it to give good outputs. At a 0.2 noise value it changed quite a bit of the face.
Model type: diffusion-based text-to-image generative model. So if ComfyUI / A1111 sd-webui can't read the image metadata, open the last image in a text editor to read the details. In the case where you want to generate an image in 30 steps. SDXL 1.0 almost makes it. Here are the configuration settings for the SDXL run. Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. The safetensors file and sd_xl_base_0.9. This is an answer that someone corrected. There is no such thing as an SD 1.5 refiner. Got playing with SDXL and wow! It's as good as they say. If… Stability is proud to announce the release of SDXL 1.0. Reduce the denoise ratio to something like … Click Load and select the JSON script you just downloaded. The refiner refines the image, making an existing image better. None of them works. There is an SDXL 0.9… Hires fix will act as a refiner that will still use the LoRA. I did extensive testing and found that at 13/7, the base does the heavy lifting on the low-frequency information and the refiner handles the high-frequency information, and neither of them interferes with the other's specialty. sdxl_v1.0. Yes, there would need to be separate LoRAs trained for the base and refiner models. WAS Node Suite. SEGSDetailer performs detailed work on SEGS without pasting it back onto the original image. 0.51 denoising. +Use the SDXL Refiner as Img2Img and feed it your pictures. For me it has been tough, but I see the absolute power of node-based generation (and efficiency). SDXL 1.0. @bmc-synth: you can use the base and/or refiner to further process any kind of image if you go through img2img (out of latent space) with proper denoising control. Think of the quality of 1.5. (Introduced 11/10/23.) Locate this file, then follow the following path: Is there an explanation for how to use the refiner in ComfyUI?
You can just use someone else's workflow for 0.9. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise (especially with SDXL, which can work in plenty of aspect ratios). Hi all! As per this thread, it was identified that the VAE on release had an issue that could cause artifacts in the fine details of images. I'm not sure if it will be helpful to your particular use case, because it uses SDXL programmatically, and it sounds like you might be using ComfyUI? Not totally sure. I'm going to discuss… 11:29 ComfyUI-generated base and refiner images. After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I've ended up with this basic (no upscaling) 2-stage (base + refiner) workflow. It works pretty well for me: I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. Launch as usual and wait for it to install updates. Go to img2img, choose batch, pick the refiner in the dropdown, and use the folder from step 1 as input and the folder from step 2 as output. I tried with two checkpoint combinations but got the same results: sd_xl_base_0.9… In this tutorial, you will learn how to create your first AI image using the Stable Diffusion ComfyUI tool. Use the SDXL 1.0 base and have lots of fun with it. AI art, ComfyUI, Stable Diffusion. FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), Edit DetailerPipe (SDXL): these are pipe functions used in Detailer for utilizing the refiner model of SDXL. Workflow 1, Complejo, for Base+Refiner and Upscaling. I can't emphasize that enough.
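The batch pass described above (refine every image from one folder into another) can also be planned programmatically. A sketch, with a hypothetical function name, that pairs each input image with its output path before handing the list to whatever img2img runner you use:

```python
import os

def plan_batch_refine(input_dir, output_dir, exts=(".png", ".jpg")):
    """Pair each image in input_dir with a destination path in output_dir.

    Returns a list of (source_path, destination_path) tuples; the actual
    refiner img2img call on each pair is left to the caller.
    """
    os.makedirs(output_dir, exist_ok=True)
    pairs = []
    for name in sorted(os.listdir(input_dir)):
        if name.lower().endswith(exts):
            pairs.append((os.path.join(input_dir, name),
                          os.path.join(output_dir, name)))
    return pairs
```

This keeps non-image files (logs, JSON workflow dumps) out of the batch and guarantees the output folder exists before the run starts.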
Efficient Controllable Generation for SDXL with T2I-Adapters. Study this workflow and the notes to understand it. ⚠️ IMPORTANT: due to shifts in priorities and a decreased interest in this project from my end, this repository will no longer receive updates or maintenance. Do you have ComfyUI Manager? We are releasing two new diffusion models for research purposes: SDXL-base-0.9 and SDXL-refiner-0.9. Must be the architecture. It provides a super convenient UI and smart features like saving workflow metadata in the resulting PNG images. Searge-SDXL: EVOLVED v4.x. Currently, a beta version is out, which you can find info about at AnimateDiff. The CLIP Text Encode SDXL (Advanced) node provides the same settings as its non-SDXL version. The following images can be loaded in ComfyUI to get the full workflow. conda activate automatic. When I run them through the 4x_NMKD-Siax_200k upscaler, for example, the… Fooocus uses its own advanced k-diffusion sampling that ensures a seamless, native, and continuous swap in a refiner setup. An example workflow can be loaded by downloading the image and drag-dropping it onto the ComfyUI home page. Two drives (1TB+2TB); it has an NVIDIA RTX 3060 with only 6GB of VRAM and a Ryzen 7 6800HS CPU. You can run 1.x models through the SDXL refiner, for whatever that's worth! Use LoRAs, TIs, etc. in the style of SDXL, and see what more you can do. Most UIs require… (version 1.0 or later is required). If you haven't updated in a while, get the update done now. Generate a bunch of txt2img images using the base. FWIW, the latest ComfyUI does launch and renders some images with SDXL on my EC2. You can assign the first 20 steps to the base model and delegate the remaining steps to the refiner model. Click "Manager" in ComfyUI, then 'Install missing custom nodes'.
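In ComfyUI, the split described above ("first 20 steps to the base, the rest to the refiner") is typically wired with two advanced samplers sharing one step count: the base stops early and returns leftover noise, and the refiner continues from that latent without adding new noise. A sketch of the relevant settings as plain data (the keys mirror ComfyUI's KSamplerAdvanced node inputs; the specific numbers are illustrative, not prescriptive):

```python
TOTAL_STEPS = 25
HANDOFF = 20  # base handles steps 0-19, refiner handles steps 20-24

base_sampler = {
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": HANDOFF,
    "add_noise": "enable",                   # base starts from pure noise
    "return_with_leftover_noise": "enable",  # pass the partially denoised latent on
}
refiner_sampler = {
    "steps": TOTAL_STEPS,
    "start_at_step": HANDOFF,
    "end_at_step": TOTAL_STEPS,
    "add_noise": "disable",                  # continue from the base model's latent
    "return_with_leftover_noise": "disable", # fully denoise to the final image
}
```

The important invariant is that the base's end_at_step equals the refiner's start_at_step, so no step is skipped or run twice.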
0.9 in ComfyUI (I would prefer to use A1111): I'm running an RTX 2060 6GB VRAM laptop, and it takes about 6-8m for a 1080x1080 image with 20 base steps & 15 refiner steps. Edit: I'm using Olivio's first setup (no upscaler). Edit: after the first run I get a 1080x1080 image (including the refining) in "Prompt executed in 240 seconds". Upscaling ComfyUI workflow. 10:05 Starting to compare the Automatic1111 Web UI with ComfyUI for SDXL. All sorts of fine-grained SDXL generation can be handled with this kind of node-based setup. I'm also interested in the AnimateDiff videos that 852話 generated, but explanations of things like the node differences from Automatic1111 are also appearing, and I'm starting to feel I have to use this. 1. Save the image and drop it into ComfyUI. The initial image goes in the Load Image node. Here's a simple workflow in ComfyUI to do this with basic latent upscaling. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. Switch (image,mask), Switch (latent), Switch (SEGS): among multiple inputs, it selects the input designated by the selector and outputs it. I can run SDXL 1024 on ComfyUI with a 2070/8GB more smoothly than I could run 1.5. The workflow is a .json file which is easily loadable into the ComfyUI environment. Study this workflow and the notes to understand the basics of ComfyUI, SDXL, and the refiner workflow. 4. You can get the ComfyUI workflow here. SDXL 0.9: the base model was trained on a variety of aspect ratios on images with resolution 1024^2. The loss of detail from upscaling is made up later with the fine-tuner and refiner sampling. Selector to change the split behavior of the negative prompt. Table of Contents; Searge-SDXL: EVOLVED v4.x. Special thanks to @WinstonWoof and @Danamir for their contributions! SDXL Prompt Styler: minor changes to output names and printed log prompt. Part 4 (this post): we will install custom nodes and build out workflows. Custom nodes and workflows for SDXL in ComfyUI. SDXL 1.0 base and refiner models with AUTOMATIC1111's Stable Diffusion WebUI. 15:49 How to disable the refiner or nodes of ComfyUI.
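For the latent-upscale step mentioned above, the upscaled dimensions should keep the latent grid integral (SDXL latents are 1/8 of the image resolution). A small helper for computing snapped target sizes (my own sketch, not a ComfyUI API; ComfyUI's own upscale nodes handle this internally):

```python
def latent_upscale_size(width, height, factor, latent_stride=8):
    """Scale image dims by `factor`, snapped so width/height stay multiples
    of the latent stride (8 pixels per latent cell for SDXL)."""
    def snap(v):
        return max(latent_stride, round(v * factor / latent_stride) * latent_stride)
    return snap(width), snap(height)

print(latent_upscale_size(1024, 1024, 1.5))  # (1536, 1536)
```

For clean factors like 1.5x or 2x on standard SDXL sizes the snapping is a no-op; it only matters for odd factors or unusual input sizes.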
It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something. When trying to execute, it refers to the missing file "sd_xl_refiner_0.9.safetensors". Upcoming features: … This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0.