Image size in ComfyUI: tips from Reddit
Welcome to the unofficial ComfyUI subreddit.

The hard part is knowing when the image is ready to be retrieved, and then actually getting the image.

How to resize your images: the 1024px rule. See if you can get the image size to be used for the empty latent's (converted) height and width.

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor. Enjoy a comfortable and intuitive painting app.

In an effort to generate images faster on my potato PC… This YouTube video should help answer your questions.

If I were to make some type of custom node, or modify the core node, to allow a larger latent image size, would that break the whole process? Is there some larger reason that 8192 is the hard limit?

Batch index counts from 0 and is used to select a target in your batched images. Length defines the amount of images after the target to send ahead.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything).

…you won't get obvious seams or strange lines.

However, my goal is to recreate the exact same image, with the exact same position of the body. I understand that the DPM++ 2M sampler can do this; at least in Auto1111 it repeats the same image every time.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. How do I do the same with ComfyUI?

It's based on the wonderful example from Sytan, but I un-collapsed it and removed the upscaling to make it very simple to understand.

To then view the generated images, click on View History and go through your generations by loading them.
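For anyone scripting "knowing when the image is ready": the usual approach against the default ComfyUI HTTP API (`POST /prompt`, then poll `GET /history/<id>`, then fetch files via `GET /view`) looks roughly like the sketch below. The server address and helper names are assumptions, not something taken from the posts above.

```python
import json
import time
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address (assumption)

def queue_prompt(workflow: dict) -> str:
    """Submit a workflow (API-format JSON) and return its prompt_id."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{SERVER}/prompt", data=data)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def extract_image_refs(history_entry: dict) -> list:
    """Pull every saved image reference out of one /history entry."""
    refs = []
    for node_output in history_entry.get("outputs", {}).values():
        refs.extend(node_output.get("images", []))
    return refs

def wait_for_images(prompt_id: str, poll_seconds: float = 1.0) -> list:
    """Poll /history until the prompt finishes, then download via /view."""
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:  # the entry appears once execution is done
            break
        time.sleep(poll_seconds)
    images = []
    for ref in extract_image_refs(history[prompt_id]):
        query = urllib.parse.urlencode(ref)  # filename, subfolder, type
        with urllib.request.urlopen(f"{SERVER}/view?{query}") as resp:
            images.append(resp.read())
    return images
```

The key observation is that `/history/<id>` returns an empty object until the queue entry has executed, so its appearance doubles as the "image is ready" signal.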
Automatic1111 would let you pick the final image size no matter what, and give you options for crop, just resize, and so on.

The first branch has Txt to Image and then Image to SDVID with the new SD video models that came out.

In the process, we also discuss the SDXL architecture.

During my img2img experiments with 3072x3072 images, I noticed a quality drop using Hypertile with standard settings (tile size 256, swap size 2, max depth 0). In this case, the image from Comfy has some extra glitches.

You can just plug the width and height from Get Image Size directly into the nodes where you need them, too. I do that a lot.

If we want to change the image size of our ComfyUI Stable Diffusion image generator, we have to type the width and height.

So I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, `3`, and `4`. I think the intended workflow here is to just press the Queue Prompt button several times.

I have managed to push it down to 3 steps with some nifty tricks I found. The demo images aren't curated; all images just use the seed "3" with a basic prompt, so this is really useful for experimenting. The option has been around for a long time in other UIs like Automatic1111 and Visions of Chaos.

I have a workflow I use fairly often where I convert or upscale images using ControlNet. I can obviously pick a size when doing Text2Image, but when prompting off an existing image my final image will always just be the same size as the inspiration image.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create. I have a ComfyUI workflow that produces great results.
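The "plug Get Image Size into your latent node" tip is just arithmetic underneath: SD-family VAEs work at 1/8 of the pixel resolution, so widths and heights should be multiples of 8. A minimal sketch (the function name is hypothetical):

```python
def latent_dims(width: int, height: int, multiple: int = 8):
    """Snap an image size to the model's stride and return both the
    pixel dims and the latent dims (pixels / 8 for SD-style VAEs)."""
    # round to the nearest multiple of 8 (assumption: SD-style VAE stride)
    w = max(multiple, round(width / multiple) * multiple)
    h = max(multiple, round(height / multiple) * multiple)
    return w, h, w // multiple, h // multiple
```

Feeding an odd size like 1022x513 through this gives 1024x512 in pixel space and a 128x64 latent, which is what the Empty Latent Image node would allocate.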
The one that is shown in the "post view" is a "preview JPEG" (even though it looks as if it is full size), which does not have the metadata.

Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

A basic description of a couple of ways to resize your photos or images so that they will work in ComfyUI.

.comfy-multiline-input { font-size: 10px; }

ComfyShop has been introduced to the ComfyI2I family.

You will need to launch ComfyUI with this option each time, so modify your bat file or launch script.

If you just want to see the size of an image, you can open it in a separate tab of your browser and look up top to find the resolution.

Here, you can also set the batch size, which is how many images you generate in each run.

This way it's an end-to-end txt-to-animation. Input your batched latent and VAE.

Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) and no workflow metadata will be saved in any image. You probably still want an Exif viewer/remover/cleaner to double-check images, since you haven't been using this setting and presumably have prior work to sanitize of metadata.

Stable Diffusion 1.5 is trained on 512 x 512 images.

This will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL.

In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head and getting any generative system to actually replicate it takes a considerable amount of skill and effort.
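Several of these tips hinge on PNG metadata: ComfyUI writes its prompt/workflow JSON into PNG tEXt chunks, which is exactly what a re-encoded preview JPEG loses. A small stdlib-only sketch for checking whether an image still carries that data (the helper name is made up):

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Return all tEXt chunks in a PNG as {keyword: value}.
    ComfyUI stores its metadata under keys such as "prompt" and "workflow"."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword and value separated by a NUL byte
            key, _, value = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out
```

If the returned dict is empty, the file has been stripped (or was never a ComfyUI output) and there is no workflow to recover.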
Please share your tips, tricks, and workflows for using this software to create your AI art. Also, if this is new and exciting to you, feel free to post. A lot of people are just discovering this technology and want to show off what they created; belittling their efforts will get you banned. Please keep posted images SFW.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions.

I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well.

First we calculate the ratios, or we use a text file where we…

This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output.

No, you don't erase the image. A transparent PNG in the original size, with only the newly inpainted part, will be generated.

I have tried to push the sampling step count down as low as possible.

A bit of an obtuse take.

I want to upscale my image with a model, and then select its final size.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

I would like to know if that is due to some reason other than that images that large take a long time.

ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after this node in your workflow.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image.
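The crop-inpaint-paste flow described above boils down to finding a padded bounding box around the mask and keeping the crop size a multiple of 8 for the sampler. A rough sketch, using a plain nested-list mask for illustration (the helper name and padding default are assumptions, not Masquerade's API):

```python
def crop_region(mask, pad: int = 32, multiple: int = 8):
    """Bounding box (x0, y0, x1, y1) of the nonzero mask area, padded and
    grown so the crop dimensions stay multiples of 8."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not ys:
        raise ValueError("empty mask")
    h, w = len(mask), len(mask[0])
    x0, y0 = max(0, min(xs) - pad), max(0, min(ys) - pad)
    x1, y1 = min(w, max(xs) + 1 + pad), min(h, max(ys) + 1 + pad)
    # grow to the next multiple-of-8 size without leaving the image
    x1 = min(w, x0 + -(-(x1 - x0) // multiple) * multiple)
    y1 = min(h, y0 + -(-(y1 - y0) // multiple) * multiple)
    return x0, y0, x1, y1
```

Inpainting only this region and pasting it back is what keeps large-image inpainting fast: the sampler never sees the full-resolution canvas.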
It is not a problem with the seed, because I tried different seeds.

Probably not what you want, but the preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow. Works great.

Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

Save the new image.

I know I can run the img-to-vid portion with a 512 x 512 input image, but I'm struggling to downscale the image by 2.

The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface. - comfyanonymous/ComfyUI

It's solvable. I've been working on a workflow for this for like two weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting; it's a challenging problem to solve. Unless you really want to use this process, my advice would be to generate the subject smaller and then crop in and upscale instead. So I can't give a simple answer, but I'd say if you're still interested and need some help we can join a Discord call or something and I can help.

Howdy! I'm not too advanced with ComfyUI for SD generation yet, but I've made a lot of progress thanks to your help.

Layer copy & paste this PNG on top of the original in your go-to image editing software.

You can't enter a latent image size larger than 8192. Is there a way to pull this off within ComfyUI?

As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image.

When I do the same in Automatic1111, I get completely different people and different compositions for every image. I started with ComfyUI 3 days ago.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion.
It animates 16 frames and uses the looping context options to make a video that loops. The denoise on the video-generation KSampler is at 0.8 so that some of the structure of the original generated image is retained.

Copy that into user.css (under /* Put custom styles here */) and change the font-size to something higher than 10px, and you should see a difference.

Or add the Image Gallery extension.

You set the height and the width to change the image size in pixel space. So, if you want to change the size of the image, you change the size of the latent image.

I'm instead going to try to work around it by downscaling the size of the image.

A mask adds a layer to the image that tells ComfyUI what area of the image to apply the prompt to.

I first get the prompt working as a list of the basic contents of your image.

I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes.

e.g. batch index 2, length 2 would send images number 3 and 4 to the preview in this example. This can pretty much be scaled to whatever batch size by repetition.

New users of civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view".

I have a workflow that is basically two user branches.

(Using SD webUI before.) I am getting a blurry image when using the "Realities Edge XL ⊢ ⋅ LCM+SDXLTurbo" model in ComfyUI. I got the same issue in SD webUI, but after using sdxl-vae-fp16-fix the images were good; when I try to use the same to fix this issue, it is not working.

Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

ComfyUI Artist Inpainting Tutorial - YouTube

Want 10 images? Click that button till the queue size is 10 (or select Extra options and put 10 in Batch count).
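The batch index / length rule described above is an ordinary slice. A tiny sketch (the helper name is hypothetical):

```python
def send_ahead(images: list, batch_index: int, length: int) -> list:
    """Select `length` images starting at `batch_index` (counted from 0),
    mirroring the batch-index/length behaviour described above."""
    if batch_index < 0 or batch_index >= len(images):
        raise IndexError("batch_index outside the batch")
    return images[batch_index:batch_index + length]
```

With a batch of four, batch index 2 and length 2 yields the third and fourth images, matching the example in the text.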
The only way I can think of is to use Upscale Image (using Model) with 4x-UltraSharp to get my image to 4096, and then downscale with nearest-exact back to 1500.
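Assuming a 4x model, the final downscale factor for that chain is straightforward to compute; a sketch with hypothetical numbers (1024px source, 1500px target):

```python
def upscale_then_downscale(width: int, model_scale: int, target: int):
    """Width after a model upscale, and the factor needed to bring it
    back down to the target size (e.g. 4x to 4096, then back to 1500)."""
    upscaled = width * model_scale
    return upscaled, target / upscaled
```

So a 1024px image becomes 4096px after the 4x model, and the nearest-exact downscale then applies a factor of roughly 0.366 to land on 1500px.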