ComfyUI face workflows: a compilation of Reddit threads
There should be an image batch or count node, or something in the Impact Pack, that will let you put in batches of images to go through the face detailer one by one automatically. And while there are other solutions being created, there aren't any nodes crafted to work with them yet.

Any guidance appreciated! Thanks, Fred.

ComfyUI usually caches results from previous executions, so if you tinker with some "later" nodes down the line it won't re-execute an entire workflow every single time. You can actually see this effect when you re-execute the workflow and see node border highlights quickly pass through "earlier" nodes that didn't change.

In your workflow, HandsRefiner works as a detailer for properly generated hands; it is not a "fixer" for wrong anatomy. I say this because I have the same workflow myself (unless you are trying to connect some depth ControlNet to that detailer node).

Ahhh, so mine has been working without using SAM, but it works on the full box around the face, including the head.

For general upscaling of photos go: remacri 4x upscale.

File "C:\Users\fredl\Documents\ComfyUI-Port\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute

It seems I may have made a mistake in my setup, as the results for the faces after ADetailer are not good. Check the number of denoising steps in your face detailer. This distortion is likely a result of image resolution and quality issues. Or you can just use IPAdapter.

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones.
Hello all, it turns out that, while generating photos with the ReActor node, some turn out fine and some turn out extremely blurry.

It depends on how large the face in your original composition is. Any face that is larger than 128p will be fuxxored. Maybe the source image you're using is of low quality. This can happen when the reference face is very zoomed in.

I think I remember seeing a ComfyUI module that allows you to supply 1 to N facial images to steer face generation in txt2img. Are we there yet? Newbie here.

Add the ControlNet picture to the corresponding image loader.

So I made a workflow to generate multiple fix options using the amazing Impact Pack, and then to choose and paste the best one into the original picture. For example, to add skin detail like freckles, moles, pores, whatever, without hosing a face swap.

I use ComfyUI to help me improve my line drawing.

Check the console.

Select "Install missing nodes" in the extensions manager (install the manager beforehand if needed). Download the 9-faces openpose picture from the Auto1111 workflow.

The amazing MeshGraphormer understands the correct depth map for hands.

EDIT: updated the photo using the workflow linked below, which downscales the image before upscaling; it seems to help reduce the oversharpening effect and is also easier on lower-VRAM cards.

Keep the denoise low to minimize the changes it makes to the image, but tweak it to ensure it is cleaning up the face-swap artifacts / low res.

Hi all, having finally taken the plunge and moved from A1111 to Comfy, I'm loving it: so much quicker and more flexible than A1111.

If it's a close-up, then fix the face first.
I bet it says something like "No face detected".

Get a good quality headshot, square format, showing just the face.

The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. I attached the ComfyUI flow. Optionally add a face picture to the IPAdapter image loader (otherwise right-click the "Apply IPAdapter" node). Nothing else changed.

Expanding on my temporal consistency method for a 30-second, 2048x4096-pixel total override animation.

Like you can't put an SD1.5 model in there.

In the last issue, we introduced how to use ComfyUI to generate an app logo, and in this issue we are going to explain how to use ComfyUI for face swapping. The first method is to use the ReActor plugin, and the results achieved with this method would look something like this. Setting up the workflow is straightforward.

You probably want to set it very low (0.1-0.3).

If there are female faces in the background that you want to ignore, then …

Had this problem the other day too when I updated the Impact Pack; try placing a new FaceDetailer node, and if that doesn't work I recommend just reinstalling the Impact Pack (that's what I had to do).

It's called FaceDetailer in ComfyUI, but you'd have to add a few dozen extra nodes to get all the functionality of the ADetailer extension. Although ComfyUI is great for other stuff.

I recommend using 512x512 square here.

But using FaceDetailer is easier.

I draw a jumping pose, and the AI filled in the detail.

The result is that I get a portrait of the character in a "closeup" manner.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.
For an anime look, I suggest inpainting the face afterward, but you want to experiment with the denoise level.

That typically resulted in a cutoff head, annoying head positions, etc.

Resize down to what you want.

Anyway, we've recently been messing around with ComfyUI and it's really cool.

Upscale and then fix will work better here. The result being the same character in different scenes, poses, etc.

In FaceDetailer, select skin instead of face/body/hands.

With a lower resolution, if it's a full-body shot, the amount of information available to represent complex facial features becomes extremely limited, which can result in such issues in Stable Diffusion.

Ah, that might be it.

So in this workflow each of them will run on your input image.

And if you don't know what ControlNet is, look it up on YouTube first.

Use the latent/pixel image and feed it to the face swapper. If you didn't know, they are different from diffusion checkpoints. You could also use ReActor to simply swap in a face you like.

There's a node called GroundingDINO Segment Anything (or something very similar) that lets you do masking by prompt, so you could prompt for "female face" and it will mask all female faces detected.

Render the face in a KSampler, using the mask from step 2.

The documentation has basically nothing on it, but that is the field you want to increase.

This is the first time I've seen a face/hand ADetailer in a ComfyUI workflow.

The higher the weight of your IP-Adapter, the more it is going to try to keep the result as close to that face as possible.

I've managed to mimic some of the extension's features in my Comfy workflow, but if anyone knows of a more robust copycat approach to get extra ADetailer options working in ComfyUI, I'd love to see it.

Plus, put a lot of weight on your prompt with maybe "looking to the right", "profile picture", etc.
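The "upscale 4x then resize down" recipe above is easy to get wrong by eye, so here is a hedged sketch of just the size arithmetic (the helper name is hypothetical; the 4x factor stands in for a model like remacri):

```python
def upscale_plan(width, height, target_short_side):
    """Plan the 'upscale 4x with a model, then resize down' recipe: return the
    4x intermediate size and the final size whose short side equals
    target_short_side, preserving aspect ratio."""
    up_w, up_h = width * 4, height * 4
    scale = target_short_side / min(up_w, up_h)
    final = (round(up_w * scale), round(up_h * scale))
    return (up_w, up_h), final
```

For example, a 512x768 render goes through a 2048x3072 intermediate before being resized down to 1024x1536 for a 1024px short side.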
ComfyUI runs in a browser; try running it full screen or hide those bars from the browser settings menu.

If not, go into settings and see if the option to "remember" (or lock) the manager menu is on. If it is, turn it off.

I'm using ComfyUI with Stable Cascade and I would like to fix only the eyes during the process.

Right-click on the node and click "convert force_inpaint to widget".

To best explain: generate a made-up character through a prompt.

I separate the refiner into two different areas: the face, and the area outside the face.

The tool attempts to detail every face, which significantly slows down the process and compromises the quality of the results.

I draw a pose, then I use AI to fill it in.

Face swapping with the ReActor node: some faces look terribly blurry. I also tried using the FaceDetailer custom node, but I'm not sure I can make it behave the way I want.

I'm pretty happy with my workflow, but when I swap faces I lose so much detail (like a brushed face).

Step 1: Generate some face images, or find an existing one to use.

It is made for AnimateDiff.

I wanted a workflow that is clean, easy to understand, and fast.

The images generally depict individuals, and my main objective is to change the color of their clothing.

Updated 2nd attempt: SUPIR.

PLANET OF THE APES - Stable Diffusion Temporal Consistency.

GFPGAN.

Side-by-side comparison with the original.

Problem with face consistency & ControlNet together.

I would love to see if I can do a complete head swap with that same concept! I do it using ComfyUI with ReActor, plus LoadVideo and SaveVideo from the N-Suite plugin, and a standard Load Image for the face to insert.
Back to your question: do a VAE Encode with the image that resulted from your Roop swap and run it through a low-denoise KSampler.

Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. Notice that the Face Swapper can work in conjunction with the Upscaler.

The SUPIR upscaler is incredible for keeping the coherence of a face.

However, the one thing I cannot for the life of me get working properly is face swap. I wonder if any of you happens to know the reason for this.

Ideally it would resize the overlay to fit the rough scale of the mask, avoid the eyes, etc.

The 128 model always seems soft and grainy IMO.

I'm looking for a good img2img full-body workflow that can also take the pose and add an existing face over the AI one.

You can also just search for "Face Restore" through the ComfyUI Manager and you'll find it there as well.

Well, I can get it working fine (I'm using ReActor but have tried vanilla Roop), but the faces are just all pretty low-res.

ComfyUI is also trivial to extend with custom nodes. Workflows are much more easily reproducible and versionable.

Gourieff/comfyui-reactor-node: Fast and Simple Face Swap Extension Node for ComfyUI (github.com). Nice having the option, thanks.

ReActor is infinitely better if you properly condition the face before the swap. You can swap faces in individual images or all images in a directory.

Been using the face swap for a while; all of a sudden the node disappeared. Turns out it's failing to import. Anyone else had this particular issue? Totally confused; I have tried reinstalling it, both from GitHub and via the node manager.

I always run in a window.

Can you show us a screenshot? Keep the gen with a different seed and lower the strength, to see if you get lucky and get the one that looks the way you want.
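The VAE Encode plus low-denoise KSampler cleanup described above can be expressed as a ComfyUI API-format graph (nodes keyed by id, inputs referencing other nodes as `[node_id, output_index]`). This is a hedged sketch: the checkpoint filename and prompt texts are placeholders, and the sampler settings are illustrative.

```python
import json

def low_denoise_cleanup_graph(image_name, denoise=0.3, steps=20, seed=0):
    """Build a minimal ComfyUI API-format graph: load the face-swapped image,
    VAE-encode it, and re-sample it with a low denoise to clean up artifacts."""
    graph = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder filename
        "2": {"class_type": "LoadImage", "inputs": {"image": image_name}},
        "3": {"class_type": "VAEEncode",
              "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
        "4": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "photo of a person, detailed face", "clip": ["1", 1]}},
        "5": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "blurry, artifacts", "clip": ["1", 1]}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                         "latent_image": ["3", 0], "seed": seed, "steps": steps,
                         "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal",
                         "denoise": denoise}},
        "7": {"class_type": "VAEDecode",
              "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
        "8": {"class_type": "SaveImage",
              "inputs": {"images": ["7", 0], "filename_prefix": "cleaned"}},
    }
    return json.dumps({"prompt": graph})
```

The resulting JSON can be POSTed to a running ComfyUI server's `/prompt` endpoint; the key point is the KSampler's `denoise` staying low so only swap artifacts get repainted.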
This is my first post on Reddit, please do not judge too strictly.

I need the face to have ControlNet as well, so it keeps the same expression.

I have found that this example produces the best results without messing with the background. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model.

It isn't always the most intuitive process in ComfyUI, but once you get used to the nodes you need, it's fairly straightforward.

The mask is only applied on the clothes and not on the face.

Use it as the image input and off you go.

Learn how to use native InstantID, a new feature of ComfyUI that lets you create realistic faces from any ID photo.

Blow it up with a resize upscale, then run a separate KSampler. You can apply LoRAs to this KSampler.

Also, there's a skin texture node (forgot the name); it works really well for face details.

Face shape and geometry matter a lot if you want good results.

Passed through face detailer and finally upscale.

I'm trying to create a workflow that automatically detects and sharpens the faces of a picture.

He gives you links to the Python 3.10 and 3.11 downloads.

Off the top of my head: render the first-pass image, then use IterativeLatentUpscale to double the size.

The issue you're facing is basically what made me try ComfyUI in the first place.

Run the WebUI.

Detail only largest face? (ComfyUI-Impact-Pack): I'm utilizing the Detailer (SEGS) from the ComfyUI-Impact-Pack and am encountering a challenge in crowded scenes.

Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop a sample image into ComfyUI. Step 6: The FUN begins!
If the queue didn't start automatically, press Queue Prompt.

Sharpening faces.

Forget face swap.

Or, if you use ControlNetApplyAdvanced, which has inputs and outputs for both positive and negative conditioning, feed both the +ve and -ve CLIPTextEncode nodes into the +ve and -ve inputs of ControlNetApplyAdvanced.

The AI-generated source image can be loaded statically (for example, a female warrior).

You'll find them here: mav-rik/facerestore_cf: ComfyUI custom node that supports face restore models and supports the CodeFormer Fidelity parameter (github.com).

Is there anything wrong with my settings, or anything I could do to prevent blurry images?

Add a face detailer node to your character with the nodes in the Impact Pack.

I use an IPAdapter to inject my usual model checkpoint with a certain likeness I want it to emulate during face detailing; this works fairly well.

Then decode and save.

Am I crazy, or does such a module exist? NerdyRodent did a video on this. Not a module like ReActor or Roop, where you replace the face after generation, but almost like training a LoRA to generate an image with the face.

VID2VID_AnimateDiff.

Is there any other simpler way?

I had fun making this workflow; the goal was to use the refiner and upscaler without really changing the face too much.

ReActor.

Works great unless dicks get in the way ;+} Absolutely.

The Detailer from the Impact Pack makes multiple fix options.

Hi, newbie here.

Or change the picture that you feed into the IPAdapter to one that looks to the side you want.
I use FaceDetailer with a lightning model using the recommended settings: 4 steps, low CFG, and it works fine.

You should be able to drag it anywhere.

CrunchBangPlusPlus (or #!++) is an effort to continue the #! environment.

If you use a rectangular image, the IP-Adapter preprocessor will crop it from the center to a square, so you may get a cropped-off face.

The lower the weight you set, the more freedom the diffusion has to add lion/dog/cat/moose etc. features to the face.

It will depend on the size of the image you pass in as to what value it should be, but generally the drop size should be 10% of the size of the image passed in (on the short side), to avoid fixing background/minor faces.

So it sounds like if I add SAM, then it would do only the face.

Input image from SDXL + a square source image through the ReActor node (works with the 128x128 limitation), or IPAdapter (my goal, because of the versatility and fewer limitations, like the 128x128 from ReActor). Using the workflow with ReActor will give airbrushed looks, because the resolution of ReActor is 128p.

If you want some face likeness, try detailing the face using the Impact Pack, but use the old mmdet model, because the new Ultralytics model is realistic.

You can save face models as "safetensors" files (stored in ComfyUI\models\reactor\faces) and load them into ReActor, implementing different scenarios and keeping super-lightweight face models of the faces you use.

But it is easy to modify it for SVD or even SDXL Turbo.

ReActor's Build Face Model is really good! I wish they had larger face swap model files.

I think the ultimate workflow involves inpainting the full head/hair using FaceIDv2 for likeness (you can use multiple input images here), then doing a ReActor swap to get a very accurate face (while sacrificing details), then inpainting with FaceIDv2 again with enough …
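The "drop size = 10% of the short side" rule of thumb above is just arithmetic; a hedged sketch (helper name is mine, not an Impact Pack API):

```python
def detailer_drop_size(width, height, fraction=0.10):
    """Suggested threshold below which detected faces are skipped: roughly 10%
    of the image's short side, so tiny background faces are left alone."""
    return max(1, round(min(width, height) * fraction))
```

So a 1024x1536 image would skip any detected face smaller than about 102 pixels.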
Hello! I'm working with the Impact FaceDetailer nodes but can't seem to make them detect large faces, only medium and small faces. I have tried setting the guide size and max size to 1024, but it still will only select medium/small faces.

The Model output from your final Apply IPAdapter should connect …

4) Then you can cut out the face and redo it with IP-Adapter.

I've tried playing around with the ImageSharpen node, but I failed to apply it to faces only.

Upload your desired face image in this ControlNet tab.

I set up a workflow for a first pass and a high-res pass.

The t-shirt and face were created separately with the method and recombined.

The model you linked is for SD2, though.

Anyone else had this particular hiccup?

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

I wanted to be able to generate a simple square image of a well-framed head and face that could be fed without modification to the ReActor face swap node. I found the normal shot-control prompts (medium closeup, straight on, looking at viewer, etc.) didn't work well.
Experimental Functions: help with a FaceDetailer workflow (SDXL).

As far as I know, CodeFormer and GFPGAN are the only two face restore models available at this point.

I want to generate avatar images of people having animal features, preferably using SD1.5 and IP-Adapter FaceID.

Swap the face, then pass it through restore.

ComfyUI is much better suited for studio use than other GUIs available now.

Download the model and put it in your controlnet folder.

When I use FaceDetailer at the same steps, CFG, etc., I get mottled and CFG-burn-looking results.

I'm sure that someone knows of such a workflow.

This goes on for quite a few lines, and then at the end, the below. (I've been going through Google and Bing but not making much progress.)

For the face detailer, wire in separate positive and negative prompts that use the same text you used to generate the face.

Invert the mask from step 2 (making it the background).

If you have a reference, you may use ReActor, or even both.

Any suggestions would be appreciated.

As another person stated, quality is determined by inswapper, but the results still look okay-ish after face restore.

I managed to get the face to be 80-90% the same in the output.

Drag and drop on your running ComfyUI.

It's not about what you generated from; it's because the resolution you generated at is different.

1st SUPIR attempt: way too sharp.

Use an openpose preprocessor with face support.

sharpen (radius 1, sigma 0.5-1)

Maintaining faces.

Roop isn't that good for anything other than a face with a closed mouth; teeth are usually out of the question.

Or I get the position from the ControlNet, without the correct, consistent face that I load.

What can help is if you edit the reference image and put some very thick white borders around it.

Get the Segment Anything custom node pack if you don't already have it.

I have two workflows to check different results.
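The mask-inversion step mentioned above (render the face with a mask, then invert that same mask for the background pass) is a one-liner on an 8-bit mask. A hedged sketch using plain nested lists rather than ComfyUI's tensor masks:

```python
def invert_mask(mask):
    """Invert an 8-bit mask so the region that selected the face (255) now
    selects the background, letting one mask drive both passes."""
    return [[255 - v for v in row] for row in mask]
```

In an actual workflow the same effect comes from an InvertMask-style node; the arithmetic is identical.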
VID2VID_AnimateDiff + HiRes Fix + Face Detailer + Hand Detailer + Upscaler + Mask Editor.

Help on face detailing in a ComfyUI workflow (adding a LoRA after a face swap): Hi guys, I used A1111 for some time and now I'm switching to ComfyUI (portable). Anyone have any recommendations or pre-existing workflows?

AP Workflow 6.0 for ComfyUI, now with support for SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision. [x-post]

And I mean low (0.25-0.3 MAX).

I love that, and I'm trying to find a good workflow for my image creation + face swapping.

Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt.

TXT2VID_AnimateDiff.

Use IP-Adapter for the face.

Consider using the FaceDetailer node and hooking up your LoRA to the model used for face detailing only.

Wire an IP-Adapter into the face detailer's KSampler.

In researching inpainting with SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: the base model with Latent Noise Mask, the base model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face.

ClipTextEncode (positive) -> ControlnetApply -> Use Everywhere. But that requires it not to use the same ControlNet as the one used with the KSampler, because otherwise a strange face gets made on top of the existing one.

However, I've noticed that in some instances there is distortion in the facial features.

For example: on a beach wearing swimmers, in the city wearing a coat, in the park wearing a suit, but all with the same body build.

Like you would with any ControlNet models.

Produces great results.

I'm leaning towards using the new face models in IPAdapter Plus.

If you're using an SDXL model, definitely use the Add Details LoRA.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

I gave up on face swapping using ComfyUI; the results are always weird or inaccurate, or it's just me; anyway.

SDXL struggles with faces: all faces are deformed or look like stylized Hollywood stars. While the general style, the depth between objects or people, and the overall composition and tone have significantly improved with SDXL, I have noticed that faces, particularly those of males, seem to have deteriorated in quality.
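On the ControlNet wiring above: the thread earlier notes that ControlNetApplyAdvanced takes both positive and negative conditioning, unlike the simple ControlNetApply. A hedged API-format sketch of that node fragment (input names are as I understand stock ComfyUI; the `[node_id, output_index]` pairs are placeholders):

```python
def controlnet_apply_advanced(positive, negative, control_net, image, strength=1.0):
    """API-format fragment for ControlNetApplyAdvanced, which routes BOTH the
    positive and negative conditioning through the ControlNet. Each argument is
    a [node_id, output_index] reference to another node in the graph."""
    return {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": positive, "negative": negative,
            "control_net": control_net, "image": image,
            "strength": strength,
            "start_percent": 0.0, "end_percent": 1.0,
        },
    }
```

Both outputs of this node then feed the KSampler's positive and negative inputs, which is why a second, separate ControlNet is needed if you also want one on the face detailer.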
Thanks for the help!

I'm leaving aside the fact that the inswapper128 model (which is used in Roop) was designed for swapping quite small 128x128 faces; everything bigger is the work of an upscaler (GAN, CodeFormer, etc.).