Fooocus 2.5.0: Enhance #3281
Replies: 11 comments 15 replies
-
Appreciate the updates and your hard work on maintaining Fooocus! Enhance works really well. Can you verify that for Pony models you still have to prompt the score? E.g. score_9, score_8_up, score_7_up, score_6_up. From my tests so far it looks like you still have to. Just want to verify.
-
May God bless you in his holy glory, thank you for your updates to my favorite tool.
-
Can I use Fooocus 2.5.0 Enhance in Google Colab? If yes, then how?
-
This is great! Thank you! I played around and it's very cool. One thing I am wondering: if someone is wearing a t-shirt, how can I make them wear a suit and tie? I tried a couple of things, but it seems confined to the bounds of the original clothing. Is there a way to segment this so that it uses more area beyond the bounds of the clothing? I tried "Below the neck and body", "Body", etc.
-
Hi, many thanks for the new release and the hard work. I am currently experimenting with Enhance on an uploaded image (in the related section). It works very well with the face, but fails when trying to enhance the body or the torso; that is, it also inpaints/enhances the face when I don't request it explicitly in the enhance prompt. Is there a way to avoid this behavior? Am I doing any wrong steps? Thanks for your feedback.
-
Thank you so much for the work. I see a 'translate prompt' option as well, but didn't find it after git pull. Was that discarded?
-
Thank you for your hard work! Great product!
-
Thank you, it does work well. I observe a certain behaviour: when inpainting and then enhancing with Upscale or Variant, it seems that the same seed is used for all the enhance Upscale/Variant steps. Meaning that when generating 2 inpaints, the 2 Upscale/Variant results have the same features outside the masked area (which underwent heavy changes). I'd have expected the unmasked area to get a different variant from the upscale factor, but they get the very same result. Is this expected?
-
This is amazing. Just finished a course that pointed me towards Fooocus. Keep up the great work!
-
I'm new here and I'm using Fooocus via Pinokio. How do I run mashb1t's fork with Pinokio?
-
I'm new here and I'm using Fooocus. Can you tell me how to use the API?
-
Fooocus 2.5.0: Enhance
This release includes a feature requested multiple times by the community (e.g. in #3113, #3089, #3039, and a few more, also see #3122). Implementation by @mashb1t in v2.5.0 (fork) / mashb1t#42 and now available in main.
What does this feature do?
Enhance allows you to automatically upscale and/or improve parts of the picture based on either a prompt or an input image.
It is comparable to ADetailer (repository), but offers better and more flexible object detection and replacement, using detection and replacement prompts instead of static detection models (each of which is ~140 MB).
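As a hedged illustration of that prompt-driven idea, the sketch below shows the shape of "detection prompt → masks → replacement prompt". All names and structures here are invented for this example; this is not the actual Fooocus API.

```python
# Illustrative sketch only -- NOT the real Fooocus internals.

def detect_masks(image, detection_prompt):
    # Stand-in for GroundingDINO + SAM: return regions matching the prompt.
    return [region for region in image["regions"]
            if detection_prompt in region["labels"]]

def enhance(image, steps):
    # Each enhance step pairs a detection prompt with a replacement prompt.
    for step in steps:
        for mask in detect_masks(image, step["detection_prompt"]):
            mask["content"] = step["replacement_prompt"]  # inpaint stand-in
    return image

image = {"regions": [{"labels": ["face"], "content": "blurry face"},
                     {"labels": ["hand"], "content": "six fingers"}]}
enhance(image, [{"detection_prompt": "hand",
                 "replacement_prompt": "detailed hand, five fingers"}])
print(image["regions"][1]["content"])  # -> detailed hand, five fingers
```

The point of the prompt pair is that only regions matching the detection prompt are touched; everything else is left as generated.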
How do I use it?
Disclaimer
It is highly recommended to use performance Speed or Quality (no performance which loads a LoRA), as every inpaint engine uses a LoRA, which may not produce the best results when combined with other performance LoRAs.
Using inpaint mode Improve Detail (face, hand, eyes, etc.) does not set an inpaint engine, making it compatible with all performances. The documentation of inpaint modes can be found here: #414
All of this is also the case for normal inpainting without enhancements.
ControlNets (ImagePrompt, PyraCanny, CPDS, FaceSwap) are currently not supported for enhance steps, but can be used for the image generation that serves as the basis for enhance.
With image generation
2.1. (optional) Enable and define the order of upscaling or variation (default: disabled)
2.2. Enable and configure any number of other improvement steps.
2.3. Input a detection prompt (what you want to detect in the image).
2.4. Input positive / negative prompts (what you want to replace the detected masks with; defaults to your normal prompts if not set).
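The steps above can be sketched as data. This is a minimal illustration assuming a hypothetical step-list structure; the field names are invented for the example and are not the real Fooocus job format.

```python
# Hypothetical sketch: order an optional upscale/variation pass relative to
# any number of enhance steps, mirroring the UI options above.

def build_enhance_pipeline(upscale_or_variation=None, order="before", steps=()):
    """Return the ordered list of passes to run on the generated image."""
    pipeline = [dict(step) for step in steps]
    if upscale_or_variation is not None:
        upscale_pass = {"kind": upscale_or_variation}
        if order == "before":
            pipeline.insert(0, upscale_pass)
        else:
            pipeline.append(upscale_pass)
    return pipeline

pipeline = build_enhance_pipeline(
    upscale_or_variation="upscale_1.5x",       # step 2.1 (optional)
    order="before",
    steps=[{"detection_prompt": "face",        # step 2.3
            "positive_prompt": "detailed face",  # step 2.4; empty -> main prompt
            "negative_prompt": ""}],
)
print([p.get("kind", "enhance") for p in pipeline])  # -> ['upscale_1.5x', 'enhance']
```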
Based on an existing image
Simply open Image Input and upload an image to the Enhance tab. Prompt processing will be skipped and only enhancement steps are processed. Follow steps 2+ above.
You may set --enable-auto-describe-image to automatically generate the prompt after image upload.
Examples
UI
#1 Yellow Sundress
#2 Hands replacement
Upscaling or Variation
Before First Enhancement
After Last Enhancement
Models
By default it uses the SAM (website, repository) masking model, backed by GroundingDINO (paper, diffusers docs), but also offers support for all additional models currently supported by RemBG (repository). GroundingDINO + SAM do not use RemBG as a handler, but have been natively implemented into Fooocus for even better results and an increased level of control.
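As one example of the kind of control a native integration enables: a GroundingDINO-style detector yields prompt-grounded boxes with confidence scores, which can be filtered by a threshold before SAM refines them into masks. The box format and default threshold below are assumptions for this sketch, not the actual Fooocus parameters.

```python
# Illustrative sketch: filter prompt-grounded detection boxes by confidence
# before mask refinement. Box format (x0, y0, x1, y1, score) and the default
# threshold are assumptions for this example.

def filter_boxes(boxes, box_threshold=0.3):
    """Keep only detections whose confidence reaches the threshold."""
    return [box for box in boxes if box[4] >= box_threshold]

detections = [
    (10, 20, 110, 140, 0.82),  # confident detection
    (200, 30, 260, 90, 0.12),  # low-confidence false positive
]
print(filter_boxes(detections))  # -> [(10, 20, 110, 140, 0.82)]
```

Raising the threshold trades recall for precision: fewer spurious masks, at the risk of missing faint matches.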
Currently supported models:
Tech Debt / Code Improvements
While implementing the enhance feature, multiple methods have been introduced to make code reusable and allow for iterations.
The whole async_worker.py has been restructured and is now much clearer to read and easier to use.
Debugging
Please find debugging options in Developer Debug Mode > Inpaint: