[Update] About T2I-Adapters-XL #2099
Replies: 3 comments
- thank you so much, I shared it on Reddit, Twitter, LinkedIn, civitAI and Medium
- After playing a bit with the T2I lineart adapters, I was able to use the pixel-perfect mode when I used a lower control model weight (like 0.4).
-
123
-
Users have reported that t2i-adapter-xl does not support the pixel-perfect preprocessor option. This is an announcement that when you use t2i-adapter-xl, pixel-perfect should be unchecked.
What is "pixel-perfect"?
The lineart preprocessor converts your image into a lineart. Let's say your image is 768x768 and you want to use SDXL to generate a 1024x1024 image.
If you check pixel-perfect, the image is resized to 1024x1024 before the lineart is computed, so the resolution of the lineart is 1024x1024.
If you uncheck pixel-perfect, the image is resized to the preprocessor resolution (512x512 by default; this default is shared by sd-webui-controlnet, ComfyUI, and diffusers) before the lineart is computed, so the resolution of the lineart is 512x512. The 512x512 lineart is then stretched into a blurry 1024x1024 lineart for SDXL, losing many details.
Pixel-perfect was therefore important for ControlNet 1.1 users to get accurate linearts without losing details.
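To make the difference concrete, here is a minimal Python sketch of the two resize paths described above. This is not the extension's actual code; run_lineart() is only a grayscale stand-in for the real lineart model.

```python
# Minimal sketch of the two resize paths (not the extension's actual code).
from PIL import Image, ImageOps

def run_lineart(img: Image.Image) -> Image.Image:
    # Placeholder for the real lineart preprocessor; the output keeps the
    # input size, which is the property that matters for this comparison.
    return ImageOps.grayscale(img)

def make_control_image(image: Image.Image, target=(1024, 1024),
                       preprocessor_res=512, pixel_perfect=False) -> Image.Image:
    if pixel_perfect:
        # Pixel-perfect: resize to the generation resolution first,
        # so the lineart is computed at 1024x1024.
        return run_lineart(image.resize(target, Image.LANCZOS))
    # Default: compute the lineart at the preprocessor resolution (512x512)...
    lineart = run_lineart(
        image.resize((preprocessor_res, preprocessor_res), Image.LANCZOS)
    )
    # ...then stretch it to the generation resolution, which blurs details.
    return lineart.resize(target, Image.LANCZOS)
```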
Recently, users reported that the new t2i-adapter-xl does not support (is not trained with) "pixel-perfect" images. Some examples are here:
Examples (uncheck “pixel-perfect”, the model works)
a brown dog on grass, winter, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 12345, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: lineart_realistic, Model: t2i-adapter_diffusers_xl_lineart [bae0efef], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: False, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0
Examples (check “pixel-perfect”, the model does not work)
a brown dog on grass, winter, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 12345, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: lineart_realistic, Model: t2i-adapter_diffusers_xl_lineart [bae0efef], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0
Tip 1
For t2i-adapter, uncheck pixel-perfect, use 512 as the preprocessor resolution, and select the balanced control mode.
Actually, this is already the default setting: you do not need to do anything if you just selected the model. However, many users have a habit of checking "pixel-perfect" right after selecting a model. Keep this in mind when you use t2i-adapters-xl.
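For users running T2I-Adapter-SDXL through diffusers instead of the webui, the same settings look roughly like the sketch below. The model IDs (TencentARC/t2i-adapter-lineart-sdxl-1.0, lllyasviel/Annotators) and the detector call are taken from the public T2I-Adapter-SDXL examples as I understand them, so treat them as assumptions and adjust to your own setup.

```python
# Sketch of the equivalent "Tip 1" settings in diffusers (not the webui code
# path). Model IDs and call signatures are assumptions based on the public
# T2I-Adapter-SDXL examples.
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image
from controlnet_aux import LineartDetector

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-lineart-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

lineart = LineartDetector.from_pretrained("lllyasviel/Annotators").to("cuda")
source = load_image("input.png")  # e.g. a 768x768 source image

# "Pixel-perfect unchecked": detect the lineart at 512, then let the detector
# upscale the control image to the 1024 generation resolution.
control = lineart(source, detect_resolution=512, image_resolution=1024)

result = pipe(
    "a brown dog on grass, winter, photo, high quality",
    negative_prompt="drawing, anime, low quality, distortion",
    image=control,
    num_inference_steps=30,
    guidance_scale=7.0,
    adapter_conditioning_scale=1.0,  # control weight 1
).images[0]
result.save("output.png")
```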
Tip 2
If things still do not work, try 384 as the preprocessor resolution. Some of the models marked with "diffusers" are trained at very low resolution. Below is a screenshot from the official T2I-Adapters-XL demo:
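In the diffusers sketch from Tip 1, this tip is just a change to the detector's resolution argument (again an assumption based on the public examples; the official demo's exact settings may differ):

```python
# Tip 2 applied to the earlier sketch: compute the lineart at 384 instead of
# 512, then upscale the control image to the 1024 generation resolution.
control = lineart(source, detect_resolution=384, image_resolution=1024)
```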