Fix generations that trigger LoRA loads not applying text encoders of the just loaded LoRAs on themselves #4005

Merged

Conversation

@Trojaner (Contributor) commented on Jun 29, 2025

The text encoder parts of LoRAs are not applied to the image generation that triggered those same LoRAs to load. As a result, the first generation that uses a LoRA never applies it fully: only the UNet part takes effect.
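
For illustration, here is a minimal, hypothetical sketch of the buggy flow (all names are illustrative and do not reflect SD.Next's actual internals): the prompt embeds are computed and cached before the LoRA is applied to the text encoder, and the stale cache entry is then reused on every identical generation.

```python
# Minimal, hypothetical sketch of the buggy flow; names are illustrative
# and do not reflect SD.Next's actual internals.
embed_cache: dict[str, str] = {}
lora_te_applied = False  # whether the LoRA text encoder weights are loaded

def generate(prompt: str) -> str:
    global lora_te_applied
    # BUG: the prompt is encoded (and cached) before the LoRA is handled,
    # so the embeds never include the LoRA's text encoder contribution.
    if prompt not in embed_cache:
        embed_cache[prompt] = f"embeds({prompt!r}, lora={lora_te_applied})"
    lora_te_applied = True  # the LoRA loads here: too late for these embeds
    return embed_cache[prompt]

print(generate("a cat, triggerword"))  # lora=False: only the UNet part applies
print(generate("a cat, triggerword"))  # still False: the stale entry is reused
```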

The following are two different ways of triggering the same bug (both methods assume that all generation parameters, including the seed, stay the same):

METHOD 1

  1. Load an SD1.5 model.
    - Disable everything "special", e.g. any quantization, model compile, HiDiffusion, PAG, any caching (FasterCache, TeaCache, etc.), FreeU, ToMe... you name it.
    - Make sure nothing is checked under "LoRA" in System -> Networks.
    - Likewise, nothing should be checked under System -> Text Encoder.
  2. Generate an image with some LoRAs that rely heavily on trigger words, and use those trigger words in the prompt.
  3. Go to System -> Text Encoder. Change the current prompt attention parser to any other parser.
  4. Generate the same image again, ignore the result.
  5. Go back to System -> Text Encoder and restore the previous prompt attention parser.
  6. Generate the same image once again and compare the result to the first generation from step 2.
    - Despite everything being exactly the same, the image is drastically different and the LoRAs seem to work much better now.

METHOD 2

  1. Load an SD1.5 model under the same conditions as in Method 1.
  2. Generate an image as described in Method 1.
  3. Append a single letter anywhere in the prompt and generate the image again.
    - The output will change significantly and will be almost exactly the same as the "fixed" result from Method 1 step 6 (aside from minor changes caused by the added letter and possibly some loss of determinism).
  4. Remove the added letter again.
    - The image changes significantly again, falling back to the exact same output as in step 2; again, this is not what would be expected if the LoRAs and trigger words had been applied correctly.

TL;DR

The order of these two operations is wrong: the text encoder (and its prompt embeds cache) is handled before the LoRAs are loaded:

[screenshot chrome_9ACICyTQZW: the two calls in question, shown in the wrong order]

This is confirmed by:

  • Method 1 shows that changing the prompt attention parser, followed by at least one generation, fixes the bug. This works because changing the parser invalidates the offending prompt embeds cache on the next generation.
  • Method 2 shows that appending a single letter creates a new, correct prompt embeds cache entry (the LoRAs are already loaded by the time it is created), which produces the output expected for the given LoRAs and trigger words. Removing the letter falls back to the wrong output because the original, offending cache entry is reused. This confirms that the bug is caused by a prompt embeds cache entry created before the LoRA text encoder weights were applied.
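
Both observations can be reproduced with the toy model above, extended with one assumption that is consistent with the behavior described (not a claim about SD.Next's actual cache key): changing the prompt attention parser flushes the prompt embeds cache, while an identical prompt hits the existing entry.

```python
# Toy reproduction of Methods 1 and 2; all names are hypothetical and
# do not reflect SD.Next's actual cache implementation.
embed_cache: dict[str, str] = {}
current_parser = "parser-A"
lora_loaded = False

def set_parser(parser: str) -> None:
    global current_parser
    if parser != current_parser:
        embed_cache.clear()  # assumption: a parser change invalidates the cache
        current_parser = parser

def generate(prompt: str) -> str:
    global lora_loaded
    if prompt not in embed_cache:  # cache miss: encode with the current weights
        embed_cache[prompt] = f"embeds({prompt!r}, lora={lora_loaded})"
    lora_loaded = True  # the bug: LoRAs are handled after encoding and caching
    return embed_cache[prompt]

# Method 1:
print(generate("cat, trigger"))   # step 2: lora=False (wrong)
set_parser("parser-B")
generate("cat, trigger")          # steps 3-4: cache was flushed, re-encoded
set_parser("parser-A")
print(generate("cat, trigger"))   # steps 5-6: lora=True (the "fix")

# Method 2, from a fresh state:
embed_cache.clear()
lora_loaded = False
print(generate("cat, trigger"))   # step 2: lora=False (wrong)
print(generate("cat, triggerX"))  # step 3: new cache entry, lora=True
print(generate("cat, trigger"))   # step 4: stale entry reused, lora=False again
```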

WORKAROUNDS

One workaround is to set "Text encoder cache size" in System -> Text Encoder to 0.
The first image will still be generated wrong, but simply generating the same image again (without any changes) now fixes it, since the prompt embeds are re-encoded after the LoRAs have been loaded.
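
Under the same toy model as above, this is why the workaround behaves as observed: with a cache size of 0 nothing is stored, so the second, unchanged generation re-encodes the prompt after the LoRA has been loaded. A hypothetical sketch:

```python
# Hypothetical sketch: "Text encoder cache size" set to 0, i.e. no caching.
lora_loaded = False

def generate_uncached(prompt: str) -> str:
    global lora_loaded
    embeds = f"embeds({prompt!r}, lora={lora_loaded})"  # always freshly encoded
    lora_loaded = True  # the LoRA is still handled too late for this generation
    return embeds

print(generate_uncached("cat, trigger"))  # first image: lora=False (still wrong)
print(generate_uncached("cat, trigger"))  # second image: lora=True (fixed)
```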

@Trojaner changed the title from "Fix invalid prompt caching on lora load" to "Fix generations that trigger LoRA loads not applying text encoders of the just loaded LoRAs on themselves" on Jun 29, 2025
@Trojaner changed the base branch from master to dev on Jun 29, 2025, 19:40
@vladmandic (Owner) commented

good analysis and a simple fix - handle lora before handling text encoder.
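
In terms of the toy sketches above, the reordering looks roughly like this (a hypothetical illustration; the actual change is in commit 646dad2):

```python
# Hypothetical sketch of the corrected ordering: handle the LoRA first,
# then encode (and cache) the prompt.
embed_cache: dict[str, str] = {}
lora_loaded = False

def generate_fixed(prompt: str) -> str:
    global lora_loaded
    lora_loaded = True             # 1. handle LoRA loads first
    if prompt not in embed_cache:  # 2. then encode and cache the prompt
        embed_cache[prompt] = f"embeds({prompt!r}, lora={lora_loaded})"
    return embed_cache[prompt]

print(generate_fixed("cat, trigger"))  # lora=True even on the first generation
```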

@vladmandic merged commit 646dad2 into vladmandic:dev on Jun 30, 2025
2 checks passed