[bug]: No valid config found #8119

@foxhoundunit

Description

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

4070

GPU VRAM

12

Version number

5.15

Browser

Chrome

Python dependencies

No response

What happened

I installed Invoke via Stability Matrix. When I first started the program, it scanned my Models folder. Most of the models were imported without problems, but the VAE, CLIP models, and some LoRAs caused an error: No valid config found.
On the main page, when selecting a Flux model (dev or nf4), the VAE and CLIP models are unavailable and shown in red.
I also have a Juggernaut XL with a bundled VAE that throws an error when I try to generate with it: OSError: [WinError 1314]

What you expected to happen

¯\_(ツ)_/¯

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

Activity

psychedelicious (Collaborator) commented on Jun 23, 2025

Unfortunately this kind of issue is common with Stability Matrix. We don't have any control over how it installs Invoke and can't offer support to fix issues caused by SM.

We suggest using the Invoke launcher to install. You can find instructions on the latest releases page: https://github.com/invoke-ai/InvokeAI/releases/latest

If the problem persists when using our own launcher, please let me know and I'll re-open this issue and we can troubleshoot.

foxhoundunit (Author) commented on Jun 23, 2025

I just installed the standalone version. Same problem.

[screenshot]

psychedelicious (Collaborator) commented on Jun 23, 2025

Thank you for testing with the launcher and sharing the screenshot, which helps me understand the issue. The error message isn't particularly clear, but it means Invoke doesn't recognize the models.

ae.safetensors is part of FLUX schnell (the VAE). I'm not sure what format the file is in on your machine, but you can install a compatible version via starter models:

[screenshot]

Then I see two quantized T5 encoders. It looks like these files are from Comfy, but maybe their format isn't compatible with Invoke.

The fp16 one has a comparable bf16 quant in starter models (first in the list):

[screenshot]

Not sure about the other quant.

If you have LoRAs that don't load, you can help us improve support by posting a link to the LoRA in this issue: #7131

foxhoundunit (Author) commented on Jun 24, 2025

All models are in safetensors format. Everything works fine for me in Forge and ComfyUI under Stability Matrix. I use the regular Flux nf4 v2 (12 GB) or Flux dev (23 GB) plus the fp16 T5, clip_l, and ae, all with standard names. For some reason, Invoke has problems with Flux: the models are added, but the CLIPs and VAE are not. I can't even generate an image with the nf4 model, which has everything built in; the button is inactive. I just tested my SDXL models with built-in VAE in Invoke, such as juggernautXL_ragnarokBy.safetensors, and they work great.

Brensom commented on Jul 5, 2025

I've been experiencing this exact same problem for over six months now, and none of my attempts to fix it have worked. Debug mode doesn't provide any useful information. Whether I install manually or download models through InvokeAI's web interface, the issue persists. This completely blocks me from using many important features (Flux and others). It's extremely frustrating and I really need this fixed.

There appears to be an identical issue reported here: #6964
This seems to be a persistent problem that needs proper attention from the development team. The fact that these models work fine in other UIs like ComfyUI and Forge suggests this is specifically an InvokeAI compatibility issue that needs to be addressed.

Please prioritize fixing this - it's making the software unusable for certain workflows.

[2025-07-05 08:18:44,545]::[ModelInstallService]::INFO --> Model install started: /app/models/clip/laion/CLIP-ViT-g-14-laion2B-s34B-b88K/open_clip_model.safetensors
[2025-07-05 08:18:44,628]::[ModelInstallService]::ERROR --> Model install error: /app/models/clip/laion/CLIP-ViT-g-14-laion2B-s34B-b88K/open_clip_model.safetensors
InvalidModelConfigException: No valid config found
[2025-07-05 08:18:44,628]::[ModelInstallService]::INFO --> Model install started: /app/models/checkpoints/sd3.5/sd3.5-medium-vanilla
[2025-07-05 08:18:44,630]::[ModelInstallService]::ERROR --> Model install error: /app/models/checkpoints/sd3.5/sd3.5-medium-vanilla
InvalidModelConfigException: No valid config found
[2025-07-05 08:18:44,671]::[ModelInstallService]::INFO --> Model install started: /app/models/t5/t5_base_encoder/text_encoder_2
[2025-07-05 08:18:44,672]::[ModelInstallService]::ERROR --> Model install error: /app/models/t5/t5_base_encoder/text_encoder_2
InvalidModelConfigException: No valid config found

foxhoundunit (Author) commented on Jul 7, 2025

Am I correct in understanding that flux1-dev-bnb-nf4-v2.safetensors is functionally identical to FLUX Dev (Quantized) with t5_bnb_int8_quantized_encoder, clip-vit-large-patch14, and FLUX.1-schnell_ae from Invoke's Flux starter pack? The images they produce are completely identical.

For me, this is currently the only way to make Flux and Invoke work together: even though I can add the Dev1 model (22 GB) to Invoke, the program cannot generate an image using the t5, clip, and ae from the starter pack; a Server Error appears.

As I said earlier, Invoke gives the error "Model install error: No valid config found" when I try to add the files Dev needs (t5xxl_fp16.safetensors, clip_l.safetensors, and ae.safetensors) either manually or via Hugging Face.

By the way, Kontext doesn't work either. The fp8 model is added to Invoke, but when I try to generate an image, I get a Server Error:
RuntimeError: Error(s) in loading state_dict for Flux: Unexpected key(s) in state_dict: "scaled_fp8", "img_in.scale_input", "img_in.scale_weight", "time_in.in_layer.scale_input", "time_in.in_layer.scale_weight", "time_in.out_layer.scale_input", "time_in.out_layer.scale_weight", "vector_in.in_layer.scale_input", "vector_in.in_layer.scale_weight", ...
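For anyone who wants to check what a checkpoint actually contains before importing it: a .safetensors file starts with a JSON header that names every tensor, so a small stdlib-only Python sketch (the function name safetensors_keys is mine, not part of Invoke) can list the keys and reveal quantization-specific entries such as scaled_fp8, which suggest an fp8-scaled checkpoint format the loader may not support:

```python
import json
import struct

def safetensors_keys(path):
    # A .safetensors file begins with an unsigned little-endian 64-bit
    # integer giving the byte length of a JSON header; the header maps
    # each tensor name to its dtype, shape, and data offsets, and the
    # raw tensor data follows it.
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [k for k in header if k != "__metadata__"]

if __name__ == "__main__":
    import sys
    for key in safetensors_keys(sys.argv[1]):
        print(key)
```

If the output includes keys like scaled_fp8 or *.scale_weight, the file is a scaled-fp8 export rather than a plain checkpoint, which would be consistent with the RuntimeError above.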

JosNun commented on Jul 10, 2025

For additional context, I'm on an M1 Mac and don't have the memory to run the full T5 text encoder, and the bitsandbytes version (the bnb fp8 quant for T5 that's in the starter pack) won't work because bitsandbytes doesn't support M1 yet. I'd love to be able to use a GGUF quant.

mikekay1 commented on Jul 11, 2025

On a fresh install I cannot use any of the models that work 100% in Forge/ComfyUI. Commenting to see what the fix is; this is preventing me from using the app at all.

Labels: bug (Something isn't working)