Open
Labels: bug
Description
Is there an existing issue for this problem?
- I have searched the existing issues
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
4070
GPU VRAM
12
Version number
5.15
Browser
Chrome
Python dependencies
No response
What happened
I installed Invoke via Stability Matrix. When I first started the program, it scanned my Models folder. Most of the models were imported without problems, but the VAEs, CLIPs, and some LoRAs caused an error: No valid config found.
On the main page, when selecting a Flux model (dev or nf4), the VAE and CLIPs are unavailable and shown in red.
I also have a Juggernaut XL with an included VAE that throws an error when I try to generate with it: OSError: [WinError 1314]
What you expected to happen
¯\_(ツ)_/¯
How to reproduce the problem
No response
Additional context
No response
Discord username
No response
Activity
psychedelicious commented on Jun 23, 2025
Unfortunately this kind of issue is common with Stability Matrix. We don't have any control over how it installs Invoke and can't offer support to fix issues caused by SM.
We suggest using the Invoke launcher to install. You can find instructions on the latest releases page: https://github.com/invoke-ai/InvokeAI/releases/latest
If the problem persists when using our own launcher, please let me know and I'll re-open this issue and we can troubleshoot.
foxhoundunit commented on Jun 23, 2025
I just installed a standalone version. Same problem.
psychedelicious commented on Jun 23, 2025
Thank you for testing with the launcher and sharing the screenshot, which helps me understand the issue. The error message isn't particularly clear, but it means Invoke doesn't recognize the models.
ae.safetensors is part of FLUX schnell (it's the VAE). I'm not sure what format your copy of the file is in, but you can install a compatible version via the starter models.
Then I see two quantized T5 encoders. It looks like these files are from Comfy, and their format may not be compatible with Invoke. The fp16 one has a comparable bf16 quant in the starter models (first in the list). I'm not sure about the other quant.
If you have LoRAs that don't load, you can help us improve support by posting a link to the LoRA in this issue: #7131
foxhoundunit commented on Jun 24, 2025
All models are in safetensors format. Everything works fine for me in Forge and ComfyUI under Stability Matrix. I use the regular flux nf4 v2 (12 GB), or flux dev (23 GB) plus the fp16 T5, clip_l, and ae, all with standard names. For some reason there are problems with Flux in Invoke: the models are added, but the CLIPs and VAE are not. I can't even generate an image with the nf4 model, which has everything built in; the button is inactive. I just tested my SDXL models with built-in VAE in Invoke, such as juggernautXL_ragnarokBy.safetensors, and they work great.
Brensom commented on Jul 5, 2025
I've been experiencing this exact same problem for over six months now, and none of my attempts to fix it have worked. Debug mode doesn't provide any useful information. Whether I install manually or download models through InvokeAI's web interface, the issue persists. This completely blocks me from using many important features (Flux and others). It's extremely frustrating and I really need this fixed.
There appears to be an identical issue reported here: #6964
This seems to be a persistent problem that needs proper attention from the development team. The fact that these models work fine in other UIs like ComfyUI and Forge suggests this is specifically an InvokeAI compatibility issue that needs to be addressed.
Please prioritize fixing this - it's making the software unusable for certain workflows.
foxhoundunit commented on Jul 7, 2025
Am I correct in understanding that flux1-dev-bnb-nf4-v2.safetensors is absolutely identical to FLUX Dev (Quantized) with t5_bnb_int8_quantized_encoder, clip-vit-large-patch14, and FLUX.1-schnell_ae from Invoke's Flux starter pack? Because the images are completely identical.
For me, this is currently the only way to make Flux and Invoke work together, because even though I can add the Dev1 model (22Gb) to Invoke, the program cannot generate an image using t5, clip and ae from the starter pack — a Server Error appears.
As I said earlier, Invoke gives the error "Model install error: No valid config found" when I try to add the files Dev needs (t5xxl_fp16.safetensors, clip_l.safetensors, and ae.safetensors), both manually and via Hugging Face.
By the way, Kontext doesn't work either. The fp8 model is added to Invoke, but when I try to generate an image, I get a Server Error:
RuntimeError: Error(s) in loading state_dict for Flux: Unexpected key(s) in state_dict: "scaled_fp8", "img_in.scale_input", "img_in.scale_weight", "time_in.in_layer.scale_input", "time_in.in_layer.scale_weight", "time_in.out_layer.scale_input", "time_in.out_layer.scale_weight", "vector_in.in_layer.scale_input", "vector_in.in_layer.scale_weight", ...
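For anyone who wants to check a checkpoint before importing it: the unexpected keys in that traceback suggest the file carries extra fp8 scale tensors (a Comfy-style "scaled fp8" checkpoint) that Invoke's Flux loader doesn't expect. A minimal stdlib-only sketch (function names are my own, not part of Invoke) that lists the tensor names in a .safetensors file and flags those keys:

```python
import json
import struct

def read_safetensors_keys(path):
    # A .safetensors file starts with an 8-byte little-endian unsigned
    # integer giving the length of a JSON header that maps tensor names
    # to dtype/shape/offset info, so the key list can be read without
    # loading any tensor data.
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

def has_scaled_fp8_keys(keys):
    # Heuristic only: look for the extra tensors named in the traceback
    # above ("scaled_fp8" plus per-layer scale_input/scale_weight).
    return any(
        k == "scaled_fp8" or k.endswith((".scale_input", ".scale_weight"))
        for k in keys
    )
```

If `has_scaled_fp8_keys` returns True for a checkpoint, that file is probably in the scaled-fp8 layout that triggers the state_dict error, and a plain (non-scaled) export of the same model may import cleanly.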
JosNun commented on Jul 10, 2025
For additional context, I'm on an M1 Mac, and I don't have the memory to run the full T5 text encoder. The bitsandbytes version (the bnb fp8 quant for T5 in the starter pack) won't work because bitsandbytes doesn't support M1 yet. I'd love to be able to use a GGUF quant.
mikekay1 commented on Jul 11, 2025
Fresh install; I cannot use any of the models that work 100% in Forge/ComfyUI. Commenting to follow the fix, as this is preventing me from using the app at all.