I have an issue when loading a Qwen chat model with LlamaCpp #917
Abdechakour-Mc started this conversation in General
Replies: 2 comments · 2 replies
-
Same here for llama 3.1. Any solutions?
-
Here is a working fix for the Transformer variants. You can load any model and its chat template with this. Hope this helps :)
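The reply's snippet itself is not visible in the extracted thread, so what follows is only a rough sketch of the general technique it points at: loading a Hugging Face chat model together with the chat template bundled in its tokenizer config via transformers. The model id and prompt are illustrative assumptions, not taken from the discussion.

```python
# Sketch only, not the commenter's actual fix: load a Hugging Face chat
# model and format prompts with its bundled chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"  # example id; any chat model with a template works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a hello-world in Python."}]

# apply_chat_template reads the template shipped in tokenizer_config.json,
# so the prompt is formatted exactly the way the model expects.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Taking the template from the tokenizer config, rather than hand-building the prompt string, is what lets the same code work across model families with different chat formats.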
- Original post: I'm trying to load the model codeqwen-1_5-7b-chat-q8_0.gguf using models.LlamaCpp, but it throws a UserWarning. I tried loading the same model with native llama.cpp and it worked just fine! Here is the code I used:
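Neither the snippet nor the warning text survived in the extracted thread. Below is a hedged reconstruction of the native llama.cpp load the author says worked, using llama-cpp-python; the file path, context size, and chat_format value are assumptions (Qwen chat models generally use the ChatML template).

```python
# Hedged reconstruction, not the author's exact code: load the GGUF with
# llama-cpp-python directly, which the author reports works without issue.
from llama_cpp import Llama

llm = Llama(
    model_path="./codeqwen-1_5-7b-chat-q8_0.gguf",  # assumed local path
    n_ctx=4096,            # context window; adjust as needed
    chat_format="chatml",  # assumption: Qwen chat models use ChatML
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a hello-world in Python."}]
)
print(response["choices"][0]["message"]["content"])
```

If the UserWarning comes from the wrapper rather than from llama.cpp itself, passing the chat format explicitly as above is a common way to avoid template-detection warnings.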