Description
Describe the bug
After updating text-generation-webui, llama.cpp can no longer load my model (models\Chinese-LLaMA-7B\ggml-model-q4_0.bin). Loading fails with:

error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?

and the web UI raises an AssertionError in llama_cpp. The full traceback is in the Logs section below.
Is there an existing issue for this?
- I have searched the existing issues
Reproduction
The model loaded correctly before updating text-generation-webui; after updating to the latest version, attempting to load the same model file fails with the error above.
Screenshot
No response
Logs
llama.cpp: loading model from models\Chinese-LLaMA-7B\ggml-model-q4_0.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
llama_init_from_file: failed to load model
Traceback (most recent call last):
File "G:\Soft\text-generation-webui\server.py", line 67, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name)
File "G:\Soft\text-generation-webui\modules\models.py", line 142, in load_model
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
File "G:\Soft\text-generation-webui\modules\llamacpp_model.py", line 32, in from_pretrained
self.model = Llama(**params)
File "G:\Soft\text-generation-webui\python310\lib\site-packages\llama_cpp\llama.py", line 148, in __init__
assert self.ctx is not None
AssertionError
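For context on the error message: the hex value 67676a74 is ASCII for "ggjt", i.e. the file is in llama.cpp's newer GGJT container format at version 2, and the installed llama-cpp-python build apparently only recognizes older (magic, version) combinations. A minimal sketch for inspecting a model file's header yourself, assuming the standard llama.cpp layout of a little-endian uint32 magic followed by a uint32 version (the magic constants below are taken from llama.cpp; treat the version handling as an illustration, not authoritative):

```python
import struct

# Magic values as llama.cpp prints them (uint32 read little-endian);
# 0x67676a74 == "ggjt" is the newer mmap-able format.
MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf",
    0x67676A74: "ggjt",
}

def inspect_model(path):
    """Return (magic, format_name, version) from a GGML-family model file.

    format_name is None for an unknown magic; version is None when the
    format has no version field.
    """
    with open(path, "rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        name = MAGICS.get(magic)
        if name is None:
            return magic, None, None
        if magic == 0x67676D6C:  # old unversioned "ggml" files stop here
            return magic, name, None
        version, = struct.unpack("<I", f.read(4))
        return magic, name, version
```

If this reports ("ggjt", 2) for the file, the likely fixes are upgrading llama-cpp-python to a release that supports GGJT v2, or re-converting/re-quantizing the model with a llama.cpp version matching the installed bindings.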
System Info
Windows 11