Description
~/alpaca# ./chat -m ggml-alpaca-13b-q4.bin
main: seed = 1679150968
llama_model_load: loading model from 'ggml-alpaca-13b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 10959.49 MB
Segmentation fault
I just downloaded the 13B model from the torrent (ggml-alpaca-13b-q4.bin), pulled the latest master, and compiled. It works absolutely fine with the 7B model, but I get a segmentation fault with the 13B model.
MD5 checksum of the 13B model: 66f3554e700bd06104a4a5753e5f3b5b
I'm running Ubuntu under WSL on Windows.
Activity
barry163 commented on Mar 18, 2023
I have the same result: I also ran it under WSL on Windows; it works with the 7B model but not with the 13B model. Same md5sum. Same result, by the way, for ggerganov/llama.cpp, from which this project is forked; it gives a more detailed error message.
I did not use the 7B model from the torrent, but from the download URL in this repo. Did you do the same? Perhaps the 13B model has to be converted to an appropriate format before it can be used in this project?
WhoSayIn commented on Mar 18, 2023
Yes, I downloaded the 7B file from the direct link on the readme. Also used the magnet link on the readme for the 13B file. Haven’t seen anything about converting the 13B downloaded file.
PriNova commented on Mar 18, 2023
It has nothing to do with converting. main.cpp thinks this is a multi-part file. Usually the 13B model is split into two files, but here we have only one file.
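For context, here is a hedged reconstruction of the upstream lookup being described, from memory of the March 2023 llama.cpp source; names and values may not match the file exactly:
// Sketch of how llama.cpp chose the number of model parts: a lookup
// keyed on the embedding width read from the file header. A 13B header
// (n_embd == 5120) maps to 2 parts, so the loader then tries to open
// 'ggml-alpaca-13b-q4.bin.1', which does not exist for the merged
// single-file alpaca weights.
#include <map>

static const std::map<int, int> LLAMA_N_PARTS = {
    { 4096, 1 },  // 7B
    { 5120, 2 },  // 13B
    { 6656, 4 },  // 30B
    { 8192, 8 },  // 65B
};

// inside llama_model_load(), roughly:
//   n_parts = LLAMA_N_PARTS.at(hparams.n_embd);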
In the main.cpp file of the llama.cpp upstream I changed (hacked) line 130 to
n_parts = 1; //LLAMA_N_PARTS.at(hparams.n_embd);
which let me load the model.
antimatter15 commented on Mar 18, 2023
Make sure to compile it again using the latest version of the source code.
WhoSayIn commented on Mar 18, 2023
@PriNova I tried that, changed the equivalent line in chat.cpp and compiled again; unfortunately it didn't help :(
@antimatter15 I compiled the latest master; unfortunately it didn't help :(
WhoSayIn commented on Mar 18, 2023
Well, just by doing some basic printf debugging, I can see the segfault is happening at this line: https://github.com/antimatter15/alpaca.cpp/blob/b64ca1c07cb4ff0637f48d85178b7a99ffd09d20/chat.cpp#LL254C22-L254C22
model.tok_embeddings = ggml_new_tensor_2d(ctx, wtype, n_embd, n_vocab);
No idea how to proceed from here though :(
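One plausible reading of that crash site, assuming chat.cpp follows the upstream pattern of reserving one large ggml arena up front (this guard is a hypothetical sketch, not a quote of the file): if the roughly 11 GB arena was never actually committed, the very first tensor allocation touches bad memory.
#include <cstdio>
#include "ggml.h"

// Hypothetical guard around the failing allocation (chat.cpp L254).
// ctx_size is the arena size computed from the hyperparameters
// (the "ggml ctx size = 10959.49 MB" in the log above).
static bool init_model_arena(struct ggml_context ** out_ctx, size_t ctx_size) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ ctx_size,
        /*.mem_buffer =*/ NULL,
    };
    *out_ctx = ggml_init(params);
    if (*out_ctx == NULL) {
        fprintf(stderr, "ggml_init() failed: could not reserve %zu bytes\n", ctx_size);
        return false;  // fail cleanly instead of segfaulting on the first tensor
    }
    return true;
}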
kaz9112 commented on Mar 19, 2023
I used this:
./chat -c 1024 -m ggml-alpaca-13b-q4.bin
It loaded for me, but it wouldn't reply to anything I asked. Maybe it will work for you, idk...
progressionnetwork commented on Mar 19, 2023
Try providing the full path to the model, e.g. ./chat -m D:/alpaca/13b/ggml-alpaca-13b-q4.bin
externvoid commented on Mar 19, 2023
In my case, ggml-alpaca-13b-q4.bin works. I referred to this tweet:
https://twitter.com/andy_matuschak/status/1636769182066053120
It includes several corrections to chat.cpp.
JCharante commented on Mar 20, 2023
Changing n_parts = 1 worked perfectly for me; my md5sum is the same as WhoSayIn's. Keep in mind that the latest commit in llama.cpp changed the model format, so I've been running the version of llama.cpp right before that change (git checkout 5cb63e2493c49bc2c3b9b355696e8dc26cdd0380).
james1236 commented on Mar 21, 2023
How can you change that line if there isn't a main.cpp file in the alpaca.cpp folder?
PriNova commented on Mar 21, 2023
It is already fixed in the chat.cpp file, at line 34, in the model parameters.
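Presumably the fix is a single-file default baked into the model hyperparameters. A hedged guess at the shape of that code (the field names and struct layout are assumptions; only the n_parts = 1 default is the point):
// Assumed shape of the defaults near chat.cpp line 34: the part count
// defaults to 1, so the single-file alpaca weights load without the
// LLAMA_N_PARTS lookup that trips up the 13B model upstream.
#include <cstdint>

struct llama_hparams {
    int32_t n_vocab = 32000;
    int32_t n_embd  = 4096;   // 5120 for 13B, read from the file header
    // ...other hyperparameters...
    int32_t n_parts = 1;      // assumed: merged alpaca weights are one file
};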
eshahrabani commented on Mar 22, 2023
I fixed this issue by troubleshooting on my own machine!
I had the same issue running on WSL. The segmentation fault is due to not enough RAM. I have 32 GB of RAM and was able to run up to the 13B model, but not the 30B, under WSL. I tried building and then running it under Windows instead of WSL, and it worked. It seems WSL cannot use the page file properly, at least for this project. The 30B is slow, but it works for me now!
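For scale, a rough back-of-envelope consistent with the ctx size in the log above (the block cost and overhead figures are assumptions, not measurements):
// Rough estimate of resident memory for the q4_0 13B model. Assumes
// ~13.0e9 weights stored in q4_0 blocks of 32 weights at 20 bytes each
// (32 x 4-bit quants + one fp32 scale), plus a coarse overhead term for
// the KV cache and the f16/f32 side tensors.
#include <cstdio>

int main() {
    const double n_weights       = 13.0e9;
    const double bytes_per_block = 20.0;   // assumed q4_0 block cost
    const double weights_bytes   = n_weights * bytes_per_block / 32.0;  // ~8.1 GB
    const double overhead_bytes  = 3.0e9;  // assumed: KV cache + non-quantized tensors
    printf("approx. resident need: %.1f GB\n", (weights_bytes + overhead_bytes) / 1e9);
    // ~11 GB, in line with "ggml ctx size = 10959.49 MB". If RAM plus
    // whatever WSL will actually commit falls short, the process can die
    // with SIGSEGV instead of a clean allocation failure.
    return 0;
}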