
Segmentation fault (only) with 13B model. #45


Description

@WhoSayIn
```
~/alpaca# ./chat -m ggml-alpaca-13b-q4.bin
main: seed = 1679150968
llama_model_load: loading model from 'ggml-alpaca-13b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 10959.49 MB
Segmentation fault
```

I just downloaded the 13B model (ggml-alpaca-13b-q4.bin) from the torrent, pulled the latest master, and compiled. It works absolutely fine with the 7B model, but I get a segmentation fault with the 13B model.

Checksum (md5) of the 13B model: 66f3554e700bd06104a4a5753e5f3b5b

I'm running Ubuntu under WSL on Windows.

Activity

barry163 commented on Mar 18, 2023

I have the same result. I also ran it under WSL on Windows: it works with the 7B model but not with the 13B model, and my md5sum matches. The same thing happens btw with ggerganov/llama.cpp, from which this project is forked; it gives a more detailed error message:

```
llama_model_load: llama_model_load: tensor 'tok_embeddings.weight' has wrong size in model file
main: failed to load model from 'ggml-alpaca-13b-q4.bin'
```

I did not use the 7B model from the torrent, but from the download URL in this repo. Did you do the same? Perhaps the 13B model has to be converted to an appropriate format before it can be used in this project?

WhoSayIn (Author) commented on Mar 18, 2023

> I have the same result, I also ran it under WSL on windows, works with 7B model, not with 13B model. Same md5sum. Same result btw for ggerganov/llama.cpp, from which this project is forked. It gives a more detailed error message:
>
> ```
> llama_model_load: llama_model_load: tensor 'tok_embeddings.weight' has wrong size in model file
> main: failed to load model from 'ggml-alpaca-13b-q4.bin'
> ```
>
> I did not use the 7B model from the torrent, but from the download url in this repo, did you do the same? Perhaps the 13B model has to be transformed to an appropriate format before it can be used in this project?

Yes, I downloaded the 7B file from the direct link in the readme, and used the magnet link in the readme for the 13B file. I haven't seen anything about converting the downloaded 13B file.

PriNova commented on Mar 18, 2023

It has nothing to do with converting. main.cpp thinks this is a multi-part file: the 13B model is usually split into two files, but here we have only one.
In main.cpp of the llama.cpp upstream I changed (hacked) line 130 to `n_parts = 1; //LLAMA_N_PARTS.at(hparams.n_embd);`, which let me load the model.

antimatter15 (Owner) commented on Mar 18, 2023

Make sure to compile it again, using the latest version of the source code.

WhoSayIn (Author) commented on Mar 18, 2023

@PriNova I tried that: changed the equivalent line in chat.cpp and compiled again, but unfortunately it didn't help :(

@antimatter15 I compiled the latest master; unfortunately it didn't help :(

WhoSayIn (Author) commented on Mar 18, 2023

Well, just by doing some basic printf debugging, I can see the segfault is happening at this line: https://github.com/antimatter15/alpaca.cpp/blob/b64ca1c07cb4ff0637f48d85178b7a99ffd09d20/chat.cpp#LL254C22-L254C22

```cpp
model.tok_embeddings = ggml_new_tensor_2d(ctx, wtype, n_embd, n_vocab);
```

No idea how to proceed from here though :(

kaz9112 commented on Mar 19, 2023

> Well, just by doing some basic printf debugging, I can see the segfault is happening at this line; https://github.com/antimatter15/alpaca.cpp/blob/b64ca1c07cb4ff0637f48d85178b7a99ffd09d20/chat.cpp#LL254C22-L254C22
>
> ```
> model.tok_embeddings = ggml_new_tensor_2d(ctx, wtype, n_embd, n_vocab);
> ```
>
> No idea how to proceed from here though :(

I used this: `./chat -c 1024 -m ggml-alpaca-13b-q4.bin`
It loaded for me, but it wouldn't reply to anything I asked. Maybe it will work for you, idk...

progressionnetwork commented on Mar 19, 2023

Try providing a full path to the model, like `./chat -m D:/alpaca/13b/ggml-alpaca-13b-q4.bin`

externvoid commented on Mar 19, 2023

In my case, ggml-alpaca-13b-q4.bin works. I referred to this tweet:
https://twitter.com/andy_matuschak/status/1636769182066053120

It includes several corrections to chat.cpp.

JCharante commented on Mar 20, 2023

Changing `n_parts = 1` worked perfectly for me; my md5sum is the same as WhoSayIn's. Keep in mind the latest commit in llama.cpp changed the model format, so I've been running the version of llama.cpp right before that change (`git checkout 5cb63e2493c49bc2c3b9b355696e8dc26cdd0380`).

james1236 commented on Mar 21, 2023

> Changing n_parts = 1 worked perfectly for me, my md5sum is the same as WhoSayIn. Keep in mind the latest commit in llama.cpp changed the model format, so I've been running the version of llama.cpp right before that change (git checkout 5cb63e2493c49bc2c3b9b355696e8dc26cdd0380)

How can you change that line if there isn't a main.cpp file in the alpaca.cpp folder?

PriNova commented on Mar 21, 2023

> > Changing n_parts = 1 worked perfectly for me, my md5sum is the same as WhoSayIn. Keep in mind the latest commit in llama.cpp changed the model format, so I've been running the version of llama.cpp right before that change (git checkout 5cb63e2493c49bc2c3b9b355696e8dc26cdd0380)
>
> How can you change that line if there isn't a main.cpp file in the alpaca.cpp folder

It is already fixed in the chat.cpp file at line 34, in the model parameters.

eshahrabani commented on Mar 22, 2023

I fixed this issue by troubleshooting on my own machine!

I had the same issue running on WSL. The segmentation fault is due to not having enough RAM. I have 32 GB of RAM and was able to run up to the 13B model, but not the 30B, under WSL. I tried building and running it under Windows rather than WSL, and it worked. It seems WSL cannot use the page file properly, at least for this project. 30B is slow, but it works for me now!
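If you want to stay on WSL, one thing worth trying is raising the WSL2 VM's memory and swap limits via a `.wslconfig` file in `%UserProfile%`, then running `wsl --shutdown` so the new limits apply. The values below are just examples; size them to your machine:

```ini
# %UserProfile%\.wslconfig
[wsl2]
memory=28GB   # cap for the WSL2 VM; raise it if the host has RAM to spare
swap=64GB     # larger swap file so big models can spill to disk
```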
