
Generating garbage output on CUDA when GGML_CUDA_FORCE_DMMV is set to false #2136

Closed
@LostRuins

Description


OS: Windows 10 LTSC 1809, using the provided CUDA 11.7.1 runtimes from this repo's workflow.

I have an RTX 2060 card, and ever since #2067 was merged, my system generates garbage output with cuBLAS if any GPU layers are offloaded. This does not happen if GGML_CUDA_FORCE_DMMV is set to true, or if 0 layers are offloaded.
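
For context, my understanding is that GGML_CUDA_FORCE_DMMV is a compile-time define (toggled at build time, e.g. via a CMake option along the lines of LLAMA_CUDA_FORCE_DMMV; option name from memory, it may differ between versions) that chooses between the older dequantize + mul-mat-vec path and the quantized mul-mat-vec path introduced in #2067. A minimal C++ sketch of that kind of gate, using illustrative stand-in names rather than the actual symbols from ggml-cuda.cu:

// Minimal sketch of the compile-time gate that GGML_CUDA_FORCE_DMMV appears to
// control; all names below are illustrative stand-ins, not real ggml-cuda symbols.
#include <cstdio>

// stand-in for the pre-#2067 dequantize + mul-mat-vec path (works on my RTX 2060)
static void dequantize_mul_mat_vec_stub() { std::printf("DMMV path\n"); }

// stand-in for the quantized mul-mat-vec path added in #2067 (produces garbage here)
static void mul_mat_vec_q_stub() { std::printf("mul_mat_vec_q path\n"); }

static void mul_mat_vec_dispatch() {
#ifdef GGML_CUDA_FORCE_DMMV
    dequantize_mul_mat_vec_stub();
#else
    mul_mat_vec_q_stub();
#endif
}

int main() {
    mul_mat_vec_dispatch();
    return 0;
}

In other words, rebuilding with the define set restores the old kernel path and the output is fine; leaving it unset takes the new path and triggers the problem.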

Example output:

E:\LLaMA\llamacpp>main.exe -m E:\LLaMA\models\test_models\open-llama-3b-q4_0.bin -ngl 66 -p "Hello, my name is"
main: build = 800 (481f793)
main: seed  = 1688744741
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5
llama.cpp: loading model from E:\LLaMA\models\test_models\open-llama-3b-q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 3200
llama_model_load_internal: n_mult     = 216
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 26
llama_model_load_internal: n_rot      = 100
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 8640
llama_model_load_internal: model size = 3B
llama_model_load_internal: ggml ctx size =    0.06 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required  = 1078.99 MB (+  682.00 MB per state)
llama_model_load_internal: allocating batch_size x (512 kB + n_ctx x 128 B) = 288 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 26 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 29/29 layers to GPU
llama_model_load_internal: total VRAM used: 2754 MB
llama_new_context_with_model: kv self size  =  162.50 MB

system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0


 Hello, my name is Zahara and I ammanuel and this is my blog where I post my experiences as a travelerbola and a gamblerayam squeeze.
ISummary fromadhd dheg aad karakdhek e-mail addeold bhagkdg kdshs aad agkdg satraveleds kas ksms kdgt aada dhgk aadgk aadksh dhgk dhenkdg dhgs ksdfd aagdg agk aaagh aabkdhg agkdhgaadhg aaadhdght dgeekdg agkdhg aaagh aagkdhgi agkdg ksdsagdg aagkdhgi aabkhkdg aaagh aabkdgk dkdg aaagh aaahgk aabkhg aaaggkdg aabkdg aaadhgk aaagh aagkdgk dkeakdh ks

Another attempt, with fewer layers:

E:\LLaMA\llamacpp>main.exe -m E:\LLaMA\models\test_models\open-llama-3b-q4_0.bin -ngl 20 -p "Hello, my name is"
main: build = 800 (481f793)
main: seed  = 1688745037
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5
llama.cpp: loading model from E:\LLaMA\models\test_models\open-llama-3b-q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 3200
llama_model_load_internal: n_mult     = 216
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 26
llama_model_load_internal: n_rot      = 100
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 8640
llama_model_load_internal: model size = 3B
llama_model_load_internal: ggml ctx size =    0.06 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required  = 1532.89 MB (+  682.00 MB per state)
llama_model_load_internal: allocating batch_size x (512 kB + n_ctx x 128 B) = 288 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 20 repeating layers to GPU
llama_model_load_internal: offloaded 20/29 layers to GPU
llama_model_load_internal: total VRAM used: 1618 MB
llama_new_context_with_model: kv self size  =  162.50 MB

system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0


 Hello, my name is Mary###############################################################################################################################################################################################################
