Fix references to download-ggml-model.sh #2427


Merged (1 commit) on Sep 24, 2024
Conversation

WhyNotHugo
Contributor

The script itself has a shebang indicating that it is a shell script, but the README states that it must be executed with `bash`.

I checked the script, and it appears to be valid POSIX shell. I can confirm that it works with busybox `sh`.

Clarify the reference in the README, so it is clear that `bash` is not actually a dependency of this script.
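To illustrate the distinction being made here, a POSIX-compatible script sticks to constructs such as `case` pattern matching rather than bash-only features like `[[ ... ]]` or arrays, so it runs identically under bash, dash, and busybox `sh`. A minimal sketch (the model name and messages are illustrative, not taken from the actual script):

```shell
# Illustrative POSIX-sh snippet: `case` pattern matching instead of the
# bash-only [[ "$model" == *.en ]]. Behaves the same in bash, dash, and
# busybox sh.
model="base.en"
case "$model" in
  *.en) echo "English-only model" ;;
  *)    echo "Multilingual model" ;;
esac
```

Because the script avoids bashisms, invoking it as `sh ./models/download-ggml-model.sh base.en` (path assumed from the whisper.cpp repository layout) should work just as well as relying on its shebang.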

@ggerganov ggerganov merged commit 0d2e2ae into ggml-org:master Sep 24, 2024
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Sep 26, 2024
* ggerganov/master: (73 commits)
  ci : disable failing CUDA and Java builds
  readme : fix references to download-ggml-model.sh (ggml-org#2427)
  make : remove "talk" target until updated
  ggml : add ggml-cpu-impl.h (skip) (#0)
  sync : ggml
  talk-llama : sync llama.cpp
  ggml : add AVX512DQ requirement for AVX512 builds (llama/9622)
  log : add CONT level for continuing previous log entry (llama/9610)
  threads: fix msvc build without openmp (llama/9615)
  cuda: add q8_0->f32 cpy operation (llama/9571)
  threads: improve ggml_barrier scaling with large number of threads (llama/9598)
  ggml : AVX512 gemm for Q4_0_8_8 (llama/9532)
  metal : use F32 prec for K*Q in vec FA (llama/9595)
  Revert "[SYCL] fallback mmvq (ggml/9088)" (llama/9579)
  musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (llama/9526)
  Fix merge error in #9454 (llama/9589)
  CUDA: enable Gemma FA for HIP/Pascal (llama/9581)
  RWKV v6: RWKV_WKV op CUDA implementation (llama/9454)
  ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (llama/9573)
  Update CUDA graph on scale change plus clear nodes/params (llama/9550)
  ...
lyapple2008 pushed a commit to lyapple2008/whisper.cpp.mars that referenced this pull request Nov 2, 2024