
Commit 668930a

readme : update build instructions
1 parent: 762f63e


README.md

Lines changed: 19 additions & 37 deletions
@@ -89,10 +89,11 @@ Now build the [main](examples/main) example and transcribe an audio file like th

 ```bash
 # build the main example
-make -j
+cmake -B build
+cmake --build build --config Release

 # transcribe an audio file
-./main -f samples/jfk.wav
+./build/bin/main -f samples/jfk.wav
 ```

 ---
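For context on the hunk above, the end-to-end quickstart under the new CMake instructions looks roughly like this. A sketch, assuming the `models/download-ggml-model.sh` helper script from the repository and the bundled `samples/jfk.wav`:

```bash
# fetch the small English-only model
./models/download-ggml-model.sh base.en

# configure and build; Release binaries land in build/bin
cmake -B build
cmake --build build --config Release

# transcribe the bundled sample
./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav
```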
@@ -265,11 +266,12 @@ Here are the steps for creating and using a quantized model:

 ```bash
 # quantize a model with Q5_0 method
-make -j quantize
-./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
+cmake -B build
+cmake --build build --config Release
+./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

 # run the examples as usual, specifying the quantized model file
-./main -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
+./build/bin/main -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
 ```

 ## Core ML support
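A quick sanity check for the quantization step above is to compare the sizes of the original and quantized files; this is plain shell, nothing whisper.cpp-specific:

```bash
# the q5_0 file should be noticeably smaller than the f16 original
ls -lh models/ggml-base.en.bin models/ggml-base.en-q5_0.bin
```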
@@ -303,10 +305,6 @@ speed-up - more than x3 faster compared with CPU-only execution. Here are the in
 - Build `whisper.cpp` with Core ML support:

 ```bash
-# using Makefile
-make clean
-WHISPER_COREML=1 make -j
-
 # using CMake
 cmake -B build -DWHISPER_COREML=1
 cmake --build build -j --config Release
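One note on the Core ML hunk above: the Core ML build only helps once a Core ML encoder model exists next to the ggml model. A sketch, assuming the `models/generate-coreml-model.sh` helper described elsewhere in the README:

```bash
# produces models/ggml-base.en-encoder.mlmodelc
./models/generate-coreml-model.sh base.en
```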
@@ -426,8 +424,8 @@ First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-do
 Now build `whisper.cpp` with CUDA support:

 ```
-make clean
-GGML_CUDA=1 make -j
+cmake -B build -DGGML_CUDA=1
+cmake --build build -j --config Release
 ```

 ## Vulkan GPU support
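Before running the CUDA configure step above, it can help to confirm the toolkit and driver are visible; both commands come from the standard CUDA installation, not from whisper.cpp:

```bash
# compiler from the CUDA toolkit
nvcc --version

# driver version and visible GPUs
nvidia-smi
```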
@@ -436,8 +434,8 @@ First, make sure your graphics card driver provides support for Vulkan API.

 Now build `whisper.cpp` with Vulkan support:
 ```
-make clean
-make GGML_VULKAN=1 -j
+cmake -B build -DGGML_VULKAN=1
+cmake --build build -j --config Release
 ```

 ## BLAS CPU support via OpenBLAS
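For the Vulkan build in the hunk above, the loader and driver can be checked with the standard `vulkaninfo` utility from the Vulkan SDK or the `vulkan-tools` package (not part of whisper.cpp):

```bash
# lists the detected Vulkan devices; fails if no usable driver is installed
vulkaninfo --summary
```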
@@ -448,28 +446,13 @@ First, make sure you have installed `openblas`: https://www.openblas.net/
 Now build `whisper.cpp` with OpenBLAS support:

 ```
-make clean
-GGML_OPENBLAS=1 make -j
-```
-
-## BLAS CPU support via Intel MKL
-
-Encoder processing can be accelerated on the CPU via the BLAS compatible interface of Intel's Math Kernel Library.
-First, make sure you have installed Intel's MKL runtime and development packages: https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html
-
-Now build `whisper.cpp` with Intel MKL BLAS support:
-
-```
-source /opt/intel/oneapi/setvars.sh
-mkdir build
-cd build
-cmake -DWHISPER_MKL=ON ..
-WHISPER_MKL=1 make -j
+cmake -B build -DGGML_BLAS=1
+cmake --build build -j --config Release
 ```

 ## Ascend NPU support

-Ascend NPU provides inference acceleration via [`CANN`](https://www.hiascend.com/en/software/cann) and AI cores.
+Ascend NPU provides inference acceleration via [`CANN`](https://www.hiascend.com/en/software/cann) and AI cores.

 First, check if your Ascend NPU device is supported:
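To confirm that the OpenBLAS build above actually picked up BLAS at runtime, one option is to inspect the `system_info` line the examples print at startup; the exact format varies between versions, so treat this as a sketch:

```bash
./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav 2>&1 | grep system_info
# expect something like: system_info: ... BLAS = 1 | ...
```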

@@ -483,10 +466,8 @@ Then, make sure you have installed [`CANN toolkit`](https://www.hiascend.com/en/
 Now build `whisper.cpp` with CANN support:

 ```
-mkdir build
-cd build
-cmake .. -D GGML_CANN=on
-make -j
+cmake -B build -DGGML_CANN=1
+cmake --build build -j --config Release
 ```

 Run the inference examples as usual, for example:
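Following on from the CANN hunk above, running an example is unchanged apart from the binary path; a sketch using the usual model and sample locations:

```bash
# -t selects the number of CPU threads
./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav -t 8
```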
@@ -636,8 +617,9 @@ The [stream](examples/stream) tool samples the audio every half a second and run
 More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).

 ```bash
-make stream -j
-./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
+cmake -B build
+cmake --build build --config Release
+./build/bin/stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
 ```

 https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4
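One caveat on the stream hunk above: the `stream` example captures microphone audio via SDL2, so a plain configure may skip building it. If `build/bin/stream` is missing, try enabling the SDL2-dependent examples explicitly (assuming the `WHISPER_SDL2` CMake option and an installed SDL2 development package):

```bash
# rebuild with the SDL2-dependent examples enabled
cmake -B build -DWHISPER_SDL2=ON
cmake --build build --config Release
```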
