speed-up - more than x3 faster compared with CPU-only execution. Here are the in

- Build `whisper.cpp` with Core ML support:

  ```bash
  # using CMake
  cmake -B build -DWHISPER_COREML=1
  cmake --build build -j --config Release
  ```
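The build alone does not produce the Core ML encoder; whisper.cpp ships a helper script for that. A minimal sketch, assuming a repository checkout with `models/generate-coreml-model.sh` present and a Python environment that satisfies its dependencies (e.g. `coremltools`):

```bash
# generate the Core ML encoder for the base.en model
# (the output is placed next to the ggml model in models/)
./models/generate-coreml-model.sh base.en
```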
First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-do

Now build `whisper.cpp` with CUDA support:

```
cmake -B build -DGGML_CUDA=1
cmake --build build -j --config Release
```
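If CMake cannot detect a compute capability (e.g. when building in a container without a visible GPU), the target architecture can be pinned through CMake's standard `CMAKE_CUDA_ARCHITECTURES` variable; the value `86` below is only an illustrative choice for an Ampere consumer card:

```
cmake -B build -DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="86"
cmake --build build -j --config Release
```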
## Vulkan GPU support
First, make sure your graphics card driver provides support for Vulkan API.

Now build `whisper.cpp` with Vulkan support:

```
cmake -B build -DGGML_VULKAN=1
cmake --build build -j --config Release
```
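Whether the installed driver actually exposes Vulkan can be checked up front with the `vulkaninfo` tool (assumed here to come from your distribution's `vulkan-tools` package; package names vary):

```
# prints the devices, driver and API version visible to the Vulkan loader
vulkaninfo --summary
```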
## BLAS CPU support via OpenBLAS
First, make sure you have installed `openblas`: https://www.openblas.net/

Now build `whisper.cpp` with OpenBLAS support:

```
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
```
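When several BLAS implementations are installed, the one CMake picks can be steered; recent ggml versions expose a `GGML_BLAS_VENDOR` option that maps onto CMake's `FindBLAS` vendor names (treated here as an assumption about your ggml revision):

```
cmake -B build -DGGML_BLAS=1 -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build -j --config Release
```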
## Ascend NPU support
Ascend NPU provides inference acceleration via [`CANN`](https://www.hiascend.com/en/software/cann) and AI cores.

First, check if your Ascend NPU device is supported:
Then, make sure you have installed [`CANN toolkit`](https://www.hiascend.com/en/

Now build `whisper.cpp` with CANN support:

```
cmake -B build -DGGML_CANN=1
cmake --build build -j --config Release
```

Run the inference examples as usual, for example:
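A sketch of such an invocation, assuming a downloaded `ggml-base.en.bin` model and the repository's `samples/jfk.wav`; the example binary is named `main` in older releases and `whisper-cli` in newer ones, so adjust the path to your build:

```
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```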
The [stream](examples/stream) tool samples the audio every half a second and run

More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).