Releases: Purfview/whisper-standalone-win
Faster-Whisper-XXL r194.1
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
Includes all Standalone Faster-Whisper features + some additional ones. Read here.
Faster-Whisper-XXL releases include all needed libs.
Some new stuff in r194.1:
New feature: CUDA support for pyannote_v3 and pyannote_onnx_v3 VADs
New feature: Selective output formats, for example: --output_format srt json
New feature: --diarize will auto-activate --sentence, and all output formats will be affected except json (see the usage sketch after this list)
New --diarize options: pyannote_v3.0, pyannote_v3.1, reverb_v1, reverb_v2
New diarization args: --num_speakers, --min_speakers, --max_speakers, --diarize_dump
Bugfix: Bug in alternative subtitle writer with max_line_width [r189.1].
Bugfix: Bug in --sentence subtitle writer if a "word" is a space.
Change: .cache dir is bound to exe dir instead of cwd.
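A minimal command-line sketch tying the new options together. This is an assumption-laden example, not the project's documented invocation: the executable name and the input file audio.wav are placeholders, and only the flags quoted in the notes above are used; check the built-in help of your build for exact option spellings.

```
# Hypothetical invocation: exe name and audio path are placeholders.
# Per the notes above, --diarize auto-activates --sentence, and all
# selected output formats except json are affected by diarization.
faster-whisper-xxl audio.wav --diarize pyannote_v3.1 --num_speakers 2 --output_format srt json
```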
Link to the changelog.
Faster-Whisper r192.3
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
GPU execution requires cuBLAS and cuDNN 8.x libs for CUDA 11.x.
Last included commit: #192
Some new stuff in r192.3:
Bugfix: 'one_word' was broken in r192.2.
Link to the changelog.
Whisper-OpenAI r136
cuBLAS and cuDNN
Place the libs in the same folder where the Faster-Whisper executable is, or to:
Windows: the System32 dir.
Linux: a dir in the LD_LIBRARY_PATH env variable (see the sketch below).
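For the Linux case, a minimal sketch of pointing the dynamic loader at the extracted libs; /opt/cudnn-libs is a placeholder for wherever the archive was unpacked. On Windows, copying the DLLs next to the executable or into System32 needs no extra step.

```
# Placeholder path: use the directory where the cuBLAS/cuDNN archive was extracted.
export LD_LIBRARY_PATH=/opt/cudnn-libs:$LD_LIBRARY_PATH
```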
.7z vs .zip - both archives contain the same files.
CUDA11_v2 is the last one with support for GPUs with a Kepler chip.
CUDA11_v2: cuBLAS.and.cuDNN____v11.11.3.6__v8.7.0.84
CUDA11_v3: cuBLAS.and.cuDNN____v11.11.3.6__v8.9.6.50
CUDA11_v4: cuBLAS.and.cuDNN____v11.11.3.6__v8.9.7.29
CUDA12_v1: cuBLAS.and.cuDNN____v12.4.5.8___v8.9.7.29