Commit
Updating README and doc strings to reflect that n_mels can now be 128 (
lvaughn authored Nov 26, 2024
1 parent 173ff7d commit fc5ded7
Showing 2 changed files with 3 additions and 3 deletions.
README.md: 1 addition & 1 deletion
@@ -126,7 +126,7 @@ audio = whisper.load_audio("audio.mp3")
 audio = whisper.pad_or_trim(audio)

 # make log-Mel spectrogram and move to the same device as the model
-mel = whisper.log_mel_spectrogram(audio).to(model.device)
+mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

 # detect the spoken language
 _, probs = model.detect_language(mel)
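The README change above stops hard-coding 80 mel bins and instead reads the count from the loaded model's dimensions, since different checkpoints expect different spectrogram heights (80 for most models, 128 for newer large models). A minimal sketch of why the shape must match, using a hypothetical stand-in for whisper's `ModelDimensions` class rather than the real package:

```python
from dataclasses import dataclass

# Hypothetical stand-in for whisper's ModelDimensions (the real class lives
# in whisper/model.py and carries many more fields).
@dataclass
class ModelDimensions:
    n_mels: int  # 80 for most checkpoints, 128 for newer large models

HOP_LENGTH = 160  # whisper hops 10 ms at a 16 kHz sample rate

def mel_shape(n_samples: int, n_mels: int = 80) -> tuple[int, int]:
    """Shape of the log-Mel spectrogram for a padded 16 kHz waveform."""
    return (n_mels, n_samples // HOP_LENGTH)

dims = ModelDimensions(n_mels=128)      # e.g. a 128-mel model
audio_len = 16_000 * 30                 # pad_or_trim yields 30 s of audio
print(mel_shape(audio_len, n_mels=dims.n_mels))  # (128, 3000)
```

Passing a mismatched `n_mels` would produce a spectrogram whose first dimension disagrees with the encoder's input layer, so tying it to `model.dims.n_mels` keeps the two in sync automatically.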
whisper/audio.py: 2 additions & 2 deletions
@@ -122,7 +122,7 @@ def log_mel_spectrogram(
         The path to audio or either a NumPy array or Tensor containing the audio waveform in 16 kHz
     n_mels: int
-        The number of Mel-frequency filters, only 80 is supported
+        The number of Mel-frequency filters, only 80 and 128 are supported
     padding: int
         Number of zero samples to pad to the right
@@ -132,7 +132,7 @@
     Returns
     -------
-    torch.Tensor, shape = (80, n_frames)
+    torch.Tensor, shape = (n_mels, n_frames)
         A Tensor that contains the Mel spectrogram
     """
     if not torch.is_tensor(audio):
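The updated docstring states that only 80 and 128 mel filters are supported. A hedged sketch of how a caller might guard the parameter before computing a spectrogram; this validation helper is my own illustration, not code from this commit:

```python
# Supported mel-bin counts, per the docstring updated in this commit.
SUPPORTED_N_MELS = (80, 128)

def validate_n_mels(n_mels: int) -> int:
    """Reject mel-bin counts outside the supported set."""
    if n_mels not in SUPPORTED_N_MELS:
        raise ValueError(
            f"n_mels must be one of {SUPPORTED_N_MELS}, got {n_mels}"
        )
    return n_mels

print(validate_n_mels(128))  # 128
```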
