Remove torch dependency, Faster numpy Feature extraction #1106
Conversation
Mentioning community reviewers, feel free to ignore this.
Hi,
Another main bottleneck here is the padding with 30 seconds. But essentially, the effect of padding the input is padding the output with some constant vector, so one could mitigate this doubling by just padding the STFT/mel outputs cleverly. We lose almost the same amount of time as STFT to …
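The claim above (padding the input is equivalent to padding the output with a constant vector) can be sketched in NumPy. This is a simplified, non-centered STFT using Whisper's n_fft=400 and hop=160; the function name and framing details are illustrative, not the project's actual code:

```python
import numpy as np

def stft_magnitudes(x, n_fft=400, hop=160):
    # non-centered framing + Hann window + rFFT magnitude per frame
    n_frames = 1 + (len(x) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)                  # 1 s at 16 kHz

full = stft_magnitudes(np.concatenate([audio, np.zeros(16000)]))
short = stft_magnitudes(audio)

# frames over the audio itself are unchanged by the padding...
print(np.allclose(full[:short.shape[0]], short))    # True
# ...and frames lying entirely inside the padding are a constant (zero) vector,
# so they could be appended after the fact instead of being computed
print(np.allclose(full[-50:], 0.0))                 # True
```

So instead of running the FFT over 30 s of zeros, one could compute features for the real samples only and append the known constant columns.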
I think doing this on GPU with torch is a bit overkill, as feature extraction is seldom the bottleneck in the whisper pipeline. Simplicity and dependency management are more important here. And when things move towards numpy2, it won't be that bad.
This looks interesting as well, should be quite fast.
I updated the figures along with the benchmarking script, limiting it to 4 cores for reproducibility.
We're removing torch, that's for sure; the choice here is between keeping both CuPy and NumPy or just one of them.
This implementation is ported from the C++ PyTorch implementation. It's very similar to the librosa implementation, but has some extra functionality that I'm not sure we need here. The reason I went with this is that the librosa implementation needs an extra dependency, and copying the function over isn't simple because it uses many librosa internal functions, which would massively increase the amount of code needed.
I'm working on removing the padding completely while still generating identical results, but I'm leaving it for another PR. Check here: it does indeed reduce the FE time to half.
I have no idea how to calculate …
Thanks for your work @MahmoudAshraf97, it's really impressive. My thinking is in line with @ozancaglayan's: "simplicity and dependency management are more important here, and when things move towards numpy2 it won't be that bad." All implementations are good, depending on what you optimize for; I would optimize for simplicity and fewer dependencies.
Interesting. I'm always confused about one thing. Given that whisper is trained on 30 sec segments, the above PR and your thinking are not about running Whisper freely on arbitrarily sized audio files, but rather about removing the padding overhead. I thought that to do proper seeking and timestamping, Whisper requires those +30 secs of zeros all the time, but isn't that the case then? i.e. would the output of Whisper be exactly the same if we just pad the inputs to 30 secs at most?
We don't even need to pad it to 30s, adding … Edit: it produces exactly the same encoder outputs for NumPy, while CuPy needs at least 17s of zero padding to achieve the same numerical accuracy for audio shorter than 30s; that closes the speed gap between the two even more.
changed the title: "Remove torch dependency, use CuPy for feature extraction" → "Remove torch dependency, Faster numpy Feature extraction"
I implemented the reduced padding in bc86503; everyone, feel free to test it and report any issues.
Thanks! I probably won't have time next week as I'll be on annual leave. One thing though: does this mean that you'll no longer be padding, say, an audio file of 10 seconds to 30 seconds? People were saying that Whisper has a severe hallucination problem if you let it transcribe signals shorter than 30 seconds, because it's not trained with an attention mask. There are even papers which fine-tune it to mitigate this. But I'm not 100% sure whether it's the same thing or not; I need to test it properly.
In the original whisper implementation, the audio is padded with 30s of zeros and converted to a log mel spectrogram; then the features that correspond to the padding are removed, and the features are padded again with zeros until they are equivalent to 30s. What I did here is that I found that padding with 10ms instead of 30s is mathematically equivalent and produces the same encoder input and output within the numerical tolerance of the data type used. Why does this work? Because in the STFT calculation, each window is independent and has an overlap of …
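A minimal sketch of why trimming the padding is mathematically safe, assuming Whisper's STFT parameters (the function and variable names here are illustrative, not the PR's code). Each frame reads only its own window of samples, so any zeros beyond the last window cannot change the content frames:

```python
import numpy as np

SR, N_FFT, HOP = 16000, 400, 160         # Whisper's STFT parameters

def frame_magnitudes(x, n_frames):
    # magnitude spectrum of each Hann-windowed frame, non-centered framing
    window = np.hanning(N_FFT)
    return np.stack([np.abs(np.fft.rfft(x[i * HOP:i * HOP + N_FFT] * window))
                     for i in range(n_frames)])

rng = np.random.default_rng(1)
audio = rng.standard_normal(10 * SR)      # a 10 s clip
content_frames = len(audio) // HOP        # frames that cover the content

pad_30s = np.concatenate([audio, np.zeros(30 * SR)])
pad_min = np.concatenate([audio, np.zeros(N_FFT)])   # just enough for the last window

a = frame_magnitudes(pad_30s, content_frames)
b = frame_magnitudes(pad_min, content_frames)
print(np.array_equal(a, b))   # True: each frame depends only on its own window
```

Note the sketch pads by one full FFT window for simplicity; the exact minimal amount in the PR may differ, but the independence argument is the same.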
I want to merge this before Friday so we can release a new version on the 15th of November, so if anyone has any comments or reviews, please let me know.
Force-pushed from ed85116 to 8198307 (Compare)
Revert "…YSTRAN#1106)" — This reverts commit 3e0ba86.
It is terrific to see that the community feedback worked and the torch dependencies are gone with PRs like this, bringing faster-whisper back to its origin, @MahmoudAshraf97. But one question: have you tested your new implementation with short chunks? I have tested the new feature extractor and it produces no transcripts when chunks are shorter than 30 secs. The PyTorch implementation works fine and produces the correct results. I see your discussions with @ozancaglayan and others about deprioritizing short-chunk processing, but we should not break an existing and expected feature.
This implementation should produce results identical to the old and original whisper implementation, and it's completely backwards compatible. I'm not sure I understand your description of the problem; can you write code to reproduce it?
I'm processing a massive amount of short 1-3 second chunks, and until now I passed a torch array to transcribe to skip the slow audio decoding. Now since I cannot pass a torch array anymore, I'm passing a numpy array, but that, together with moving the FFT from torch to numpy (and moving the load from GPU to CPU), slowed down the whole transcription by a lot on my side. So maybe a use_pytorch boolean flag somewhere would be better, to keep pytorch optionally available for audio input and for feature extraction.
And just to emphasize: removing the torch array as an input type for the transcribe function was a breaking change.
@formater Removing the torch array is required to keep faster-whisper free of the torch dependency, and it is not a breaking change, as the latest release of faster-whisper (1.0.3) already has an interface accepting numpy arrays. It is important for this PR to keep it that way. I think a boolean flag would make torch processing optional, but not the torch dependency optional, so your proposed solution is not applicable in the context of this PR. One option might be making FeatureExtractor an injectable component via an interface, so people can plug in their own FeatureExtractor implementation, with its own package dependencies, if they do not like the default implementation provided by faster-whisper.
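The injectable-extractor idea could look roughly like this. Everything below is a hypothetical sketch, not faster-whisper's actual API; the Protocol lets callers plug in, say, a torch-backed extractor without the library itself depending on torch:

```python
from typing import Optional, Protocol
import numpy as np

class FeatureExtractorLike(Protocol):
    """Hypothetical interface; names are illustrative, not faster-whisper's API."""
    sampling_rate: int
    def __call__(self, audio: np.ndarray) -> np.ndarray: ...

class DummyNumpyExtractor:
    """Stand-in default: a real one would compute an (n_mels, n_frames) log-mel."""
    sampling_rate = 16000
    def __call__(self, audio: np.ndarray) -> np.ndarray:
        n_frames = len(audio) // 160          # one frame per 10 ms hop
        return np.zeros((80, n_frames), dtype=np.float32)

def transcribe(audio: np.ndarray,
               feature_extractor: Optional[FeatureExtractorLike] = None):
    # callers who want torch can inject their own extractor here
    fe = feature_extractor or DummyNumpyExtractor()
    features = fe(audio)
    return features.shape                     # stand-in for the real pipeline

print(transcribe(np.zeros(16000)))            # (80, 100)
```

This keeps the default dependency-free while leaving the torch path open to users who already ship torch.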
The injectable component is a good idea! :) I wrote that it was a breaking change for me because I used a torch array as transcriber input, which worked with 1.0.3 and no longer works with the latest version. :)
@MahmoudAshraf97 I had used the new feature extractor with the 1.0.3 version, as I saw that it has the same interface; my tests were based on that. When I switched the implementation to the new numpy version, the transcripts for 1-3 second audio chunks were gone. I am using a numpy array as input instead of a filename, so this might help you reproduce the issue. Let me try to give you more details on my experiment.
@formater if your use case is only audio shorter than 30s, then using faster-whisper is very inefficient; use CTranslate2 directly, or TRT.
@MahmoudAshraf97 When I run the PyTorch / 1.0.3 version, I get an 80x3084 float32 matrix when inputting 7039 bytes of data. The new FeatureExtractor extracts an 80x49 matrix from the same input, which is wrong in dimension size and order. This will result in two things:
You can reproduce a similar scenario by providing an nd-array through the faster-whisper API. I think this version should not have been tagged 1.1.0 if it has degraded accuracy and faults in a core feature. Let me open an issue for it.
The new feature extractor is expected to produce an array with smaller dimensions; the new dimensions are handled in …
Yes @MahmoudAshraf97, I had tested it with 1.1.0 before opening an issue for it. It seems to be running OK with 1.1.0. There must have been some transformation step I missed in the PR when using it with 1.0.3.
If you are getting this problem using 1.1.0, please open an issue with the previous and the current transcriptions, and share the audio so the problem can be reproduced; I'll happily investigate it.
But @MahmoudAshraf97, I see a strange performance difference here: faster-whisper 1.0.3 + the pytorch CPU feature extractor works and is fast, about 10% faster than the 1.1.0 version on CPU. Normally the pytorch feature extractor is expected to improve performance compared to the numpy implementation on CPU, as tested in 1.0.3. But I guess changes in the intermediate steps cause a 2x slowdown in 1.1.0; that might be due to some sizing updates and so on.
It all comes down to using the correct value of content_frames.
I think the main effect lies in subtracting a constant 1 instead of self.feature_extractor.nb_max_frames when determining content_frames.
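A toy illustration of the content_frames change being discussed (the constants follow Whisper's 100 frames per second; the function names are made up for this sketch, not taken from the code):

```python
# With full 30 s padding, a whole window's worth of padded frames must be
# trimmed from the spectrogram; with minimal padding, only one frame is.
SR, HOP, CHUNK_LENGTH = 16000, 160, 30
nb_max_frames = CHUNK_LENGTH * SR // HOP           # 3000 frames per 30 s window

def content_frames_old(total_frames):
    return total_frames - nb_max_frames            # 30 s of zeros were appended

def content_frames_new(total_frames):
    return total_frames - 1                        # only minimal padding appended

frames_for_10s_clip = 10 * SR // HOP               # 1000 content frames
print(content_frames_old(frames_for_10s_clip + nb_max_frames))  # 1000
print(content_frames_new(frames_for_10s_clip + 1))              # 1000
```

Mixing the two conventions (old trimming with the new minimal padding, or vice versa) yields a wrong frame count, which would explain the shape mismatch reported above.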
@MahmoudAshraf97 I understand it better now: in the latest implementation, the padding is no longer done up to a fixed size but is kept strictly minimal to avoid the performance cost. I have updated the padding accordingly in the pytorch implementation, and it performed similarly.
One comment here though: even if we use the same feature extractors in 1.0.3 and 1.1.0, there is a slight performance difference that needs to be tracked down. Version 1.1.0 seems to perform 6-7% slower compared to 1.0.3 with everything else kept the same. Great work on removing the pytorch dependencies, @MahmoudAshraf97
I received a similar report in #1167, but haven't dived deep into it, as it's probably related to CT2 more than faster-whisper.
@MahmoudAshraf97 it would be better to dive in; thanks for that. But I think it should not be related to the CT2 version or anything else, as in my tests I kept the CT2 version the same (4.5.0).
Hi all,
This PR aims to remove the torch dependency, as torch is only used for feature extraction. There are 2 options:
These are the performance figures on a 30s segment using this script:
Edit: decided to remove CuPy, as the speed difference is not worth the extra code.
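For reference, here is a self-contained NumPy log-mel sketch in the spirit of this PR. It is an approximation, not the PR's code: the filterbank skips Slaney normalization and the window details differ slightly from Whisper's, so outputs won't match the real extractor numerically:

```python
import numpy as np

def mel_filterbank(sr=16000, n_fft=400, n_mels=80):
    """Triangular mel filters (no Slaney normalization; details differ
    from librosa's and Whisper's exact filterbanks)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    fft_freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    fb = np.zeros((n_mels, fft_freqs.size))
    for i in range(n_mels):
        lo, c, hi = edges[i], edges[i + 1], edges[i + 2]
        fb[i] = np.clip(np.minimum((fft_freqs - lo) / (c - lo),
                                   (hi - fft_freqs) / (hi - c)), 0.0, None)
    return fb

def log_mel_spectrogram(audio, sr=16000, n_fft=400, hop=160, n_mels=80):
    # centered framing with reflection padding, as Whisper's extractor does
    x = np.pad(audio, n_fft // 2, mode="reflect")
    n_frames = 1 + (len(x) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop][:n_frames]
    power = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2
    mel = mel_filterbank(sr, n_fft, n_mels) @ power.T
    return np.log10(np.maximum(mel, 1e-10))        # (n_mels, n_frames)

spec = log_mel_spectrogram(np.random.default_rng(0).standard_normal(16000))
print(spec.shape)    # (80, 101) for 1 s of 16 kHz audio
```

The whole pipeline is plain NumPy (FFT, strided framing, one matrix multiply), which is why dropping torch costs so little here.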