
Timestamps always maximum length when using Silero VAD #287

Open
polytr0pe opened this issue Sep 22, 2024 · 6 comments
Assignees: jhj0517
Labels: bug (Something isn't working), hallucination (hallucination of the models)

polytr0pe commented Sep 22, 2024

Transcription appears to be accurate; however, the ending timestamp for each line is always set to the beginning timestamp of the next line, resulting in subtitles remaining on screen long after speech ends, e.g.:

```
37
00:09:44,419 --> 00:09:56,950
I can't solve the problem at this rate.

38
00:09:56,950 --> 00:10:07,269
What should I do?

39
00:10:07,269 --> 00:10:11,269
I'll take my time and look at it.
```
polytr0pe added the bug label on Sep 22, 2024
jhj0517 (Owner) commented Sep 23, 2024

Hi, this seems to be the default behavior of Whisper itself, not just the VAD.

Regardless of whether each segment tries to start from the end of the previous one (when possible), it should still capture the correct timestamps.

If overly long segments are the problem, try setting min_speech_duration_ms and min_silence_duration_ms to 250 ms and turning on the BGM separation filter.
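
For reference, here is a minimal sketch of those settings applied through faster-whisper's VAD filter, which Whisper-WebUI wraps (the model size and file name are placeholders; the BGM separation filter is a WebUI option and isn't shown here):

```python
# Minimal sketch: faster-whisper's Silero VAD filter with the suggested
# 250 ms thresholds. "large-v2" and "audio.mp3" are placeholders.
from faster_whisper import WhisperModel

model = WhisperModel("large-v2")
segments, info = model.transcribe(
    "audio.mp3",
    vad_filter=True,  # run Silero VAD before transcription
    vad_parameters=dict(
        min_speech_duration_ms=250,   # discard speech chunks shorter than 250 ms
        min_silence_duration_ms=250,  # split on silences of at least 250 ms
    ),
)
for seg in segments:
    print(f"[{seg.start:.2f} --> {seg.end:.2f}] {seg.text}")
```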

jhj0517 added the hallucination label and removed the bug label on Sep 23, 2024
jhj0517 self-assigned this on Sep 30, 2024
jhj0517 (Owner) commented Oct 9, 2024

Just reproduced the hallucinations on a sample.

This happens with all the large-v2 models; with the large-v3 models I got better-clipped timestamps.

+) I observed the same hallucinations with faster-whisper as well, and it causes the same behavior in the WebUI, because I followed the same approach as faster-whisper. A different implementation of the VAD is needed.
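
One possible direction (an illustrative sketch only, not the project's actual code): clamp each transcribed segment's end time to the end of the VAD speech chunk that contains it, so a subtitle can never extend into detected silence. `clamp_to_speech` is a hypothetical helper:

```python
# Illustrative sketch only: bound segment end times by the VAD's speech
# regions. `segments` holds (start, end, text) tuples in seconds;
# `speech_chunks` holds (start, end) speech regions in seconds from the VAD.
def clamp_to_speech(segments, speech_chunks):
    clamped = []
    for start, end, text in segments:
        for chunk_start, chunk_end in speech_chunks:
            # find the speech chunk that contains the segment start
            if chunk_start <= start < chunk_end:
                # never let the subtitle outlive the detected speech
                end = min(end, chunk_end)
                break
        clamped.append((start, end, text))
    return clamped
```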

jhj0517 added this to the vad milestone on Oct 28, 2024
jhj0517 linked a pull request on Oct 28, 2024 that will close this issue
jhj0517 removed this from the vad milestone on Oct 28, 2024
jhj0517 removed a link to a pull request on Oct 30, 2024
montvid commented Nov 12, 2024

pyannote has its own VAD as I understand it - maybe one could use it?
From https://github.com/m-bain/whisperX:

Valuable VAD & Diarization Models from [pyannote-audio](https://github.com/pyannote/pyannote-audio)

From https://github.com/shashikg/WhisperS2T:

NVIDIA NeMo Team: Thanks to the NVIDIA NeMo Team for their contribution of the open-source VAD model used in this pipeline.
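
For what it's worth, pyannote's pretrained VAD pipeline can be queried in a few lines (a sketch following the pyannote-audio model card; the token and file name are placeholders):

```python
# Sketch of pyannote.audio's pretrained VAD pipeline, per its model card.
# "HF_TOKEN" and "audio.wav" are placeholders.
from pyannote.audio import Pipeline

vad = Pipeline.from_pretrained(
    "pyannote/voice-activity-detection",
    use_auth_token="HF_TOKEN",
)
output = vad("audio.wav")
for speech in output.get_timeline().support():
    # speech regions that could replace the Silero chunks
    print(f"speech {speech.start:.2f}s -> {speech.end:.2f}s")
```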

jhj0517 added the bug label on Nov 14, 2024
montvid commented Nov 14, 2024

A fix was merged for faster-whisper - maybe that solves the problem? SYSTRAN/faster-whisper#921

jhj0517 (Owner) commented Nov 14, 2024

> SYSTRAN/faster-whisper#921

Thanks, but that PR might not be about improving this; it's probably about fixing the batched transcription bug.

I'm considering implementing both whisperX's VAD and my own implementation, then using whichever is better after comparing them.

I don't have time for this right now, but I hope to get to it as soon as possible. Any PR for this would be welcome.

montvid commented Nov 15, 2024

The faster-whisper developers say it may not be a VAD issue; see "After using VAD, the start and end times of the recognized segments are incorrect" (SYSTRAN/faster-whisper#1119).
