Support tools for punctuation prediction for ASR output.
Three models are provided or pointed to: a BERT-based Transformer, a sequence-to-sequence Transformer (both in PyTorch), and a bidirectional RNN (Punctuator 2, www.github.com/ottokart/punctuator2) in TensorFlow 2.
Additionally, the code to preprocess text for use with these models is provided in the folder process.
The BERT-based Transformer is a token-classification Transformer from https://github.com/huggingface/transformers, used here for punctuation prediction.
The sequence-to-sequence Transformer comes from https://github.com/pytorch/fairseq and is based on the model described in the paper "Attention Is All You Need".
For the Transformers, all we provide here is 1) data preprocessing scripts, which put the data in the right format for these models for the task of punctuation prediction, and 2) run files, where these models are trained for punctuation prediction. The expected token-classification data format is sketched below.
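To make the data format concrete, here is a minimal sketch (not code from this repository) of turning punctuated text into the token/label pairs a token-classification model trains on. The label names (O, COMMA, PERIOD) are assumptions and may differ from the scheme used by the actual preprocessing scripts.

```python
# Minimal sketch: convert punctuated text into "token<TAB>label" pairs
# for token classification. The label names (O, COMMA, PERIOD) are
# assumptions; the repository's preprocessing scripts may differ.
def text_to_token_labels(text):
    pairs = []
    for token in text.split():
        if token.endswith(","):
            pairs.append((token.rstrip(","), "COMMA"))
        elif token.endswith("."):
            pairs.append((token.rstrip("."), "PERIOD"))
        else:
            pairs.append((token, "O"))
    return pairs

for word, label in text_to_token_labels("halló heimur, þetta er prófun."):
    print(f"{word}\t{label}")
```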
- Python version >= 3.6
- An NVIDIA GPU and NCCL
- For HuggingFace's BERT based token classifier and the Fairseq sequence to sequence model: PyTorch version >= 1.4.0
- For Punctuator 2: TensorFlow 2.0
- For HuggingFace's transformer: seqeval and fastprogress, installed as shown below (a short seqeval scoring sketch follows this list):

    pip install seqeval
    pip install git+https://github.com/fastai/fastprogress.git
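seqeval scores sequence-labelling output, which is how the token classifier's punctuation predictions can be evaluated. A minimal sketch of such scoring; the B- prefixed label names are illustrative only and may not match the label scheme used in the run files.

```python
# Minimal sketch: score predicted punctuation labels with seqeval.
# The B- prefixed label names are illustrative only.
from seqeval.metrics import classification_report, f1_score

y_true = [["O", "B-COMMA", "O", "O", "B-PERIOD"]]
y_pred = [["O", "B-COMMA", "O", "O", "O"]]

print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```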
Install Fairseq:

    pip install fairseq
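Once Fairseq is installed, a trained sequence-to-sequence model can be loaded through Fairseq's Python hub interface and used to "translate" unpunctuated text into punctuated text. A minimal sketch, assuming training has already produced a checkpoint; the paths below are placeholders, not files shipped with this repository.

```python
# Minimal sketch: load a trained Fairseq checkpoint and restore
# punctuation via translation. All paths below are placeholders.
from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained(
    "checkpoints/",                       # directory containing the checkpoint
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path="data-bin/",        # binarized data with the dictionaries
)
model.eval()

# Input: unpunctuated ASR-style text; output: text with punctuation restored.
print(model.translate("halló heimur þetta er prófun"))
```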
Installation with the HuggingFace submodule:

    git clone --recurse-submodules https://github.com/cadia-lvl/punctuation-prediction
    cd punctuation-prediction/transformers
    pip install .
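After installation, the BERT-based token classifier can be used for inference along these lines. A minimal sketch, assuming a fine-tuned checkpoint already exists; the checkpoint path and label list below are placeholders/assumptions, not part of this repository.

```python
# Minimal sketch: punctuation prediction as token classification with
# HuggingFace transformers. Checkpoint path and label set are placeholders.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "COMMA", "PERIOD"]            # assumed label set
model_dir = "path/to/finetuned-checkpoint"   # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)
model.eval()

inputs = tokenizer("halló heimur þetta er prófun", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, labels[pred])               # one predicted label per subtoken
```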
The models for punctuation prediction are downloaded automatically with the pip package (more information about the package is given in the folder punctuation_package). Note that they are trained on Icelandic data. They are also directly accessible on the CLARIN webpage.
MIT License
Copyright (c) 2020 Language and Voice Lab