This project uses a pre-trained speech embedding model for speaker recognition, built with TensorFlow 2.4/2.5. The model achieves over 98% accuracy at deciding whether two speech samples belong to the same person, without storing any raw voice samples.
Demo video: Voice-Recognition.mp4
- Clone the repository:

  ```bash
  git clone https://github.com/NikhilKalloli/Voice-Recognition.git
  ```

- Navigate to the project directory:

  ```bash
  cd Voice-Recognition
  ```

- Create a virtual environment with Python 3.8:

  ```bash
  py -3.8 -m venv venv
  ```

  If you don't have Python 3.8 installed, you can download it from [python.org](https://www.python.org/downloads/).

- Install the required dependencies inside the activated virtual environment (see the activation note after these steps):

  ```bash
  pip install -r requirements.txt
  ```

- Run the Streamlit app:

  ```bash
  streamlit run streamlit_app.py
  ```
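Note: activate the virtual environment before installing dependencies, otherwise `pip` will install packages globally. On Windows (matching the `py` launcher used above), run `venv\Scripts\activate`; on macOS/Linux, run `source venv/bin/activate`.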
This speech embedding model computes a mel-frequency spectrogram from the time-domain input audio, feeds it through a stack of two LSTM layers, and passes the result to a Dense layer that outputs the final audio embedding. The model has about 641k parameters in total.
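A minimal Keras sketch of this pipeline is below. The layer widths, embedding size, and L2 normalization are illustrative assumptions, not the exact configuration of the pre-trained model (whose actual total is the ~641k parameters noted above):

```python
import tensorflow as tf

def build_embedding_model(n_mels=64, embedding_dim=256):
    """Sketch: mel-spectrogram frames -> two stacked LSTMs -> Dense embedding."""
    inputs = tf.keras.Input(shape=(None, n_mels))      # (time steps, mel bins)
    x = tf.keras.layers.LSTM(128, return_sequences=True)(inputs)
    x = tf.keras.layers.LSTM(128)(x)                   # keep only the final state
    embeddings = tf.keras.layers.Dense(embedding_dim)(x)
    # L2-normalize (an assumption) so embeddings compare cleanly by cosine distance
    embeddings = tf.keras.layers.Lambda(
        lambda v: tf.math.l2_normalize(v, axis=1))(embeddings)
    return tf.keras.Model(inputs, embeddings)

model = build_embedding_model()
model.summary()  # parameter count depends on the illustrative sizes above
```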
In this work, the Mozilla Common Voice dataset was used. It contains a large number of voice samples, each with a client_id, the audio file, the sentence that was spoken, and some metadata about the speaker (age, gender, etc.).
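For illustration, here is how that metadata can be grouped by speaker for triplet sampling; the file and column names (`validated.tsv`, `path`) follow the public Common Voice release but should be treated as assumptions:

```python
import pandas as pd

# Common Voice ships a TSV index alongside the audio clips.
df = pd.read_csv("validated.tsv", sep="\t")

# Columns described above: speaker id, clip path, transcript, speaker metadata.
print(df[["client_id", "path", "sentence", "age", "gender"]].head())

# client_id identifies the speaker, which is exactly what triplet sampling
# needs: anchor/positive clips from one speaker, a negative from another.
clips_per_speaker = df.groupby("client_id")["path"].apply(list)
```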
Training consists of preprocessing the audio samples into mel spectrograms and optimizing a triplet loss over a large dataset, pulling embeddings of the same speaker together while pushing embeddings of different speakers apart.
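A minimal sketch of the standard triplet loss on batches of embeddings (the margin value is an assumption, not this project's setting):

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-speaker pairs together, push different-speaker pairs apart.

    anchor/positive come from the same speaker, negative from a different one;
    each argument is a batch of embeddings of shape (batch, embedding_dim).
    """
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    # Loss is zero once the negative is at least `margin` farther than the positive.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```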
This model enables identity verification from speech, with further applications in speaker diarization and transfer learning.
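As a sketch of the verification step, two embeddings can be compared by cosine similarity against a tuned threshold; the threshold below is a placeholder, not a value from this project:

```python
import numpy as np

def same_speaker(emb_a, emb_b, threshold=0.75):
    """Return True if two embeddings likely come from the same speaker.

    The threshold is a placeholder; in practice it is tuned on a held-out
    set of same-speaker / different-speaker pairs.
    """
    cos_sim = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return cos_sim >= threshold

# e.g. emb_a = model.predict(mel_spectrogram_a)[0], using the model sketched above
```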
Contributions are welcome! If you have any improvements or new features to suggest, please create a pull request. If you have any questions or issues, feel free to open an issue.