This project is an implementation of audio classification using Mel-Frequency Cepstral Coefficients (MFCC) and neural networks. It covers feature extraction, model training on the UrbanSound8K dataset, and a small Flask web frontend for classifying uploaded audio clips.
The dataset used for this project is the UrbanSound8K dataset, which contains 8732 labeled sound excerpts (<=4 s) of urban sounds from 10 classes: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, siren, and street_music. The classes are drawn from the urban sound taxonomy. The audio files are pre-sorted into ten folds (folders named fold1-fold10) to make automatic classification results easy to reproduce and compare.
The dataset can be downloaded from Kaggle using the following command:
```bash
kaggle datasets download -d chrisfilo/urbansound8k
```
- Project Name
- Dataset Description
- Table of Contents
- Frontend
- Project Demo
- Installation
- Usage
- Configuration
- Contributing
This is the frontend part of the Audio Classification project. It consists of a Flask web application (`app.py`) and an HTML template (`index.html`) for audio classification.
This Flask web application serves as the backend for audio classification: it loads the trained model and exposes an endpoint for audio prediction.
- Install the required libraries using `pip`:

```bash
pip install flask tensorflow numpy librosa
```
- Save the trained model as `final_model1.h5` in the same directory as `app.py`.
- The Flask application loads the trained model using `tf.keras.models.load_model`.
- It defines two routes: `/` for rendering the main page and `/predict` for handling audio classification.
- When a POST request is made to `/predict`, it reads the audio file from the request, extracts MFCC features, and performs audio classification using the loaded model.
- The predicted class label is returned as the response.
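For reference, here is a minimal sketch of what such an `app.py` might look like. It is not the repository's exact code: the form field name (`file`), the MFCC settings (40 mean-pooled coefficients), and the model's flat 40-dimensional input are assumptions that must match whatever the training notebook produced.

```python
# Minimal sketch of the backend described above, not the repository's
# exact app.py. Assumptions: the upload arrives in a form field named
# "file", and the model takes a 40-dimensional mean-pooled MFCC vector.
import io

import librosa
import numpy as np
import tensorflow as tf
from flask import Flask, render_template, request

app = Flask(__name__)
model = tf.keras.models.load_model("final_model1.h5")

# Alphabetical class order, matching pd.get_dummies / LabelEncoder conventions.
CLASS_LABELS = [
    "air_conditioner", "car_horn", "children_playing", "dog_bark",
    "drilling", "engine_idling", "gun_shot", "jackhammer",
    "siren", "street_music",
]

def extract_mfcc(file_obj, n_mfcc=40):
    # Mean-pool MFCCs over time so every clip maps to a fixed-size vector.
    audio, sr = librosa.load(file_obj, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1)

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    uploaded = request.files["file"]  # assumed field name
    features = extract_mfcc(io.BytesIO(uploaded.read()))
    probs = model.predict(features.reshape(1, -1))
    return CLASS_LABELS[int(np.argmax(probs))]

if __name__ == "__main__":
    app.run(debug=True)
```

Mean-pooling the MFCCs over time yields a fixed-size feature vector regardless of clip length, which is what lets a plain dense network handle variable-length audio.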
This HTML template creates a simple web page for audio classification. Users can either drag and drop an audio file or select it using the file input. The predicted class label will be displayed, and the audio will be played on the page.
- The template uses CSS for styling and responsiveness.
- It contains a drop area where users can drag and drop the audio file or use the file input to upload.
- The JavaScript code handles the file upload, sends it to the backend for classification, and displays the predicted class label on the page.
- The predicted class label is shown in a result div, and the audio is played using the audio element.
Note: This frontend is designed to work with the backend code provided earlier. Make sure to run the Flask web application (`app.py`) to use this frontend for audio classification.
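With the server running, the endpoint can also be exercised from Python as a quick smoke test. The sample file name, the port, and the `file` form field below are assumptions:

```python
# Hypothetical smoke test for the /predict endpoint; assumes the Flask
# app is running locally on the default port and reads a form field
# named "file".
import requests

with open("dog_bark_sample.wav", "rb") as f:
    resp = requests.post("http://127.0.0.1:5000/predict", files={"file": f})
print(resp.text)  # expected to print a class label such as "dog_bark"
```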
Frontend code designed and developed by Harsh Gupta (Desperate Enuf).
Audio Alchemy: Audio Classification using MFCC and Neural Networks
- Clone the repository to your local machine.
```bash
git clone https://github.com/harshgupta1810/Audio_Classification
```
- Download the UrbanSound8K dataset from Kaggle using the command provided above.
- Run the Jupyter notebook `audio_classification.ipynb` to preprocess the audio data, extract MFCC features, and create and train the neural network model (a rough sketch of these steps follows below).
- To use the trained model for prediction, load the saved model and pass an audio file through it. An example of doing this is given in the last section of the Jupyter notebook.
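For orientation, the sketch below shows roughly what the preprocessing and training steps could look like. The directory layout, the 40-coefficient mean-pooled MFCC features, and the layer sizes are illustrative assumptions, not the notebook's exact configuration.

```python
# Hedged sketch of the feature-extraction and training steps the
# notebook performs. Paths, MFCC settings, and layer sizes are
# illustrative assumptions.
import librosa
import numpy as np
import pandas as pd
import tensorflow as tf

def features_for_file(path, n_mfcc=40):
    # Mean-pool MFCCs over time to get one fixed-size vector per clip.
    audio, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1)

# UrbanSound8K ships a metadata CSV listing each clip's fold and class;
# adjust the paths to wherever you unzipped the dataset.
meta = pd.read_csv("UrbanSound8K/metadata/UrbanSound8K.csv")
X = np.stack([
    features_for_file(f"UrbanSound8K/audio/fold{row.fold}/{row.slice_file_name}")
    for row in meta.itertuples()
])
y = pd.get_dummies(meta["class"], dtype="float32").to_numpy()

# A small dense classifier over the 40-dimensional MFCC vectors.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=30, batch_size=32, validation_split=0.2)
model.save("final_model1.h5")
```

Note that a random validation split ignores the dataset's predefined folds; for results comparable with other published UrbanSound8K numbers, cross-validate over the ten folds instead.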
If you wish to contribute to this project, please follow the guidelines below:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Commit your changes with descriptive commit messages.
- Push your changes to your forked repository.
- Create a pull request to the original repository.
Designed and developed by Harsh Gupta (Desperate Enuf).