Audio Alchemy: Audio Classification using MFCC and Neural Networks

This project is an implementation of audio classification using Mel-Frequency Cepstral Coefficients (MFCC) and Neural Networks. It extracts MFCC features from short audio clips and trains a neural network to classify them into ten urban sound categories.

Dataset Description

The dataset used for this project is the UrbanSound8K dataset, which contains 8732 labeled sound excerpts (<=4s) of urban sounds from 10 classes: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, siren, and street_music. The classes are drawn from the urban sound taxonomy. The audio files are pre-sorted into ten folds (folders named fold1 through fold10) so that automatic classification results can be reproduced and compared consistently.

Dataset Source

The dataset can be downloaded from Kaggle using the following command:

kaggle datasets download -d chrisfilo/urbansound8k
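
After downloading and unzipping, the clips sit in the fold directories alongside a metadata CSV that maps each file to its fold and class label. Below is a minimal sketch of loading the dataset and extracting mean-MFCC features with pandas and librosa; the paths and column names (UrbanSound8K.csv, slice_file_name, fold, class) are assumptions about the standard layout, so adjust them to match the unzipped archive.

import os
import numpy as np
import pandas as pd
import librosa

# Paths and column names are assumptions; adjust them to match
# the layout of the unzipped Kaggle archive.
DATA_DIR = "UrbanSound8K"
metadata = pd.read_csv(os.path.join(DATA_DIR, "UrbanSound8K.csv"))

features, labels = [], []
for _, row in metadata.iterrows():
    # Clips are pre-sorted into fold1 ... fold10, as described above.
    path = os.path.join(DATA_DIR, f"fold{row['fold']}", row["slice_file_name"])
    signal, sr = librosa.load(path, sr=22050)
    # 40 MFCCs averaged over time yield one fixed-length vector per clip.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
    features.append(np.mean(mfcc.T, axis=0))
    labels.append(row["class"])

X, y = np.array(features), np.array(labels)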

Table of Contents

  1. Project Name
  2. Dataset Description
  3. Table of Contents
  4. Frontend
  5. Project Demo
  6. Installation
  7. Usage
  8. Contributing

Frontend for Audio Classification

This is the frontend part of the Audio Classification project. It consists of a Flask web application (app.py) and an HTML template (index.html) for audio classification.

app.py

This Flask web application serves as the backend for the audio classification. It loads the trained model and provides an endpoint for audio prediction.

Project Setup

  1. Install the required libraries using pip.
pip install flask tensorflow numpy librosa
  2. Save the trained model as final_model1.h5 in the same directory as app.py.

Code Explanation

  • The Flask application loads the trained model using tf.keras.models.load_model.
  • It defines two routes: / for rendering the main page and /predict for handling audio classification.
  • When a POST request is made to /predict, it reads the audio file from the request, extracts MFCC features, and runs the loaded model on them.
  • The predicted class label is returned as the response; a minimal sketch of this flow follows the list.
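
Here is a minimal sketch of what app.py may look like. It assumes the model consumes a 40-dimensional mean-MFCC vector and that the upload arrives in a form field named file; both are assumptions, so match them to the notebook and index.html.

import numpy as np
import librosa
import tensorflow as tf
from flask import Flask, render_template, request

app = Flask(__name__)

# Model file name from the setup step above; the class order must
# match the label encoding used during training (assumed here).
model = tf.keras.models.load_model("final_model1.h5")
CLASSES = ["air_conditioner", "car_horn", "children_playing", "dog_bark",
           "drilling", "engine_idling", "gun_shot", "jackhammer",
           "siren", "street_music"]

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    audio = request.files["file"]  # form field name is an assumption
    # librosa reads wav/flac uploads directly from the file-like object.
    signal, sr = librosa.load(audio, sr=22050)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
    features = np.mean(mfcc.T, axis=0).reshape(1, -1)
    prediction = model.predict(features)
    return CLASSES[int(np.argmax(prediction))]

if __name__ == "__main__":
    app.run(debug=True)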

index.html

This HTML template creates a simple web page for audio classification. Users can either drag and drop an audio file or select it using the file input. The predicted class label will be displayed, and the audio will be played on the page.

Code Explanation

  • The template uses CSS for styling and responsiveness.
  • It contains a drop area where users can drag and drop the audio file or use the file input to upload.
  • The JavaScript code handles the file upload, sends it to the backend for classification, and displays the predicted class label on the page.
  • The predicted class label is shown in a result div, and the audio is played using the audio element.

Note: This frontend is designed to work with the backend code provided earlier. Make sure to run the Flask web application (app.py) to use this frontend for audio classification.


Frontend code designed and developed by Harsh Gupta (Desperate Enuf).

Project Name

Audio Alchemy: Audio Classification using MFCC and Neural Networks

Project Demo

A video demo and frontend screenshots are included in the repository.

Installation

  1. Clone the repository to your local machine.
git clone https://github.com/harshgupta1810/Audio_Classification

Usage

  1. Download the UrbanSound8K dataset from Kaggle using the provided command.

  2. Run the Jupyter notebook audio_classification.ipynb to preprocess the audio data, extract MFCC features, and create and train the neural network model (an illustrative model sketch follows this list).

  3. To use the trained model for prediction, load the saved model and pass an audio file through it. An example of doing this is given in the last section of the Jupyter Notebook.
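
The notebook is the source of truth for the model. Purely as an illustration, a dense network over 40 mean-MFCC features could be built as below; the layer sizes, dropout rates, and training settings are assumptions, not the project's actual architecture.

import tensorflow as tf

# Illustrative architecture only; the real model is defined in
# audio_classification.ipynb. Input: 40 mean-MFCC features,
# output: the 10 UrbanSound8K classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X, y_encoded, validation_split=0.2, epochs=50)
# model.save("final_model1.h5")  # produces the artifact app.py loads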

Contributing

If you wish to contribute to this project, please follow the guidelines below:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Commit your changes with descriptive commit messages.
  4. Push your changes to your forked repository.
  5. Create a pull request to the original repository.

Designed and developed by Harsh Gupta (Desperate Enuf).
