FreezeGuard is a project for predicting gait freeze events from wearable sensor data. The workflow covers:
- Data Collection: Using wearable sensors to collect gait data.
- Data Preprocessing: Cleaning and preparing data for model training.
- Model Training: Training machine learning models to predict freeze events.
- Evaluation: Assessing model performance using various metrics.
The project is built with:
- Python
- Jupyter Notebook
- Pandas
- Scikit-learn
- NumPy
The dataset used in this project consists of sensor data collected from wearable devices. The data includes various features that help in predicting gait freeze events.
To set up the project locally:
- Clone the repository: `git clone https://github.com/dasdebanna/FreezeGuard-Predicting-Gait-Freeze-Events-using-Wearable-Sensors.git`
- Navigate to the project directory: `cd FreezeGuard-Predicting-Gait-Freeze-Events-using-Wearable-Sensors`
- Install the dependencies: `pip install -r requirements.txt`
- Start Jupyter: `jupyter notebook`
- Open the `problem-solution.ipynb` notebook.
- Follow the instructions in the notebook to preprocess the data, train the model, and evaluate its performance.
The model training process involves the following steps (a code sketch follows the list):
- Loading the dataset: Import the dataset and load it into a Pandas DataFrame.
- Data preprocessing:
- Handle missing values.
- Normalize the data to ensure all features contribute equally to the model.
- Extract meaningful input features from the raw sensor data.
- Splitting the data: Divide the data into training and testing sets using a train-test split approach.
- Model selection: Choose appropriate machine learning algorithms (e.g., Random Forest, SVM).
- Training the model: Train the selected model using the training data.
- Hyperparameter tuning: Optimize model performance by tuning hyperparameters using techniques like Grid Search or Random Search.
- Model evaluation: Assess the model with cross-validation on the training data to obtain a reliable performance estimate and guard against overfitting.
- Saving the model: Save the trained model for future use with a library such as `joblib`.
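
The steps above can be sketched roughly as follows. The file name `gait_data.csv`, the target column `freeze_event`, and the hyperparameter grid are illustrative assumptions, not part of the repository; see `problem-solution.ipynb` for the actual workflow.

```python
# Rough sketch of the training pipeline described above.
# File name, column names, and hyperparameters are assumptions for illustration.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load the dataset into a DataFrame (hypothetical file name).
df = pd.read_csv("gait_data.csv")

# Handle missing values (forward fill is one simple strategy).
df = df.ffill().dropna()

# Separate features and the target label (hypothetical column name).
X = df.drop(columns=["freeze_event"])
y = df["freeze_event"]

# Train-test split, stratified to preserve the freeze/no-freeze ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Normalization and model in one pipeline so scaling is fit only on training folds.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", RandomForestClassifier(random_state=42)),
])

# Hyperparameter tuning with grid search and cross-validation.
param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 10, 20],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1", n_jobs=-1)
search.fit(X_train, y_train)

# Save the best model for later use.
joblib.dump(search.best_estimator_, "freezeguard_model.joblib")
```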
Model performance is evaluated using metrics such as the following (a short sketch of computing them appears after the list):
- Accuracy: Proportion of correctly predicted instances out of the total instances.
- Precision: Proportion of true positive predictions out of the total positive predictions.
- Recall: Proportion of true positive predictions out of the actual positives.
- F1-Score: Harmonic mean of precision and recall, providing a balance between the two.
- Confusion Matrix: Tabulation of true/false positives and negatives, showing where the model's predictions go wrong.
- ROC Curve and AUC: Evaluate the trade-off between true positive rate and false positive rate.
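
A minimal sketch of computing these metrics with scikit-learn, assuming a fitted classifier `model` (e.g., from the training sketch above) and a held-out test set `X_test`, `y_test`:

```python
# Sketch of the evaluation metrics listed above; `model`, `X_test`, and `y_test`
# are assumed to come from a training step like the one sketched earlier.
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

y_pred = model.predict(X_test)

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))

# ROC AUC needs probability scores rather than hard labels,
# so this assumes the classifier exposes predict_proba.
y_score = model.predict_proba(X_test)[:, 1]
print("ROC AUC  :", roc_auc_score(y_test, y_score))
```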
The evaluation process includes:
- Making predictions on the test set.
- Calculating performance metrics to determine the model’s effectiveness.
- Analyzing misclassifications to identify areas for model improvement.
- Comparing different models and selecting the best-performing one (see the sketch below).
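
For example, cross-validation scores can be compared across candidate models before selecting one. The specific models, scoring metric, and the `X_train`/`y_train` names below are illustrative assumptions:

```python
# Illustrative comparison of candidate models using cross-validation on the
# training split; the models and scoring metric here are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

candidates = {
    "random_forest": RandomForestClassifier(random_state=42),
    "svm": make_pipeline(StandardScaler(), SVC()),
}

for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X_train, y_train, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```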
Contributions are welcome! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch: `git checkout -b feature-branch`
- Make your changes and commit them: `git commit -m "Add new feature"`
- Push to the branch: `git push origin feature-branch`
- Create a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.