Adding a feature or Testing our model
We made it as easy as possible to add features to the model; nevertheless, there's still some work to do.
As is explained in Feature Extraction, we initialize signal objects to pass into functions.
If you have a new feature, make sure to implement its base information as an attribute in the Signal class's __init__ function. A good example of this is storing an attribute for the indices of R peaks and the resulting RR intervals,
```python
self.RPeaks = wave.getRPeaks(self.data, sampling_rate=self.sampling_rate)
self.RRintervals = wave.interval(self.RPeaks)
```
but not storing the variance or mean of the RR intervals in the object, because each of those is only used once, in the feature extraction function:
```python
features.append(np.var(sig.RRintervals))
```
Basically, if you have a new feature and it acts as the basis for other features, add it to the signal object. Otherwise, read on!
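For orientation, here's a rough sketch of where that base information lives, assuming the Signal class and the repo's wave module described in Feature Extraction; the constructor signature and the newBasis example are illustrative, not copied from the codebase:

```python
# Sketch only: the real Signal class lives in the repo. It assumes the
# repo's wave module is importable; newBasis is a hypothetical placeholder.
class Signal(object):
    def __init__(self, name, data, sampling_rate):
        self.name = name
        self.data = data
        self.sampling_rate = sampling_rate

        # base information that other features build on
        self.RPeaks = wave.getRPeaks(self.data, sampling_rate=self.sampling_rate)
        self.RRintervals = wave.interval(self.RPeaks)

        # a new feature's base information would be added here, e.g.
        # self.newBasis = wave.getNewBasis(self.data)  # hypothetical
```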
The getFeatures() function is pretty simple: there's a list of features and we just keep adding to it. Later on, each returned list is added as a row in the feature matrix.
```python
def getFeatures(sig):
    features = [sig.name]
    features += list(sig.RRbinsN)
    features.append(np.var(sig.RRintervals))
    # ...the rest of the features are appended here...
    return features
```
When you want to add a feature, simply append it to the features list in this function.
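For example, the mean RR interval mentioned above would just be one more line in the function body (a sketch; check whether it's already in the real list before adding it):

```python
# mean RR interval, derived from an attribute the Signal object already stores
features.append(np.mean(sig.RRintervals))
```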
The saveSignalFeatures() function is meant to hardcode all the features for every signal so we don't have to re-derive them every time we run our model.
When you add a feature to getFeatures(), you need to re-run this function to create a new hardcoded_features.csv.
You'll see that when we want to run feature_extract(), we call the getFeaturesHardcoded() function, which uses the CSV instead of initializing a new signal object for every record.
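Conceptually, the round trip looks something like the sketch below; the real functions in the repo may differ in details, and load_data() here is a hypothetical stand-in for however a record's data actually gets read:

```python
import csv

def saveSignalFeatures(records):
    # derive every feature once and cache one row per record on disk
    with open('hardcoded_features.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        for record in records:
            sig = Signal(record, load_data(record))  # load_data is hypothetical
            writer.writerow(getFeatures(sig))

def getFeaturesHardcoded(record_name):
    # read the cached row back instead of rebuilding a Signal object
    with open('hardcoded_features.csv') as f:
        for row in csv.reader(f):
            if row and row[0] == record_name:
                return row
    return None
```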
Do these 3 things and you're good to go!
For the time being, this procedure uses scikit-learn to create an SVM model, and PCA to narrow down the feature space. This all happens in Python, though @Alex and @Andy are working on automating the testing of the R-based multinomial logistic regression.
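In rough terms, the scikit-learn side looks like this sketch; the feature matrix X, labels y, number of components, and file name are all placeholders rather than values from the repo:

```python
import pickle
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X = np.random.rand(100, 20)             # placeholder feature matrix (records x features)
y = np.random.randint(0, 4, size=100)   # placeholder class labels

pca = PCA(n_components=10)              # narrow down the feature space
X_reduced = pca.fit_transform(X)

svm = SVC()                             # fit the SVM on the reduced features
svm.fit(X_reduced, y)

# persist both fitted models so the scoring step can load them later
with open('pca_svm_models.pkl', 'wb') as f:
    pickle.dump((pca, svm), f)
```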
The steps are pretty simple (a sketch of the full sequence follows the list):
1. Run the feature extraction function; this creates a feature_matrices pickle object for the model to use.
2. Run the model function; this creates a pickle object of the PCA and SVM models.
3. Comment out the lines you wrote to run the previous functions (1 and 2), then go to Score.py and just run the file. It'll take a minute, but eventually it will score the results of the new model with your newly implemented feature.
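Put together, a throwaway driver for steps 1 and 2 might look like the sketch below; the function names come from this page, but the call signatures and any arguments are assumptions:

```python
# step 1: build the feature_matrices pickle for the model to use
feature_extract()

# step 2: fit the PCA and SVM models and pickle them
model()

# step 3: comment out the two calls above, then run Score.py on its own:
#     python Score.py
```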