100 Days of Machine Learning Coding as proposed by Siraj Raval
Today's work:- I have installed all the tools and packages required for this challenge.
Check out the code from here.
Today's work:- I have completed the most crucial Data Preprocessing step on the following dataset.
Check out the code from here.
Today's work:- I have applied Simple Linear Regression on the following dataset and obtained the following graphs for the training and test predictions.
Check out the code from here.
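A minimal sketch of that workflow, assuming a one-feature CSV such as years of experience versus salary (the file and column names here are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Hypothetical dataset: one feature (YearsExperience) and one target (Salary)
data = pd.read_csv('Salary_Data.csv')
X = data[['YearsExperience']].values
y = data['Salary'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

regressor = LinearRegression().fit(X_train, y_train)

# Plot the fitted line against the training points
plt.scatter(X_train, y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title('Training set: observations vs. fitted line')
plt.show()
```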
Today's work:- I have applied Multiple Linear Regression on the following dataset and also applied the Backward Elimination method to get the best model.
Check out the code from here.
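One common way to implement backward elimination is with statsmodels OLS p-values; this sketch (the significance level and array layout are assumptions) drops the weakest predictor until every remaining p-value clears the threshold:

```python
import numpy as np
import statsmodels.api as sm

def backward_elimination(X, y, sl=0.05):
    """Repeatedly fit OLS and drop the predictor with the highest p-value above sl."""
    X = sm.add_constant(X)              # prepend the intercept column
    while True:
        model = sm.OLS(y, X).fit()
        p_values = model.pvalues
        if p_values.max() <= sl:
            return model                # every remaining predictor is significant
        X = np.delete(X, np.argmax(p_values), axis=1)  # remove the weakest one
```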
Today's work:- I have applied Polynomial Regression on the following dataset to predict whether an employee is telling the truth or bluffing about their salary, and got the following graphs for Linear and Polynomial Regression.
Check out the code from here.
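The trick is to expand the single feature into polynomial terms and then fit an ordinary linear model on them; a sketch with illustrative position-level/salary numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Illustrative data: position levels 1-10 vs. salary
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000]) * 1000.0

# Expand x into [1, x, x^2, x^3, x^4], then fit a linear model on those terms
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)
regressor = LinearRegression().fit(X_poly, y)

# Check the claimed salary at level 6.5 against the fitted curve
print(regressor.predict(poly.transform([[6.5]])))
```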
Today's work:- I have applied Support Vector Regression (SVR) on the following dataset to predict whether an employee is telling the truth or bluffing about their salary, and got the following graph for Support Vector Regression.
Check out the code from here.
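One detail worth a sketch: unlike LinearRegression, scikit-learn's SVR does not scale inputs itself, so X and y need explicit scaling (same illustrative numbers as above):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000]) * 1000.0

# SVR is sensitive to feature scale, so standardize X and y separately
sc_X, sc_y = StandardScaler(), StandardScaler()
X_s = sc_X.fit_transform(X)
y_s = sc_y.fit_transform(y.reshape(-1, 1)).ravel()

regressor = SVR(kernel='rbf').fit(X_s, y_s)

# Undo the scaling to read the prediction in salary units
pred = sc_y.inverse_transform(
    regressor.predict(sc_X.transform([[6.5]])).reshape(-1, 1))
print(pred)
```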
Today's work:- I have applied Decision Tree Regression on the following dataset to predict whether an employee is telling the truth or bluffing about their salary, got the following graph for Decision Tree Regression, and also plotted the tree.
Check out the code from here.
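A sketch of both plots mentioned above: the fit needs a fine prediction grid to reveal its step shape, and plot_tree draws the tree itself (same illustrative numbers as above):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor, plot_tree

X = np.arange(1, 11).reshape(-1, 1)
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000]) * 1000.0

regressor = DecisionTreeRegressor(random_state=0).fit(X, y)

# Predict on a fine grid to reveal the characteristic step function
X_grid = np.arange(1, 10, 0.01).reshape(-1, 1)
plt.scatter(X, y, color='red')
plt.plot(X_grid, regressor.predict(X_grid), color='blue')
plt.show()

# Draw the fitted tree
plot_tree(regressor, filled=True)
plt.show()
```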
Today's work:- I have applied Random Forest Regression on the following dataset to predict whether an employee is telling the truth or bluffing about their salary, and got the following graphs for forests of 10, 100, and 500 decision trees.
Today's work:- I have learnt about R-squared and Adjusted R-squared and computed them on the following dataset for the various algorithms studied so far. I also studied the pros and cons of each of these algorithms.
Check out the Analysis from here
Project's work:- I have done a project on board game review prediction using regression on the following dataset, with all the knowledge of regression and data preprocessing obtained so far. I also learnt and used new regression techniques in this project.
Check out the code from here
Today's work:- I have applied Logistic Regression on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets.
Check out the code from here
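A minimal sketch of the pipeline, assuming the file and column names below; the same scaffold with the classifier swapped out (KNeighborsClassifier, SVC, GaussianNB, DecisionTreeClassifier, RandomForestClassifier) covers the classification days that follow:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical columns: Age, EstimatedSalary -> Purchased (0/1)
data = pd.read_csv('Social_Network_Ads.csv')
X = data[['Age', 'EstimatedSalary']].values
y = data['Purchased'].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Scale so the decision boundary isn't dominated by the salary column's magnitude
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)

classifier = LogisticRegression(random_state=0).fit(X_train, y_train)
print(confusion_matrix(y_test, classifier.predict(X_test)))
```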
Today's work:- I have applied the K-NN Classifier on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets.
Check out the code from here
Today's work:- I have applied a Linear Support Vector Machine (SVM) on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets.
Check out the code from here
Today's work:- I have applied a Kernel Support Vector Machine (K-SVM) on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets.
Check out the code from here
Today's work:- Today I have learnt about Bayes' Theorem and its applications. Then I applied the Naive Bayes algorithm on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets.
Check out the code from here
Today's work:- I have applied Decision Tree Classification on the following dataset to predict, for a car company, whether a person will buy an SUV, obtained the following graphs for the training and test sets, and also visualized the model by plotting the actual tree.
Check out the code from here
Today's work:- I have applied Random Forest Classification on the following dataset to predict, for a car company, whether a person will buy an SUV, and obtained the following graphs for the training and test sets. Then I compared the results while varying the number of trees and obtained the following graph.
Today's work:- Today I studied the Accuracy Paradox of the confusion matrix and analysed models with the CAP curve analysis method to pick the best possible model among the various classification algorithms. I also studied the pros and cons of the various classification algorithms.
Check out the Jupyter Notebook from here
Project's work:- I have done a project on credit card fraud detection using classification, with all the knowledge of classification and data preprocessing obtained so far. I also learnt and used new classification techniques in this project.
Check out the code from here
Today's work:- I have applied the K-Means Clustering algorithm on the following dataset to cluster mall customers by their annual income and spending score. I first applied the Elbow Method to obtain the optimal value of K, as shown in the following graphs in Python and R, then fitted the model with that value of K and plotted the following graphs in Python and R.
*(Cluster plots for the Python and R implementations.)*
Check out the project here
Check out the Jupyter Notebook from here
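A minimal sketch of the Python side, assuming a mall-customers CSV with those two columns (the final K of 5 is an assumption read off the elbow):

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Hypothetical columns: annual income and spending score
X = pd.read_csv('Mall_Customers.csv')[
    ['Annual Income (k$)', 'Spending Score (1-100)']].values

# Elbow method: plot within-cluster sum of squares (WCSS) against K
wcss = [KMeans(n_clusters=k, init='k-means++', random_state=42).fit(X).inertia_
        for k in range(1, 11)]
plt.plot(range(1, 11), wcss)
plt.xlabel('K'); plt.ylabel('WCSS')
plt.show()

# Refit with the K read off the elbow and take the cluster labels
labels = KMeans(n_clusters=5, init='k-means++', random_state=42).fit_predict(X)
```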
Project's work:- In this project, we will use the K-means algorithm to perform image classification. Clustering isn't limited to consumer information and the population sciences; it can be used for imagery analysis as well. Leveraging Scikit-learn and the MNIST dataset, we will investigate the use of K-means clustering for computer vision.
For this project, we will be using the MNIST dataset. It is available through keras, a deep learning library we have used in previous tutorials. Although we won't be using other features of keras today, it will save us time to import mnist from this library. It is also available through the tensorflow library or for download at http://yann.lecun.com/exdb/mnist/.
Check out the code from here
Today's work:- I have applied the Hierarchical Clustering algorithm on the following dataset to cluster mall customers by their annual income and spending score. I first drew a dendrogram of the dataset to obtain the optimal value of K, as shown in the following dendrograms in Python and R, then fitted the model with that value of K and plotted the following graphs in Python and R.
*(Cluster plots for the Python and R implementations.)*
Check out the code from here
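A sketch of the dendrogram step with SciPy followed by the agglomerative fit (same assumed data layout as the K-means sketch; the cluster count read off the dendrogram is an assumption):

```python
import pandas as pd
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as sch
from sklearn.cluster import AgglomerativeClustering

X = pd.read_csv('Mall_Customers.csv')[
    ['Annual Income (k$)', 'Spending Score (1-100)']].values

# Ward-linkage dendrogram: the tallest vertical gap suggests where to cut
sch.dendrogram(sch.linkage(X, method='ward'))
plt.ylabel('Euclidean distance')
plt.show()

# Fit with the number of clusters chosen from the dendrogram
labels = AgglomerativeClustering(n_clusters=5, linkage='ward').fit_predict(X)
```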
Today's work:- I have applied the Apriori Association Rule Learning algorithm on the following dataset to find the optimal placement of products in a supermarket to maximize revenue.
Check out the code from here
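A sketch using the third-party apyori package with toy transactions; the thresholds are assumptions to be tuned to the basket data:

```python
from apyori import apriori  # third-party: pip install apyori

# Each transaction is a list of product names bought together (toy example)
transactions = [
    ['milk', 'bread'],
    ['bread', 'butter'],
    ['milk', 'bread', 'butter'],
]

rules = list(apriori(transactions,
                     min_support=0.003,     # how often the itemset appears
                     min_confidence=0.2,    # P(consequent | antecedent)
                     min_lift=3))           # how much better than chance
for rule in rules:
    print(rule)
```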
Today's work:- I have applied the Eclat Association Rule Learning algorithm on the following dataset to find the optimal placement of products in a supermarket to maximize sales.
Check out the code from here
Today's work:- I have applied the Upper Confidence Bound (UCB) Reinforcement Learning algorithm on the following dataset to predict the best ad for a car company based on the ads' performance on different social media platforms. This was done by giving a reward of 1 if the user interacted with the ad and 0 if they didn't. I obtained the following histogram based on the prediction.
Check out the code from here
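The core of UCB fits in a few lines; this sketch simulates a click log (the click probabilities are made up) and keeps per-ad running statistics:

```python
import math
import random

random.seed(0)
# Simulated click log: rewards[n][i] = 1 if ad i would be clicked at round n
probs = [0.05, 0.13, 0.09]
rewards = [[int(random.random() < p) for p in probs] for _ in range(10000)]

d = len(probs)
counts, sums = [0] * d, [0.0] * d
for n, row in enumerate(rewards):
    # Select the ad with the highest upper confidence bound
    def ucb(i):
        if counts[i] == 0:
            return float('inf')            # try every ad at least once
        mean = sums[i] / counts[i]
        return mean + math.sqrt(1.5 * math.log(n + 1) / counts[i])
    ad = max(range(d), key=ucb)
    counts[ad] += 1
    sums[ad] += row[ad]

print(counts)  # the best ad should dominate the selections
```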
Today's work:- I have applied the Thompson Sampling Reinforcement Learning algorithm on the following dataset to predict the best ad for a car company based on the ads' performance on different social media platforms. This was done by giving a reward of 1 if the user interacted with the ad and 0 if they didn't. I obtained the following histogram based on the prediction.
Check out the code from here
Today's work:- I have processed the following dataset to get the best sparse matrix for predicting whether a customer likes the food in a restaurant based on their review. I first removed unnecessary words using NLP techniques, then obtained a sparse matrix with a maximum of 1500 features. I then applied the Naive Bayes and Random Forest classification algorithms to predict whether the customer likes the food or not.
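A sketch of the cleaning-plus-bag-of-words step; the two sample reviews and labels below are stand-ins for the real dataset:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB

nltk.download('stopwords')
stemmer = PorterStemmer()
stop = set(stopwords.words('english'))

def clean(review):
    # Keep letters only, lowercase, drop stopwords, stem the rest
    words = re.sub('[^a-zA-Z]', ' ', review).lower().split()
    return ' '.join(stemmer.stem(w) for w in words if w not in stop)

# Hypothetical stand-ins for the restaurant reviews and liked/disliked labels
reviews = ['Wow... Loved this place.', 'Crust is not good.']
labels = [1, 0]

corpus = [clean(r) for r in reviews]
X = CountVectorizer(max_features=1500).fit_transform(corpus).toarray()
classifier = GaussianNB().fit(X, labels)
```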
Project's work:- Text classification using a simple support vector classifier on a dataset of labeled SMS messages. The dataset comes from the UCI Machine Learning Repository and contains over 5,000 SMS messages that were collected for mobile phone spam research.
Project's work:- I have done a Natural Language Processing project on the movie reviews corpus from the nltk library, in which I classified whether a review is positive or negative using a Support Vector Classifier.
Check out the code from here
Today's work:- I have processed the following dataset, which has 12 features, to reduce it to a smaller number of features using the Principal Component Analysis (PCA) method, which works on the variance of the features. Then I applied the Support Vector Machine algorithm to predict the customer segment based on their wine selection.
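A runnable sketch using scikit-learn's built-in wine data as a stand-in for the course dataset:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize first: PCA picks directions of maximum variance, so units matter
sc = StandardScaler()
X_train, X_test = sc.fit_transform(X_train), sc.transform(X_test)

# Keep the two components that explain the most variance
pca = PCA(n_components=2)
X_train, X_test = pca.fit_transform(X_train), pca.transform(X_test)
print(pca.explained_variance_ratio_)

clf = SVC(kernel='linear').fit(X_train, y_train)
print(clf.score(X_test, y_test))
```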
Project's work:- This project focuses on mapping high-dimensional data to a lower-dimensional space, a necessary step for projects that utilize data compression or data visualization. As the ethical discussions surrounding AI continue to grow, scientists and businesses alike are using visualizations of high-dimensional data to explain results.
During this project, we will perform K-Means clustering on the well-known Iris dataset, which contains 3 classes of 50 instances each, where each class refers to a type of iris plant. To visualize the clusters, we will use principal component analysis (PCA) to reduce the number of features in the dataset.
Check out the code from here
Today's work:- I have processed the following dataset, which has 12 features, to reduce it to a smaller number of features using the Linear Discriminant Analysis (LDA) method. Then I applied the Support Vector Machine algorithm to predict the customer segment based on their wine selection.
Check out the code from here
Today's work:- I have processed the following dataset to reduce it to a smaller number of features using Kernel PCA with a Gaussian kernel. Then I applied Logistic Regression to predict, for a car company, whether a person will buy an SUV.
Check out the code from here
Today's work:- I have applied the K-Fold Cross-Validation technique on the following dataset, dividing it into 10 parts to get the best model, and got an average accuracy of 91 percent.
Check out the code from here
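A sketch with synthetic data; cross_val_score handles the 10-way splitting and refitting:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# cv=10: train on 9 folds, test on the held-out fold, repeated 10 times
accuracies = cross_val_score(SVC(kernel='rbf'), X, y, cv=10)
print(accuracies.mean(), accuracies.std())
```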
Today's work:- I have applied the Grid Search algorithm on the following dataset to get the best model. I first applied a linear kernel with penalty values 1, 10, 100, and 1000, then an RBF kernel with the same penalties, and got an average accuracy of 90 percent.
Check out the code from here
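A sketch of the two grids described above (the gamma values added to the RBF grid are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Two grids: linear kernel with C in {1,10,100,1000}, then rbf with the same
# penalties plus a gamma sweep
param_grid = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'kernel': ['rbf'],
     'gamma': [0.5, 0.1, 0.01, 0.001]},
]
grid = GridSearchCV(SVC(), param_grid, scoring='accuracy', cv=10)
grid.fit(X, y)
print(grid.best_score_, grid.best_params_)
```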
Today's work:- I have applied XGBoost on the following dataset to obtain the best model and got an accuracy of 87 percent.
Check out the code from here
Today's work:- I have created an Artificial Neural Network for the following dataset to predict whether a customer of the bank exited or not, with one input layer of 6 nodes, one hidden layer of 6 nodes, and one output layer of 1 node. I then obtained an accuracy of 86 percent from the confusion matrix.
Check out the code from here
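A sketch of the described architecture in Keras; input_dim=11 is an assumption about the number of preprocessed churn features, and X_train/y_train are hypothetical:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Two 6-unit ReLU layers and one sigmoid output for the exited/stayed label
model = Sequential([
    Dense(6, activation='relu', input_dim=11),
    Dense(6, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# X_train, y_train: scaled features and 0/1 churn labels (hypothetical)
model.fit(X_train, y_train, batch_size=10, epochs=100)
```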
Today's work:- I have created a Convolutional Neural Network for predicting whether an image is of a cat or a dog, using 5000 images of each (4000 training and 1000 test), and got an accuracy of 85 percent. I first applied the convolution operation with ReLU as the activation function on each image, then applied the pooling step, then flattened the result and fed it into the neural network, obtaining the above prediction successfully.
Check out the code from here
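A sketch of that stack in Keras; the filter counts and the 64x64 input size are assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Convolution + ReLU -> pooling -> flatten -> dense, as described above
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid'),   # cat vs. dog
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```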
Today's work:- I have created a Recurrent Neural Network for predicting the Google stock price in 2017, using the previous 3 months of data to predict the next day's price. I built a Sequential Keras model with 50 input neurons, then 3 hidden layers of 50 neurons each, and finally one output layer of one neuron, and obtained the following graph.
Check out the code from here
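A sketch matching the 50/50/50/50/1 shape; the LSTM cell type and the 60-step lookback window are assumptions about the implementation:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Input recurrent layer of 50 units, three hidden layers of 50, one output
model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(60, 1)),
    LSTM(50, return_sequences=True),
    LSTM(50, return_sequences=True),
    LSTM(50),                      # last recurrent layer returns a vector
    Dense(1),                      # next day's price
])
model.compile(optimizer='adam', loss='mean_squared_error')
```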
Today's work:- I have created a Self-Organizing Map (SOM) on the dataset for predicting whether a user is fraudulent when applying for a credit card. I used the third-party library minisom to obtain the following Self-Organizing Map.
Check out the code from here
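A sketch with minisom; the grid size, the training parameters, and the stand-in feature matrix are assumptions:

```python
import numpy as np
from minisom import MiniSom  # third-party: pip install minisom

# Stand-in for the scaled credit-application feature matrix
X = np.random.rand(690, 15)

som = MiniSom(x=10, y=10, input_len=X.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(X)
som.train_random(X, num_iteration=100)

# Cells far from their neighbours (bright on the map) flag outliers,
# i.e. fraud candidates
print(som.distance_map())
```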
Today's work:- Yesterday, I created an unsupervised Self-Organizing Map (SOM) on the dataset for predicting whether a user is fraudulent when applying for a credit card, using the third-party library minisom. Today, I made changes to the existing model, taking it from unsupervised to supervised learning by adding a Sequential Keras neural network with the Adam optimizer.
Check out the code from here
Today's work:- I have created an unsupervised movie recommendation system for recommending movies to a new user using a Deep Boltzmann Machine.
Check out the code from here
Today's work:- I have created an unsupervised movie recommendation system for recommending movies to a new user using AutoEncoders.
Check out the code from here
Project's work:- Worked on a chatbot project based on a Sequence to Sequence (seq2seq) deep Natural Language Processing model, using the Cornell Movie-Dialogs dataset, which consists of 220,579 conversational exchanges between 10,292 pairs of movie characters, involving 9,035 characters from 617 movies.
Check out the code from here
Today's work:- I have used OpenCV to detect faces with a Cascade Classifier, using haarcascade_frontalface_default.xml as the detector.
Check out the code from here
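A sketch of the detection loop; OpenCV bundles the cascade file, so only the input image name is an assumption:

```python
import cv2

# Load the pretrained Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('input.jpg')                 # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

# Draw a rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite('output.jpg', img)
```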
Today's work:- I have used OpenCV to detect smiles in faces with a Cascade Classifier, using haarcascade_smile.xml as the detector.
Check out the code from here
Today's work:- I have applied the Single Shot Detection (SSD) algorithm for object detection on funny_dog.mp4 to detect the objects in this video.
Check out the project here
Check out the Jupyter Notebook from here
Project's work:- In this project, I have deployed a convolutional neural network (CNN) for object recognition. More specifically, I have used the All-CNN network published in the 2015 ICLR paper "Striving for Simplicity: The All Convolutional Net". This paper can be found at the following link.
Check out the project here
Check out the Jupyter Notebook from here
Project's work:- The goal of super-resolution (SR) is to recover a high-resolution image from a low-resolution input, or as they might say on any modern crime show, enhance!
To accomplish this goal, we will be deploying the super-resolution convolutional neural network (SRCNN) using Keras. This network was published in the paper "Image Super-Resolution Using Deep Convolutional Networks" by Chao Dong et al. in 2014. You can read the full paper at https://arxiv.org/abs/1501.00092.
Check out the code from here
Today's work:- I have created and defined the Generator for a GAN, which generates images for the Discriminator, whose job is to distinguish between real and generated images as best as it can whenever an image is fed to it.
Check out the code from here
Today's work:- I have created and defined the Discriminator for the GAN, which distinguishes between real and generated images as best as it can when an image is fed to it by the generator. I then trained the discriminator on real and generated images, backpropagating the total error and simultaneously updating the weights of the generator's network. I also printed the losses and saved the real and generated images of the minibatch at every 100th step, and finally obtained the following result.
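A sketch of that adversarial step in PyTorch; netG, netD (assumed to end in a sigmoid), and the image dataloader are taken as already defined, and the 100-dimensional noise vector is an assumption:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
opt_d = torch.optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))

for step, (real, _) in enumerate(dataloader):
    ones = torch.ones(real.size(0))
    zeros = torch.zeros(real.size(0))

    # 1) Train the discriminator on a real batch and a generated batch
    netD.zero_grad()
    err_real = criterion(netD(real).view(-1), ones)
    fake = netG(torch.randn(real.size(0), 100, 1, 1))
    err_fake = criterion(netD(fake.detach()).view(-1), zeros)
    (err_real + err_fake).backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator answer "real"
    netG.zero_grad()
    err_g = criterion(netD(fake).view(-1), ones)
    err_g.backward()
    opt_g.step()

    if step % 100 == 0:
        print(f'step {step}: D loss {(err_real + err_fake).item():.3f}, '
              f'G loss {err_g.item():.3f}')
```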