
DEEP-LEARNING


Multi Layer Perceptron (MLP)

I have applied the MLP to the MNIST dataset.

ABBREVIATIONS
1. BN -> BATCH NORMALISATION
2. DP -> DROPOUT
3. IP -> INPUT LAYER
4. OP -> OUTPUT LAYER

FIRST MODEL

NO. OF HIDDEN LAYERS: 2
ARCHITECTURE OF MODEL: IP->400->BN->DP->150->BN->DP->OP
Categorical Crossentropy Loss: 0.0617
ACCURACY: 98.14%
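
The layer sequence above maps directly onto a Keras Sequential model. Below is a minimal sketch, assuming tf.keras, a flattened 784-dimensional MNIST input, ReLU activations, and a dropout rate of 0.3; these hyperparameters are illustrative assumptions, not values taken from the original notebooks.

```python
# Minimal sketch of the first MLP: IP->400->BN->DP->150->BN->DP->OP.
# Dropout rate, activations, and optimizer are assumptions for illustration.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(784,)),              # flattened 28x28 MNIST image
    layers.Dense(400, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(150, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),  # 10 MNIST classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```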

SECOND MODEL

NO. OF HIDDEN LAYERS: 3
ARCHITECTURE OF MODEL: IP->600->BN->DP->300->BN->DP->250->BN->DP->OP
Categorical Crossentropy Loss: 0.0585
ACCURACY: 98.37%

THIRD MODEL

NO. OF HIDDEN LAYERS: 5
ARCHITECTURE OF MODEL: IP->600->BN->DP->500->BN->DP->400->BN->DP->300->BN->DP->250->BN->DP->OP
Categorical Crossentropy Loss: 0.0601
ACCURACY: 98.32%

FOURTH MODEL

NO. OF HIDDEN LAYERS: 5 (DROPOUTS ONLY, NO BATCH NORMALISATION)
ARCHITECTURE OF MODEL: IP->700->DP->550->DP->390->DP->290->DP->670->DP->OP
Categorical Crossentropy Loss: 0.6135
ACCURACY: 74.38%

FIFTH MODEL

NO. OF HIDDEN LAYERS: 4 WITH ALTERNATING DROPOUTS (DROPOUT AFTER EVERY SECOND HIDDEN LAYER)
ARCHITECTURE OF MODEL: IP->790->390->DP->290->670->DP->OP
Categorical Crossentropy Loss: 0.469
ACCURACY: 82.42%

Convolutional Neural Network

I have applied the CNN to the MNIST dataset.

ABBREVIATIONS
1. BN -> BATCH NORMALISATION
2. DP -> DROPOUT
3. IP -> INPUT LAYER
4. OP -> OUTPUT LAYER
5. FL -> FLATTEN LAYER
6. CN -> CONVOLUTIONAL LAYER
7. PL -> MAX POOLING
8. DN -> DENSE LAYER

FIRST MODEL

NO. OF HIDDEN LAYERS: 3
ARCHITECTURE OF MODEL: IP->CN->DP->CN->CN->PL->DP->FL->DN->DP->DN
Categorical Crossentropy Loss: 0.02223
ACCURACY: 99.31%
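
As a rough illustration of how this layer sequence translates into code, here is a minimal tf.keras sketch; the filter counts, 3x3 kernels, dense width, and dropout rates are assumptions for illustration and are not taken from the original notebooks.

```python
# Minimal sketch of the first CNN: IP->CN->DP->CN->CN->PL->DP->FL->DN->DP->DN.
# Filter counts, kernel sizes, and dropout rates are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),    # MNIST image with channel axis
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```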

SECOND MODEL

NO. OF HIDDEN LAYERS: 5
ARCHITECTURE OF MODEL: IP->CN->DP->BN->CN->PL->DP->CN->PL->BN->DP->CN->DP->BN->PL->FL->DN->DP->DN->OP
Categorical Crossentropy Loss: 0.0969
ACCURACY: 98.25%

THIRD MODEL

NO. OF HIDDEN LAYERS: 7
ARCHITECTURE OF MODEL: IP->CN->DP->BN->PL->CN->BN->DP->PL->CN->DP->BN->PL->CN->BN->DP->CN->BN->DP->PL->CN->DP->BN->PL->CN->BN->DP->PL->FL->DN->DN->OP
Categorical Crossentropy Loss: 0.0603
ACCURACY: 98.48%

FOURTH MODEL

NO. OF HIDDEN LAYERS: 3 (NO BN OR DP)
ARCHITECTURE OF MODEL: IP->CN->CN->CN->PL->FL->DN->DN
Categorical Crossentropy Loss: 0.0257
ACCURACY: 99.33%

Long Short-Term Memory (LSTM)

The LSTM is applied to text data.

FIRST MODEL

Layers: 1
Architecture: EMBEDDING->LSTM->DROPOUT->DENSE
Loss: 0.6054
Accuracy: 89.26%
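
A minimal tf.keras sketch of this architecture is shown below; the vocabulary size, sequence length, embedding dimension, LSTM units, and the binary-classification output are assumptions, since the README does not specify the text dataset.

```python
# Minimal sketch of the first LSTM model: EMBEDDING->LSTM->DROPOUT->DENSE.
# Vocabulary size, sequence length, unit counts, and the binary output
# are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000   # assumed vocabulary size
max_len = 200        # assumed padded sequence length

model = keras.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(input_dim=vocab_size, output_dim=128),
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # binary text classification
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```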

SECOND MODEL

Layers: 2
Architecture: EMBEDDING->LSTM->LSTM->DENSE
Loss: 0.7312
Accuracy: 89.41%