
Commit 1d57b58

Update README.md
1 parent 8c8ab32 commit 1d57b58

File tree

1 file changed: +13 −1 lines changed


README.md

Lines changed: 13 additions & 1 deletion
@@ -1,2 +1,14 @@
# Deep_Learning_projects

Various projects in the Deep Learning domain.

The following projects can be found in this repository:
1. **CNN, its basics and its understanding** [LINK](https://github.com/coderop2/Deep_Learning_projects/blob/main/CNN_and_its_understanding.ipynb) - This notebook trains a simple CNN on MNIST and then studies how it works on a layer-by-layer basis. Once the full CNN is trained, each layer is probed to see how good a classifier it makes on its own, which is presented and studied through PCA and t-SNE projections (see the sketch after this list). Experiments done:
   - Selecting 10 random units from a layer to answer the question: how good a classifier can they make? It turns out that some combinations of units actually give a pretty good result (though definitely not as good as the final classifier layer).
   - Further experimentation on how different weight initializations (for both kernels and biases) affect the training/test/validation accuracy and error rates.
   - Comparison between using dropout and no dropout - dropout helps create a more generalized model that fits the testing and validation data well.
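A minimal sketch of the layer-by-layer probe described above, assuming a small Keras CNN on MNIST; the architecture, sample size, and the PCA-then-t-SNE settings are illustrative assumptions, not the notebook's exact setup:

```python
# Sketch: train a small CNN on MNIST, then project each layer's activations
# with PCA followed by t-SNE to judge how class-separable each layer is.
# The architecture and sample size below are assumptions for illustration.
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.1)

# Probe each layer: flatten its activations for a sample of test images,
# then reduce to 2-D (PCA to <=50 dims first, then t-SNE) for plotting
# against the true labels.
sample, labels = x_test[:1000], y_test[:1000]
for layer in model.layers:
    probe = tf.keras.Model(model.input, layer.output)
    acts = probe.predict(sample, verbose=0).reshape(len(sample), -1)
    acts = PCA(n_components=min(50, acts.shape[1])).fit_transform(acts)
    coords = TSNE(n_components=2).fit_transform(acts)  # scatter coords vs labels
    print(layer.name, coords.shape)
```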
2. **CNN with/without Augmentation** [LINK](https://github.com/coderop2/Deep_Learning_projects/blob/main/CNN_with_out_augmentation.ipynb) - Although we live in the age of data, where data on almost any topic is readily available, the problem is the amount we have for a given task. In some cases there is so much data that the model cannot handle it, while in others there is so little that the model cannot generalize well and ends up overfitting to the data that is present. To avoid that and to understand the use of augmentation, I studied the effect of training CNNs on CIFAR-10 with and without augmentation (see the sketch after this list). Experiments done:
   - Augmented vs. non-augmented dataset - augmentation won with over 3% better results, even though only 3 augmentation techniques were used.
   - Transfer learning on a smaller subset of CIFAR, followed by a further study of the effects of augmentation.
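A minimal sketch of the augmented-vs-plain comparison, assuming CIFAR-10 from Keras and three illustrative augmentations (flips, shifts, rotations); the notebook's actual model and augmentation choices may differ:

```python
# Sketch: train the same small CNN on CIFAR-10 with and without augmentation
# and compare test accuracy. The three augmentations here are assumptions.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    horizontal_flip=True, width_shift_range=0.1, rotation_range=15)

plain = make_model()
plain.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
plain.fit(x_train, y_train, epochs=10, batch_size=64, verbose=2)

augmented = make_model()
augmented.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
augmented.fit(datagen.flow(x_train, y_train, batch_size=64), epochs=10, verbose=2)

print("plain:    ", plain.evaluate(x_test, y_test, verbose=0))
print("augmented:", augmented.evaluate(x_test, y_test, verbose=0))
```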
3. **RNN on Audio Data** [LINK]() - A simple RNN that leverages the sequential nature of audio data: the files are first converted into a time-frequency representation, which brings out the structure of the waveform, and the resulting frame sequences are trained on bidirectional LSTMs (see the sketch below).
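A minimal sketch of that pipeline, assuming librosa for the time-frequency conversion; the file path, mel-band count, and 10-class output are placeholders:

```python
# Sketch: convert an audio file to log-mel spectrogram frames, then feed the
# frame sequence to a bidirectional LSTM. n_mels and the 10-class output
# layer are illustrative assumptions.
import librosa
import tensorflow as tf

def to_logmel_frames(path, sr=16000, n_mels=64):
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel).T  # (n_frames, n_mels): one row per frame

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 64)),  # variable-length frame sequences
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
```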
4. **CNN on Audio Data** [LINK](https://github.com/coderop2/Deep_Learning_projects/blob/main/CNN_on_Audio.ipynb) - Although there is some debate as to which gets the most out of audio files, CNNs or RNNs, I set that debate aside here and focus on FCNs and CNNs (see the sketch after this list). Experiments done:
   - Training and comparing a 5-layer fully connected neural network against a 5-layer deep CNN on audio files, with the comparison done using SNR before and after. Not surprisingly, the CNNs performed better than the FCNs, as they could better accommodate the different peaks and valleys of the signal.
   - Further using a CNN to predict the second half of an audio file given the first half (something like an audio GAN).
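A minimal sketch of the FCN-vs-CNN comparison, assuming both networks map a fixed-length window of noisy samples to a cleaned window and are scored by SNR; the window size and layer widths are assumptions:

```python
# Sketch: a 5-layer FCN and a 5-layer 1-D CNN that map a noisy audio window
# to a cleaned one, compared via SNR before and after. WIN is an assumption.
import numpy as np
import tensorflow as tf

WIN = 1024  # samples per window (assumed)

def snr_db(clean, estimate):
    """SNR = 10 * log10(signal power / residual-noise power)."""
    noise = clean - estimate
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

fcn = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(WIN,))]
    + [tf.keras.layers.Dense(WIN, activation="relu") for _ in range(4)]
    + [tf.keras.layers.Dense(WIN)])  # linear output: predicted clean window

cnn = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(WIN, 1))]
    + [tf.keras.layers.Conv1D(32, 9, padding="same", activation="relu")
       for _ in range(4)]
    + [tf.keras.layers.Conv1D(1, 9, padding="same")])  # (WIN, 1) output

for m in (fcn, cnn):
    m.compile("adam", "mse")  # fit on (noisy, clean) pairs, then compare snr_db
```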
5.
