In part_4 we trained our depth-wise convolution model. In this tutorial, however, we will build on part_2 again, since our only goal here is to increase accuracy: we will add data augmentation to our plain CNN model.
Data Augmentation
As we discussed before, CIFAR-100 contains few images per class, which makes it harder to train a model that generalizes, so data augmentation can really help improve our generalization. One of the best libraries out there for image augmentation is imgaug: it takes a batch of images and augments them as you specify. Here are some examples:
Here you can find a list of all possible augmentations.
Implementation
After installing the library and importing it, only a few lines will be added to our part_2 implementation. We will be augmenting the training batches randomly before they go into the model.
Here we initialize our augmentation parameters. They will be applied randomly and in any combination: cropping from the sides, flipping left and right, and dropping out parts of the image. Of course, you can add as many more as you want.
And we will add one line to our training function to augment the batch:
Results
Training set accuracy:
Test set accuracy:
Final training set accuracy: 97.5%
Final training set loss: 0.1332
Final test set accuracy: 65.3%
Final test set loss: 1.3090
So a simple idea like data augmentation can increase our test accuracy by a large margin, as you can see here.
If you want to check the full state of the project so far, click here to go to the repository.
See you in part 6.