Friday, 30 August 2019

Image Recognition using Pre-trained VGG16 model in Keras

Let's use a pre-trained VGG16 model to classify an image from the ImageNet database. We will load an image, convert it to a numpy array, preprocess that array, and let the pre-trained VGG16 model predict what the image contains.

VGG16 is a CNN model. To know more about CNNs, you can visit this post of mine. We are not fine-tuning the VGG16 model here; we are using it as it is. To fine-tune the existing VGG16 model, you can visit this other post of mine.

You can download my Jupyter notebook containing the following code from here.

Step 1: Import required libraries

import numpy as np
from keras.applications import vgg16
from keras.preprocessing import image


Step 2: Load the pre-trained VGG16 model with ImageNet weights

model = vgg16.VGG16(weights='imagenet')

Step 3: Load image to predict

img = image.load_img('cat.jpg', target_size=(224, 224))
img

[Output: the loaded cat image, displayed at its resized 224x224 resolution]
Please note that we resize the image to 224x224, as this input size is a requirement of the VGG16 model. You can download this image from the official ImageNet website.

Step 4: Convert the image into a numpy array

arr = image.img_to_array(img)
arr.shape


(224, 224, 3)

Step 5: Expand the array dimension

arr = np.expand_dims(arr, axis=0)
arr.shape


(1, 224, 224, 3)

The extra leading dimension is the batch dimension: the model expects a batch of images as input, so a single image must be passed as a batch of size one.

Step 6: Preprocess the array

arr = vgg16.preprocess_input(arr)
arr

For VGG16, preprocess_input converts the images from RGB to BGR and zero-centers each color channel with respect to the ImageNet dataset, without scaling.


Step 7: Predict from the model

predictions = model.predict(arr)

predictions

We get an array as output, which is hard to interpret. So, let's simplify it and see the top 5 predictions made by the VGG16 model.

vgg16.decode_predictions(predictions, top=5)

[[('n02123045', 'tabby', 0.7138179),
  ('n02123159', 'tiger_cat', 0.21695374),
  ('n02124075', 'Egyptian_cat', 0.043560617),
  ('n04040759', 'radiator', 0.0053847637),
  ('n04553703', 'washbasin', 0.0024860944)]]
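
To print these predictions in an even more readable form, we can loop over the decoded list. This is just a small convenience snippet, not part of the original notebook:

# Print each predicted label with its probability as a percentage
for _, label, prob in vgg16.decode_predictions(predictions, top=5)[0]:
    print(f'{label}: {prob:.2%}')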

So, as per the VGG16 model's predictions, the given image is most likely a tabby (about 71%) or a tiger cat (about 22%). You can try the same with different images from the ImageNet database and check your results.
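
For convenience, the whole pipeline above can be wrapped in one helper function. This is just a sketch built from the steps in this post; the function name is my own choice, the image path is only an example, and it relies on the imports and model loaded above.

def classify_image(path, model, top=5):
    # Load and resize the image to the 224x224 input size VGG16 expects
    img = image.load_img(path, target_size=(224, 224))
    # Convert to a numpy array and add the batch dimension
    arr = np.expand_dims(image.img_to_array(img), axis=0)
    # Apply the VGG16-specific preprocessing
    arr = vgg16.preprocess_input(arr)
    # Predict and decode the top predictions
    return vgg16.decode_predictions(model.predict(arr), top=top)[0]

classify_image('cat.jpg', model)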

Saturday, 24 August 2019

eBook - Deep Learning Objective Type Questions and Answers

This book contains 205 objective type questions and answers covering various basic concepts of deep learning. It contains 19 chapters. Each chapter contains a short description of a concept followed by objective type questions on that concept.

You can download this book from here.


Please download this book, study it, and share it with your friends and colleagues. I would be more than happy if this book increases your deep learning knowledge to some extent.

Assumption: This should not be your first book on deep learning, as I have not covered deep learning concepts in detail; I have given only short descriptions to help you revise. So I assume you already have a basic understanding of deep learning concepts before reading this book.

I am continuously upgrading this book and adding more objective type deep learning questions to it, as well as to the deep learning quiz. So, stay tuned! You can visit this page once a week and download the updated copy of this book.

Your opinion about this book matters a lot to me. Please post your comments and suggestions regarding this eBook on this blog post.

Table of Contents

[Table of contents image: the book's 19 chapters correspond to the concepts listed below.]
Deep Learning Concepts

This book contains objective questions on the following deep learning concepts:

1. Perceptrons: Working of a Perceptron, multi-layer Perceptron, advantages and limitations of Perceptrons, implementing logic gates like AND, OR and XOR with Perceptrons (see the sketch after this list) etc.

2. Neural Networks: Layers in a neural network, types of neural networks, deep and shallow neural networks, forward and backward propagation in a neural network etc.

3. Weights and Bias: Importance of weights and biases, things to keep in mind while initializing weights and biases, Xavier Weight Initialization technique etc.

4. Activation Functions: Importance of activation functions, Squashing functions, Step (Threshold), Logistic (Sigmoid), Hyperbolic Tangent (Tanh), ReLU (Rectified Linear Unit), Dying and Leaky ReLU, Softmax etc.

5. Batches: Epochs, Batches and Iterations, Batch Normalization etc.

6. Gradient Descent: Batch, Stochastic and Mini Batch Gradient Descent, SGD variants like Momentum, Nesterov Momentum, AdaGrad, AdaDelta, RMSprop and Adam, Local and Global Minima, Vanishing and Exploding Gradients, Learning Rate etc.

7. Loss Functions: categorical_crossentropy, sparse_categorical_crossentropy etc.

8. CNN: Convolutional Neural Network, Filters (Kernels), Stride, Padding, Zero Padding and Valid Padding, Pooling, Max Pooling, Min Pooling, Average Pooling and Sum Pooling, Hyperparameters in CNN, Capsule Neural Network (CapsNets), ConvNets vs CapsNets, Computer vision etc.

9. RNN: Recurrent Neural Network, Feedback loop, Types of RNN like One to One, One to Many, Many to One and Many to Many, Bidirectional RNN, Advantages and disadvantages of RNN, Applications of RNN, Differences between CNN and RNN etc.

10. LSTM: Long Short Term Memory, Gated cells like Forget gate, Input gate and Output gate, Applications of LSTM etc.

11. Regularization: Overfitting and underfitting in a neural network, L1 and L2 Regularization, Dropout, Data Augmentation, Early Stopping etc.

12. Fine-tuning: Transfer Learning, Fine-tuning a model, Steps to fine-tune a model, Advantages of fine-tuning etc.

13. Autoencoders: Components of an autoencoder like encoder, decoder and bottleneck, Latent space representation and reconstruction loss, Types of Autoencoders like Undercomplete autoencoder, Sparse autoencoder, Denoising autoencoder, Convolutional autoencoder, Contractive autoencoders and Deep autoencoders, Hyperparameters in an autoencoder, Applications of an Autoencoder, Autoencoders vs PCA, RBM (Restricted Boltzmann Machine) etc.

14. NLP (Natural Language Processing): Tokenization, Stemming, Lemmatization and Vectorization (Count vectorization, N-grams vectorization, Term Frequency - Inverse Document Frequency (TF-IDF)), Document-term matrix, NLTK (Natural Language Toolkit) etc.

15. Frameworks: TensorFlow, Keras, PyTorch, Theano, CNTK, Caffe, MXNet, DL4J etc.
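
As a taste of the first topic, here is a minimal sketch (my own illustration, not taken from the book) of a single perceptron implementing the AND gate with hand-picked weights and a step activation:

import numpy as np

def perceptron_and(x1, x2):
    weights = np.array([1.0, 1.0])  # hand-picked weights
    bias = -1.5                     # hand-picked bias
    # Step (threshold) activation: output 1 only when the weighted sum is positive
    return int(np.dot(weights, np.array([x1, x2])) + bias > 0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, '->', perceptron_and(a, b))  # only (1, 1) -> 1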

A note to readers

This book is just a short summary of my online content on:

1. The Professionals Point
2. Online ML Quiz

Contents of this book are available on my blog (The Professionals Point), and the objective type questions are available in the form of a quiz on my website (Online ML Quiz).

Disclaimer: Contents of this book are the sole property of www.onlinemlquiz.com. Questions should not be reproduced in any form without prior permission and attribution.

Saturday, 10 August 2019

Solving a regression problem using a Sequential Neural Network Model in Keras

Let's solve a regression problem using neural networks. We will build a sequential model in Keras to predict house prices based on some parameters. We will use KerasRegressor to build the regression model.

You can download housing_data.csv from here. You can also download my Jupyter notebook containing the code below for the neural network regression implementation.

Step 1: Import required libraries like pandas, numpy, sklearn, keras and matplotlib

import numpy as np
import pandas as pd

from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor

import matplotlib.pyplot as plt
%matplotlib inline

Step 2: Load and examine the dataset

dataset = pd.read_csv('housing_data.csv')
dataset.head()
dataset.shape
dataset.describe(include='all')

Please note that "describe()" is used to display the statistical values of the data like mean and standard deviation.

Step 3: Separate the features (X) and labels (y)

X=dataset.iloc[:,0:13]
y=dataset.iloc[:,13].values

X contains the features (the first 13 columns)
y contains the labels (the house prices in the last column)

Step 4: Split the dataset into training and testing dataset

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=0) 

Step 5: Scale the features

scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

This step is a must for neural networks: feature scaling is very important for a neural network to perform well and predict accurate results. Note that we fit the scaler on the training data only and then apply it to the test data, which avoids leaking test-set information. The target values can also be scaled in the same way, but here we leave y in its original units so that the error metrics in Step 10 stay interpretable as house prices.
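
For reference, MinMaxScaler maps each feature into the [0, 1] range using that feature's minimum and maximum. A quick sketch of the equivalent arithmetic on made-up data:

# Equivalent arithmetic to MinMaxScaler on a single made-up feature
x = np.array([10.0, 20.0, 40.0])
x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled)  # [0.         0.33333333 1.        ]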

Step 6: Build a neural network

def build_regression_model():
    model = Sequential()
    model.add(Dense(50, input_dim=13, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

We are creating a sequential model with fully connected (Dense) layers: three hidden layers followed by one output layer, with the input dimension of 13 matching the number of features. The hidden layers use the "relu" activation function, while the output layer uses the "linear" activation function, the usual choice for regression.

Each hidden layer contains 50 neurons, and the output layer contains only one neuron because we need to output only one value (the predicted house price). You can change the number of neurons in the hidden layers as per your data and model performance. The number of hidden layers and the number of neurons in each layer are hyperparameters which you need to tune based on the performance of the model.

We are using the "adam" optimizer and mean squared error as the loss function.

We could also use dropout in the hidden layers for regularization, but for this example, I am skipping that step for simplicity.
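
If you want to try it, here is a sketch of how dropout layers could be inserted after each hidden layer of the same model. The dropout rate of 0.2 is just an example value:

from keras.layers import Dropout

def build_regression_model_with_dropout():
    model = Sequential()
    model.add(Dense(50, input_dim=13, activation='relu'))
    model.add(Dropout(0.2))  # randomly drop 20% of activations during training
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model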

Step 7: Train the neural network

regressor = KerasRegressor(build_fn=build_regression_model, batch_size=32, epochs=150) 
training_history = regressor.fit(X_train,y_train)

We are using 150 epochs with a batch size of 32. The number of epochs and the batch size are also hyperparameters which need to be tuned.

Step 8: Print a loss plot

plt.plot(training_history.history['loss'])
plt.show()

[Loss plot: training loss vs. epochs]
This plot shows that after around 140 epochs, the loss does not vary much. That is why I chose 150 epochs in Step 7 while training the neural network.
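
Instead of reading the number of epochs off the plot manually, Keras's EarlyStopping callback can stop training automatically once the loss stops improving. A sketch, assuming the same model-building function as above:

from keras.callbacks import EarlyStopping

# Stop training once the loss has not improved for 10 consecutive epochs
early_stop = EarlyStopping(monitor='loss', patience=10)
es_regressor = KerasRegressor(build_fn=build_regression_model, batch_size=32, epochs=500)
es_history = es_regressor.fit(X_train, y_train, callbacks=[early_stop])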

Step 9: Predict from the neural network

y_pred= regressor.predict(X_test)
y_pred

The y_pred is a numpy array that contains a predicted value for each row of X_test.

Let's see the difference between the actual and predicted values.

df=pd.DataFrame({'Actual':y_test, 'Predicted':y_pred})  
df 

Step 10: Check the error metrics

meanAbsoluteError = mean_absolute_error(y_test, y_pred)
meanSquaredError = mean_squared_error(y_test, y_pred)
rootMeanSquaredError = np.sqrt(meanSquaredError)
print('Mean Absolute Error:', meanAbsoluteError)  
print('Mean Squared Error:', meanSquaredError)  
print('Root Mean Squared Error:', rootMeanSquaredError)

Output:
Mean Absolute Error: 2.9524098807690193
Mean Squared Error: 19.836363961675836
Root Mean Squared Error: 4.453803314210882

We got a root mean squared error of about 4.45. We can further decrease this error by using cross validation and tuning our hyperparameters. I leave that to you as an exercise.
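
As a starting point, here is a sketch of k-fold cross validation using the same KerasRegressor with scikit-learn's cross_val_score (the scores come back as negative MSE by scikit-learn convention):

from sklearn.model_selection import cross_val_score

cv_regressor = KerasRegressor(build_fn=build_regression_model, batch_size=32, epochs=150, verbose=0)
# 5-fold cross validation on the training data
scores = cross_val_score(cv_regressor, X_train, y_train, cv=5, scoring='neg_mean_squared_error')
print('Cross-validated RMSE:', np.sqrt(-scores.mean()))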

Step 11: Visualize the results using a scatter plot

plt.scatter(range(len(y_test)), y_test, c='g', label='Actual')
plt.scatter(range(len(y_test)), y_pred, c='b', label='Predicted')
plt.xlabel('Sample index')
plt.ylabel('House price')
plt.legend()
plt.show()

[Scatter plot: actual (green) and predicted (blue) house prices for each test sample]
We are displaying the test labels and the predicted values in different colors (green and blue). From the scatter plot, we can see that the predicted values track the actual values quite closely.

Step 12: Visualize the results using a regression plot

To further visualize the predicted results, we can draw a regression plot.

fig, ax = plt.subplots()
ax.scatter(y_test, y_pred)
# Dashed diagonal line: points on this line are perfect predictions
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'k--', lw=4)
ax.set_xlabel('Actual values')
ax.set_ylabel('Predicted values')
plt.show()

[Regression plot: predicted vs. actual values with the diagonal reference line]
I hope I was able to demonstrate this regression problem clearly. If you have any further doubts, please post a comment.

About the Author

I have more than 10 years of experience in the IT industry. Linkedin Profile

I am currently experimenting with neural networks in deep learning. I am learning Python, TensorFlow and Keras.

Author: I am an author of a book on deep learning.

Quiz: I run an online quiz on machine learning and deep learning.