
Saturday, 6 July 2019

Fine-tune VGG16 model for image classification in Keras

The Keras framework provides us a lot of pre-trained, general-purpose deep learning models which we can fine-tune as per our requirements, so we don't need to build a complex model from scratch. In my last article, we built a CNN model from scratch for image classification. Instead of that, we can just fine-tune an existing, well-trained, well-proven, widely accepted CNN model, which will save us a lot of effort, time and money.

VGG16 is a proven, proficient model for image classification (trained on 1000 classes of images). The Keras framework already contains this model. We will import it and fine-tune it to classify images of dogs and cats (only 2 classes instead of 1000).

You can download my Jupyter notebook containing the code below from here.

Step 1: Import the required libraries

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

Step 2: Create directory structure to contain images

We will create a directory structure which will contain the images of dogs and cats:

cats_and_dogs/
├── train/
│   ├── cat/   (20 images)
│   └── dog/   (20 images)
├── valid/
│   ├── cat/   (8 images)
│   └── dog/   (8 images)
└── test/
    ├── cat/   (5 images)
    └── dog/   (5 images)

I have created a directory "cats_and_dogs". Under this directory, I have created 3 other directories: "test", "train" and "valid". All 3 of these directories contain "cat" and "dog" subdirectories.

1. The "cat" and "dog" directories under the "test" directory contain 5 images each: 10 images in total for testing.

2. The "cat" and "dog" directories under the "train" directory contain 20 images each: 40 images in total for training.

3. The "cat" and "dog" directories under the "valid" directory contain 8 images each: 16 images in total for validation.

Step 3: Data Preparation

train_path = 'C:/cats_and_dogs/train'
valid_path = 'C:/cats_and_dogs/valid'
test_path = 'C:/cats_and_dogs/test'

train_batches = ImageDataGenerator().flow_from_directory(train_path, target_size=(224,224), classes=['dog','cat'], batch_size=10)

valid_batches = ImageDataGenerator().flow_from_directory(valid_path, target_size=(224,224), classes=['dog','cat'], batch_size=4)

test_batches = ImageDataGenerator().flow_from_directory(test_path, target_size=(224,224), classes=['dog','cat'], batch_size=10)

Output:
Found 40 images belonging to 2 classes.
Found 16 images belonging to 2 classes.
Found 10 images belonging to 2 classes.

In the above code, we resize all images to 224x224 pixels (the input size VGG16 expects) and categorize them into the "dog" and "cat" classes. The output confirms that we have 40 images for training, 16 images for validation and 10 images for testing, as described in step 2.
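
One optional refinement, which this tutorial skips: VGG16 was originally trained on images preprocessed in a specific way (mean subtraction and channel reordering), and Keras exposes this as vgg16.preprocess_input. As a sketch, assuming you want to match that preprocessing, you can pass it to the generator:

from keras.applications.vgg16 import preprocess_input

# Same generator as above, but applying VGG16's own preprocessing to every image
train_batches = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(train_path, target_size=(224,224), classes=['dog','cat'], batch_size=10)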

Step 4: Print the images

Let's display some of the images which we prepared in step 3. The following is standard code to plot the images (copied from the Keras documentation):

def plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):
    # Convert the batch to uint8 and move channels last if necessary
    if type(ims[0]) is np.ndarray:
        ims = np.array(ims).astype(np.uint8)
        if (ims.shape[-1] != 3):
            ims = ims.transpose((0,2,3,1))
    f = plt.figure(figsize=figsize)
    # Work out how many columns are needed for the requested number of rows
    cols = len(ims)//rows if len(ims) % 2 == 0 else len(ims)//rows + 1
    # Draw each image in its own subplot, with an optional title
    for i in range(len(ims)):
        sp = f.add_subplot(rows, cols, i+1)
        sp.axis('off')
        if titles is not None:
            sp.set_title(titles[i], fontsize=16)
        plt.imshow(ims[i], interpolation=None if interp else 'none')

Now, let's plot the first batch of training images:

imgs, labels = next(train_batches)
plots(imgs, titles=labels)

Output:

(a grid of the 10 images in the first training batch, with their one-hot labels shown as titles)

We can see the rescaled images of 10 cats and dogs. If you run the above code again, it will fetch the next 10 images from the training dataset, as we are using a batch size of 10 for the training images.

Step 5: Load and analyze VGG16 model

vgg16_model = keras.applications.vgg16.VGG16()
vgg16_model.summary()
type(vgg16_model)

In the above code, the first line loads the VGG16 model; this may take some time as the weights are downloaded. The second line prints a summary of the model: it has a lot of convolutional, pooling and dense layers. The third line shows that this model is of type "Model". In the next step, we will create a model of type "Sequential".
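
As a quick, optional sanity check (the exact numbers assume the standard Keras VGG16):

print(len(vgg16_model.layers))              # 23 layers, including the input layer
print(vgg16_model.layers[-1].output_shape)  # (None, 1000) - the original 1000-class output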

Step 6: Fine-tune VGG16 model

Following are the steps involved in fine-tuning a model:

1. Copy all the hidden layers in a new model
2. Remove output layer
3. Freeze the hidden layers
4. Add custom output layer

For more details on fine-tuning a model, please visit this post of mine.

Let's perform all the above steps.

model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)

In the above code, we have created a new sequential model and copied all the layers of the VGG16 model except the last one, which is the output layer. We have done this because we want our own custom output layer with only two nodes, as our image classification problem has only two classes (cats and dogs).

Now, if we execute the following statement, we will get a replica of the existing VGG16 model, minus the output layer.

model.summary()

Now, let's freeze the hidden layers, as we don't want to change any weights and biases associated with them. We want to use these layers as they are, since they are already well trained on an image classification problem.

for layer in model.layers:
    layer.trainable = False

Now, add a custom output layer with only two nodes and softmax as the activation function.

model.add(Dense(2, activation='softmax'))
model.summary()
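
In this summary, only the new output layer should appear as trainable: it sits on top of VGG16's 4096-unit fc2 layer, so it contributes 4096 x 2 weights + 2 biases = 8,194 trainable parameters, while all the copied layers are listed as non-trainable.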

Now our new fine-tuned model is ready. Let's train it with new data and then make predictions.

Step 7: Compile the model

model.compile(Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])

We are using Adam as the optimizer and categorical cross-entropy as the loss function.

Step 8: Train the model

model.fit_generator(train_batches, steps_per_epoch=4, validation_data=valid_batches, validation_steps=4, epochs=5, verbose=2)

Executing this step will take some time, as we are training for 5 epochs.

Step 9: Predict from the model

Let's plot the first batch of the test images.

test_imgs, test_labels = next(test_batches)
plots(test_imgs, titles=test_labels)

From the output, we can see that the labels appear in one-hot form like [0. 1.] and [1. 0.]. Let's format them so that we get single values like 0 and 1.

test_labels = test_labels[:,0]
test_labels

Now, finally, make the predictions.

predictions = model.predict_generator(test_batches, steps=1, verbose=0)
predictions
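
One caveat worth noting: flow_from_directory shuffles images by default, so the labels we captured with next(test_batches) may not line up with the output of predict_generator. A safer setup is to create the test generator in step 3 with shuffling disabled:

test_batches = ImageDataGenerator().flow_from_directory(test_path, target_size=(224,224), classes=['dog','cat'], batch_size=10, shuffle=False)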

It shows the predictions in the form of probabilities. Let's round them off.

rounded_predictions = np.round(predictions[:,0])
rounded_predictions

Step 10: Check the accuracy

confusionMatrix = confusion_matrix(test_labels, rounded_predictions)
accuracyScore = accuracy_score(test_labels, rounded_predictions)
classificationReport = classification_report(test_labels, rounded_predictions)
print(confusionMatrix)
print(accuracyScore * 100)
print(classificationReport)

Please note that we won't get the desired accuracy with this small dataset. We need thousands of images to train our model well. We can use data augmentation to increase the amount of data. You can also download thousands of images of cats and dogs from Kaggle to train this model.
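
As a sketch of what data augmentation could look like here (the ranges below are illustrative values, not tuned settings), we can ask ImageDataGenerator to produce randomly transformed variants of the training images:

# Randomly rotate, shift and flip the training images on the fly
augmented_gen = ImageDataGenerator(rotation_range=20, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
train_batches = augmented_gen.flow_from_directory(train_path, target_size=(224,224), classes=['dog','cat'], batch_size=10)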

Building a CNN model in Keras using MNIST dataset

We will implement a CNN in Keras using the MNIST dataset. To know more about CNNs, you can visit this post of mine. We can download the MNIST dataset through Keras. It contains images of handwritten digits from 0 to 9 and is divided into 60,000 training images and 10,000 testing images.

I would recommend that you build a simple neural network before jumping to CNNs. You can visit this post of mine to build a simple neural network with Keras. You can download my Jupyter notebook containing the following CNN code from here.

Step 1: Import required libraries

import keras
from keras.datasets import mnist
from keras.utils import to_categorical

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import Adam

Step 2: Load MNIST dataset from Keras

(x_train, y_train), (x_test, y_test) = mnist.load_data()

The above line will download the MNIST dataset. Now, let's print the shape of the data.

x_train.shape, x_test.shape, y_train.shape, y_test.shape

Output: ((60000, 28, 28), (10000, 28, 28), (60000,), (10000,))

It is clear from the above output that each image in the MNIST dataset has a size of 28 x 28 pixels, which means that the shape of x_train is (60000, 28, 28), where 60,000 is the number of samples.
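
If you want to eyeball a sample before reshaping (assuming matplotlib is installed), a quick sketch:

import matplotlib.pyplot as plt

plt.imshow(x_train[0], cmap='gray')  # display the first training image
plt.title(str(y_train[0]))           # its label, a digit from 0 to 9
plt.show()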

Step 3: Reshape the dataset

We have to reshape x_train from 3 dimensions to 4 dimensions, as this is a requirement of the Keras API. We reshape x_train and x_test because our CNN accepts only a four-dimensional tensor.

x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)

The value of "x_train.shape[0]" is 60,000, which is the number of images in the training data; 28 x 28 is the image size and 1 is the number of channels. The number of channels is 1 for grayscale images and 3 for RGB images. Since MNIST images are grayscale, the last dimension is 1.

Now, let's print the shape of the data again.

x_train.shape, x_test.shape, y_train.shape, y_test.shape

Output: ((60000, 28, 28, 1), (10000, 28, 28, 1), (60000,), (10000,))
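
A common optional preprocessing step, which this tutorial skips, is to scale the pixel intensities from the 0-255 range down to the 0-1 range; the model can train without it, but scaled inputs usually make training better behaved:

# Optional: scale pixel values from [0, 255] to [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0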

Step 4: Convert labels into categorical variables (one-hot encoding)

Our labels range from 0 to 9, so we need to one-hot encode them so that each label turns into a vector of 0s and 1s.

y_train, y_test

y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

y_train, y_test
y_train[0]
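
For example, the first training image in MNIST is the digit 5, so after encoding, y_train[0] becomes [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]: a vector with a 1 at index 5 and 0s everywhere else.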

Step 5: Create a CNN model

model = Sequential()
model.add(Conv2D(32, kernel_size=(5,5), activation='relu', input_shape=(28,28,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(5,5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

Explanation of the above CNN model:

1. We have created a sequential model, which is a built-in model type in Keras; we just add layers to it as per our requirements. In this case, we have added 2 convolutional layers, 2 pooling layers, 1 flatten layer, 2 dense layers and 1 dropout layer.

2. We have used 32 filters of size 5x5 each in the first convolutional layer and 64 filters in the second convolutional layer.

3. After each convolutional layer, we add a pooling layer with a pool size of 2x2.

4. We use the ReLU activation function in all hidden layers and softmax in the output layer. To know more about activation functions, please visit my posts on the topic.

5. We can also specify the stride attribute for convolutional and pooling layers. By default, it is (1,1) for convolutional layers, while pooling layers default to a stride equal to the pool size.

6. We have a flatten layer just before the dense layers. The flatten layer converts the 2D feature maps into a 1D vector before the fully connected layers.

7. After that, we use a fully connected layer with 1024 neurons.

8. Then we use a regularization layer called Dropout. It is configured to randomly exclude 20% of the neurons in the layer in order to reduce overfitting. Dropout randomly switches off some neurons in the network, which forces the data to find new paths and therefore reduces overfitting. To know more about dropout, please visit this post.

9. We add a dense layer at the end which is used for class prediction (0-9), which is why it has 10 neurons. This is also called the output layer, and it uses the softmax activation function instead of ReLU.
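
As a quick check of how the shapes flow through this model (Keras' Conv2D uses 'valid' padding by default, so each 5x5 convolution trims 4 pixels from each dimension): the 28x28x1 input becomes 24x24x32 after the first convolution, 12x12x32 after pooling, 8x8x64 after the second convolution and 4x4x64 after pooling, which flattens to exactly 1024 values before the Dense(1024) and Dense(10) layers. You can verify these shapes in the model.summary() output in the next step.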

Step 6: Model Summary

model.summary()

Step 7: Compile the model

model.compile(Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])

We are using the Adam optimizer with a learning rate of 0.0001 and categorical cross-entropy as the loss function.

Step 8: Train the model

model.fit(x_train, y_train, validation_data=(x_test, y_test), batch_size=128, epochs=5, verbose=2)

We are using a batch size of 128 and 5 epochs.

Step 9: Evaluate the model

score = model.evaluate(x_test, y_test, verbose=0)
print(score)
print('Loss:', score[0])
print('Accuracy:', score[1])

Output:
[0.050816776573400606, 0.9856]
Loss: 0.050816776573400606
Accuracy: 0.9856

I got an accuracy score of about 98.56%. You can play around with different hyperparameters like the learning rate, batch size, number of epochs, adding more convolutional and pooling layers, changing the number and size of filters, changing the stride etc.

Friday, 5 July 2019

All about Keras Framework in Deep Learning

Keras is a widely used framework for implementing neural networks in deep learning. It is very easy to use and understand and has large community support. Below are the points which illustrate some strengths and limitations of the Keras framework:

1. High-Level Framework: Keras is an open-source, high-level neural network framework written in Python.

2. Supports Multiple Backends: Keras uses TensorFlow as its backend by default, but you can also configure it to use Theano or CNTK instead.

3. Cross-Platform with Easy Model Deployment: Keras can run on all major operating systems and supports a lot of devices and platforms, so we can deploy Keras models on almost any device: iOS with CoreML, Android with TensorFlow Android, web browsers with JavaScript support, cloud engines, Raspberry Pi etc.

4. Multi-CPU and GPU Compatible: Keras has built-in support for data parallelism, so it can process large volumes of data and speed up the time needed to train on it.

5. Easy to Use and Understand: You can easily implement complex neural networks with a few lines of code. You don't need to understand the low-level details, as Keras is a wrapper around complex low-level frameworks like TensorFlow, Theano and CNTK. So, it is a boon for beginners.

Related links
Create a simple sequential model in Keras
Create a CNN model in Keras

6. Pre-trained Models: Keras contains a lot of pre-trained neural network models for general-purpose requirements. For example, for image classification, we don't need to create a CNN model from scratch; we can fine-tune an existing, well-trained model called VGG16 for this purpose. Similarly, a lot of other models are available in Keras, like InceptionV3, ResNet, MobileNet, Xception, InceptionResNetV2 etc., which we just need to fine-tune as per our needs.

Related links:
What is fine-tuning?
Fine-tuning VGG16 model

7. Great Community: As mentioned earlier, Keras has great community support. You can easily find a lot of tutorials, detailed articles on various concepts, solved examples and much more. Keras is also very well documented.

Limitations of Keras

As stated in points 1 and 2, Keras is only a high-level API which uses other frameworks like TensorFlow, Theano and CNTK to perform the low-level work. If you want to do research or write your own custom algorithms for a deep learning project, you should use TensorFlow instead.

Tuesday, 2 July 2019

Building a simple sequential neural network with dense layers in Keras

Let's understand how we can create a simple neural network in Keras. We will create a simple sequential model with dense layers (fully connected layers). We will use relu as the activation function in the hidden layers, softmax in the output layer, and Adam, a variant of SGD, as the optimizer.

You can download my Jupyter notebook containing the code below from here.

Step 1: Import required libraries

import numpy as np
from random import randint
from sklearn.preprocessing import MinMaxScaler

import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

Step 2: Create training and test dataset

We will create hypothetical medical data and try to predict whether or not a drug has side effects on people of different age groups.

People are divided into two age groups: 
1. 13 years to 64 years and 
2. 65 years to 100 years. 

A label of 1 means that the drug has side effects and 0 means no side effects.

We will create 2,100 training observations. One array contains ages, which act as the samples, and the other array contains 0s and 1s, which act as the labels.

train_samples = []
train_labels = []
  
# Atypical cases (noise): 50 younger people who experienced side
# effects and 50 older people who did not
for i in range(50):
    random_younger = randint(13,64)
    train_samples.append(random_younger)
    train_labels.append(1)

    random_older = randint(65,100)
    train_samples.append(random_older)
    train_labels.append(0)

# Typical cases: 1000 younger people with no side effects and
# 1000 older people who experienced side effects
for i in range(1000):
    random_younger = randint(13,64)
    train_samples.append(random_younger)
    train_labels.append(0)

    random_older = randint(65,100)
    train_samples.append(random_older)
    train_labels.append(1)
    
Convert the above lists into numpy arrays as Keras expects samples and labels in the form of numpy arrays.

train_samples = np.array(train_samples)
train_labels = np.array(train_labels)

Similarly, create a test dataset.

test_samples = []
test_labels = []
  
for i in range(10):
    random_younger = randint(13,64)
    test_samples.append(random_younger)
    test_labels.append(1)
    
    random_older = randint(65,100)
    test_samples.append(random_older)
    test_labels.append(0)
    
for i in range(200):
    random_younger = randint(13,64)
    test_samples.append(random_younger)
    test_labels.append(0)
    
    random_older = randint(65,100)
    test_samples.append(random_older)
    test_labels.append(1)
    
test_samples = np.array(test_samples)
test_labels = np.array(test_labels)

Step 3: Scale the training and test data

scaler = MinMaxScaler(feature_range=(0,1))
scaled_train_samples = scaler.fit_transform(train_samples.reshape(-1,1))
# Use transform (not fit_transform) on the test set so that it is scaled
# with the minimum and maximum learned from the training set
scaled_test_samples = scaler.transform(test_samples.reshape(-1,1))

This is a preprocessing step. We need to scale our sample data into the range of 0 to 1. This is called feature scaling. For more details on feature scaling, you can go through this post of mine.
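
MinMaxScaler uses the formula scaled_x = (x - min) / (max - min). Assuming both extremes appear in our random data, the ages span 13 to 100, so an age of 40, for example, becomes (40 - 13) / (100 - 13) ≈ 0.31.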

Step 4: Create a model

We will create a sequential model which is a linear stack of layers. We can create a sequential model by passing a list of layer instances to the constructor like this:

model = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax'),
])

We can also simply add layers using the .add() method:

model = Sequential()
model.add(Dense(16, input_shape=(1,), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))

We are using dense layers in the above Keras code; a dense layer denotes a fully connected layer in a neural network.

For the hidden layers, we are using the relu activation function, and for the output layer, softmax. To know the difference between the relu and softmax activation functions, please see this post of mine.

Step 5: Model Summary

model.summary()

It will show the description of all the layers and parameters.

Step 6: Compile a model

model.compile(Adam(lr=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

We need to pass the optimizer we want to use, the learning rate, the loss function and the metrics. We are using Adam as the optimizer; it is a variant of SGD (Stochastic Gradient Descent). There are a lot of other optimizers; to go into detail, you can consider visiting this post of mine.

Step 7: Train a model

model.fit(scaled_train_samples, train_labels, validation_split=0.1, batch_size=10, epochs=20, shuffle=True, verbose=2)

We need to pass the training samples and labels, along with the validation split, batch size, epochs, shuffle and verbose parameters. The validation set helps in detecting overfitting and improving the generalization capability of the network. By default, shuffle is True. These settings are hyperparameters, and we need to tune them; you can try different batch sizes and epochs and observe the change in the results.

Step 8: Predict from the model

predictions = model.predict(scaled_test_samples, batch_size=10, verbose=0)
for i in predictions:
    print(i)

The above code will give us the predictions in the form of probabilities. If we need the exact class predictions, we need to use the following code: instead of the predict function, we use the predict_classes function.

rounded_predictions = model.predict_classes(scaled_test_samples, batch_size=10, verbose=0)
for i in rounded_predictions:
     print(i)
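
Note that predict_classes is only available on Sequential models. An equivalent way to get the same class indices from the probabilities (using numpy, which we imported as np in step 1) is:

rounded_predictions = np.argmax(predictions, axis=1)
for i in rounded_predictions:
    print(i)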

Step 9: Check accuracy

We are going to use confusion matrix, accuracy score and classification report to check the accuracy of our neural network.

confusionMatrix = confusion_matrix(test_labels, rounded_predictions)
accuracyScore = accuracy_score(test_labels, rounded_predictions)
classificationReport = classification_report(test_labels, rounded_predictions)
print(confusionMatrix)
print(accuracyScore * 100)
print(classificationReport)

Hyperparameter Tuning: In steps 6, 7 and 8, we are using a lot of hyperparameters. The network does not learn these parameters by itself, so we need to tune them explicitly in order to improve the performance and accuracy of the network. For more information on hyperparameters, you can go through this post of mine.

Related: Build a CNN model using Keras framework

Thursday, 27 June 2019

100+ Basic Deep Learning Interview Questions and Answers

I have listed some basic deep learning interview questions with answers. These questions cover many concepts like perceptrons, neural networks, weights and biases, activation functions, the gradient descent algorithm, CNNs (ConvNets), CapsNets, RNNs, LSTM, regularization techniques, dropout, hyperparameters, transfer learning, fine-tuning a model, autoencoders, and deep learning frameworks like TensorFlow and Keras. I will keep adding more deep learning interview questions to this list, so stay tuned.

Note: For Machine Learning Interview Questions, refer this link.

Introduction

1. What is Deep Learning? How is it different from machine learning? What are the pros and cons of deep learning over machine learning? Answer

2. How does deep learning mimic the behavior of the human brain? How will you compare an artificial neuron to a biological neuron?

Perceptron

3. What is a Perceptron? How does it work? What is a multi-layer perceptron?

4. What are the various limitations of a Perceptron? Why can't we implement the XOR gate using a Perceptron?

Answers to above questions

Neural Networks

5. What are the various layers in a neural network?

6. What are the various types of a neural network?

7. What are Deep and Shallow neural networks? What are the advantages and disadvantages of deep neural networks over shallow neural networks?

Answers to above questions

Weights and Bias

8. What is the importance of weights and biases in a neural network? What are the things to keep in mind while initializing weights and biases? Answer

9. What is Xavier Weight Initialization technique? How is it helpful in initializing the weights? How does weight initialization vary for different types of activation functions? Answer 

10. Explain forward and backward propagation in a neural network. How does a neural network update weights and biases during back propagation? (See Gradient Descent section for answer)

Activation Functions

11. What do you mean by activation functions in neural networks? Why do we call them squashing functions? How do activation functions bring non-linearity in neural networks?

12. Explain various activation functions like Step (Threshold), Logistic (Sigmoid), Hyperbolic Tangent (Tanh), and ReLU (Rectified Linear Unit). What are the various advantages and disadvantages of using these activation functions?

Answers to above questions

13. Dying and Leaky ReLU: What do you mean by Dying ReLU? When is a neuron considered dead in a neural network? How does Leaky ReLU help in dealing with Dying ReLU? Answer

14. What is the difference between Sigmoid and Softmax activation functions? Answer

Batches

15. Explain the terms Epochs, Batches and Iterations in neural networks.

16. What do you mean by Batch Normalization? What are its various advantages? Answer

Loss Function

17. What is the difference between categorical_crossentropy and sparse_categorical_crossentropy? Which one to use and when?

Hint: For one hot encoded labels, use categorical_crossentropy. Otherwise, use sparse_categorical_crossentropy.

Gradient Descent

18. What is Gradient Descent? How is it helpful in minimizing the loss function? What are its various types? 

19. Explain Batch, Stochastic, and Mini Batch Gradient Descent. What are the advantages and disadvantages of these Gradient Descent methods? Answer

20. Explain these terms in context of SGD: Momentum, Nesterov Momentum, AdaGrad, AdaDelta, RMSprop, Adam. Answer

21. What is the difference between Local and Global Minima? What are the ways to avoid local minima? Answer

22. Explain Vanishing and Exploding Gradients.

23. What is Learning Rate? How do low and high learning rates affect the performance and accuracy of a neural network? Answer

24. If the loss in a neural network is not decreasing after many training iterations, what could be the possible reasons?

Hint: Think of a low / high learning rate, local and global minima (maybe it is stuck in a local minimum), a high regularization parameter etc.

CNN (ConvNets)

25. What is Convolutional Neural Network? Explain various layers in a CNN? 

26. What are the Filters (Kernels) in CNN? What is Stride?

27. What do you mean by Padding in CNN? What is the difference between Zero Padding and Valid Padding?

28. What do you mean by Pooling in CNN? What are the various types of pooling? Explain Max Pooling, Min Pooling, Average Pooling and Sum Pooling.

29. What are the various hyperparameters in CNN which need to be tuned during the training process?

30. How is CNN different from traditional fully connected neural networks? Why can't we use fully connected neural networks for image recognition?

31. Suppose we have an input of n X n dimension and filter of f X f dimension. If we slide this filter over the input in the convolutional layer, what will be the dimension of the resulting output?

Answers to above questions

CapsNets

32. What is Capsule Neural Network (CapsNets)? How is it different from CNN (ConvNets)? Answer

Computer Vision

33. What is computer vision? How does deep learning help in solving various computer vision problems? Answer

RNN

34. Explain RNN (Recurrent Neural Network). Why is RNN best suited for sequential data?

35. What do you mean by feedback loop in RNN?

36. What are the various types of RNN? Explain with example: One to One, One to Many, Many to One, and Many to Many RNN.

37. What is Bidirectional RNN?

38. What are the various issues with RNN? Explain Vanishing and Exploding Gradients. What are the various ways to solve these gradient issues in RNN?

39. What are the various advantages and disadvantages of RNN?

40. What are the various applications of RNN?

41. What are the differences between CNN and RNN?

LSTM

42. How does LSTM (Long Short Term Memory) solve Vanishing Gradient issue in RNN?

43. What are the gated cells in LSTM? What are the various types of gates used in LSTM?

44. What are the various applications of LSTM?

Answers to all questions of RNN and LSTM

Regularization

45. What are the main causes of overfitting and underfitting in a neural network?

46. What are the various regularization techniques used in a neural network?

47. Explain L1 and L2 Regularization techniques used in a neural network.

48. What is Dropout? How does it prevent overfitting in a neural network? What are its various advantages and disadvantages? Answer

49. What is Data Augmentation? How does it prevent overfitting in a neural network?

50. What is Early Stopping? How does it prevent overfitting in a neural network?

Answers to above questions

Learnable Parameters and Hyperparameters

51. What are the learnable parameters in a neural network? Explain with an example.

52. What are the various hyperparameters used in a neural network? What are the various ways to optimize these hyper-parameters?

Answers to above questions

53. How will you manually calculate the number of weights and biases in a fully connected neural network? Explain with an example. YouTube video

54. How will you manually calculate the number of weights and biases in a convolutional neural network (CNN)? Explain with an example. YouTube video

Transfer Learning

55. What do you mean by Transfer Learning and Fine-tuning a model? What are its various advantages? What are the various steps to fine-tune a model? Answer

Autoencoders

56. What are Autoencoders? What are the various components of an autoencoder? Explain the encoder, decoder and bottleneck. How does an autoencoder work?

57. What do you mean by latent space representation and reconstruction loss in an autoencoder?

58. What are the various properties of an autoencoder?

59. What are the various types of an autoencoder? Explain Undercomplete autoencoder, Sparse autoencoder, Denoising autoencoder, Convolutional autoencoder, Contractive autoencoders and Deep autoencoders.

60. How do we add regularization capabilities to autoencoders?

61. What are the various applications of an autoencoder?

62. What are the various hyperparameters we need to tune in an autoencoder?

63. How will you compare Autoencoders with PCA (Principal Component Analysis)?

64. What is RBM (Restricted Boltzmann Machine)? What is the difference between an Autoencoder and an RBM?

Answers to above questions

Frameworks

65. What are the various frameworks available to implement deep learning models? What should be the characteristics of an ideal deep learning framework? Answer

TensorFlow

66. Explain TensorFlow architecture.

67. What is a Tensor? Explain Tensor Datatypes and Ranks.

68. What are Constants, Placeholders and Variables in TensorFlow? Why do we need to initialize variables explicitly?

69. What is a Computational Graph? What are the nodes and edges in it? How to build and run the graph using session? What are its various advantages?

70. What is TensorBoard? How is it useful?

71. What is a TensorFlow Pipeline? How is it useful?

72. Explain these terms: Feed Dictionary and Estimators

Answers to above questions

73. Write a sample code to demonstrate constants, placeholders and variables in TensorFlow? Answer

74. Write a sample code using TensorFlow to demonstrate gradient descent? Answer

75. Implement a Linear Classification Model using TensorFlow Estimator. Answer

Keras

76. What do you know about Keras framework? What are its various advantages and limitations? Answer

77. How will you build a basic sequential model using Keras? Answer

78. How will you build a basic CNN model using Keras? Answer 

79. How will you build a basic LSTM model using Keras?

80. What are the various pre-trained models available in Keras? How are these pre-trained models useful for us?

81. How will you fine-tune VGG16 model for image classification? Answer

82. How will you fine-tune MobileNet model for image classification? What is the difference between VGG16 and MobileNet model?

Some of the above questions don't have answers yet. I am still writing answers for them and will keep this list updated. Although the list does not yet contain 100+ questions as claimed in the title, I will soon take the count beyond 100.