
Monday 18 February 2019

Similarities and Differences between Bagging and Boosting in Machine Learning

Bagging stands for Bootstrap Aggregation. Bagging and Boosting are Ensemble Learning Techniques that improve the performance of an algorithm by handling the bias-variance trade-off.

I have summed up the similarities and differences between bagging and boosting.

Similarities between Bagging and Boosting:

1. Both Bagging and Boosting are Ensemble Learning Techniques.

2. Both Bagging and Boosting algorithms combine the outputs of multiple "weak learners" to make more accurate predictions.

3. Both Bagging and Boosting algorithms help in dealing with the bias-variance trade-off.

4. Both Bagging and Boosting can be used to solve classification as well as regression problems.

Differences between Bagging and Boosting:

1. Bagging mainly controls high variance in a model, while boosting controls both bias and variance. So, boosting is often considered to be more effective.

2. In bagging, each weak learner has an equal say in the final decision, while in boosting, the weak learners that achieve higher accuracy have more say in the final decision.

In other words, in bagging each weak learner's vote carries equal weight (just like in a democracy), but in boosting, a weak learner that makes better predictions than the others is given a higher weight for its vote (see the first sketch after this list).

3. In bagging, all weak learners are independent of each other, so the order in which they are created does not matter. But in boosting, the order in which the weak learners are created does matter.

Boosting is a sequential technique: it builds the ensemble from weak learners in an iterative way. At any iteration t, the training instances are reweighted based on the outcomes of the previous iteration t-1: instances predicted correctly are given a lower weight, and misclassified instances are given a higher weight.

So, in boosting, we proceed sequentially, putting more weight on instances with wrong predictions and high errors. The general idea is that instances which are hard to predict correctly receive more focus during learning, so that the model learns from its past mistakes (see the second sketch after this list).

When each model in the ensemble is trained on a random subsample of the training set, this is called Stochastic Gradient Boosting, which improves the generalization capability of the model (see the scikit-learn example after this list).

4. Examples: 

Bagging: Random Forest
Boosting: AdaBoost, GBM (Gradient Boosting Machine), XGBoost (Extreme Gradient Boosting)
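
To make the voting difference concrete, here is a minimal sketch of how the two combination rules behave. The predictions and error rates are made-up numbers for illustration, and the boosting weights follow the AdaBoost formula alpha = 0.5 * ln((1 - err) / err):

```python
import numpy as np

# Made-up predictions of three weak learners on five test points (labels are -1/+1)
preds = np.array([
    [ 1,  1, -1,  1, -1],   # learner A
    [ 1, -1, -1,  1,  1],   # learner B
    [-1,  1,  1,  1, -1],   # learner C
])

# Bagging: every learner's vote carries equal weight -> simple majority vote
bagging_vote = np.sign(preds.sum(axis=0))

# Boosting (AdaBoost-style): more accurate learners get a larger say
errors = np.array([0.20, 0.35, 0.45])          # assumed weighted error rates
alphas = 0.5 * np.log((1 - errors) / errors)   # higher accuracy -> larger alpha
boosting_vote = np.sign(alphas @ preds)

print("Equal-weight (bagging) vote:      ", bagging_vote)
print("Accuracy-weighted (boosting) vote:", boosting_vote)
```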
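
The sequential reweighting described in point 3 can be sketched as a toy AdaBoost-style loop. This is only an illustration on synthetic data with decision stumps as the weak learners, not a production implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data, labels mapped to {-1, +1}
X, y = make_classification(n_samples=200, random_state=0)
y = np.where(y == 1, 1, -1)

n = len(y)
sample_weights = np.full(n, 1.0 / n)      # start with equal instance weights
learners, alphas = [], []

for t in range(5):                        # 5 boosting rounds
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=sample_weights)
    pred = stump.predict(X)

    err = sample_weights[pred != y].sum()          # weighted error at round t
    err = np.clip(err, 1e-10, 1 - 1e-10)           # avoid division by zero
    alpha = 0.5 * np.log((1 - err) / err)          # this learner's say

    # Reweight: misclassified instances get a higher weight, correct ones a lower weight
    sample_weights *= np.exp(-alpha * y * pred)
    sample_weights /= sample_weights.sum()

    learners.append(stump)
    alphas.append(alpha)

# Final prediction = sign of the alpha-weighted sum of the weak learners' outputs
ensemble_pred = np.sign(sum(a * l.predict(X) for a, l in zip(alphas, learners)))
print("Training accuracy:", (ensemble_pred == y).mean())
```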
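
Finally, a short scikit-learn comparison of the example algorithms above (XGBoost is a separate library, so it is left out here). The dataset and parameter values are arbitrary choices for illustration; note subsample=0.8, which fits each tree on a random fraction of the training data and makes the gradient boosting model stochastic, as mentioned earlier:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=42)

models = {
    "Random Forest (bagging)": RandomForestClassifier(n_estimators=100, random_state=42),
    "AdaBoost (boosting)": AdaBoostClassifier(n_estimators=100, random_state=42),
    # subsample < 1.0 trains each tree on a random subsample -> stochastic gradient boosting
    "Stochastic GBM (boosting)": GradientBoostingClassifier(n_estimators=100,
                                                            subsample=0.8, random_state=42),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```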


About the Author

I have more than 10 years of experience in the IT industry. Linkedin Profile

I am currently experimenting with neural networks in deep learning. I am learning Python, TensorFlow and Keras.

Author: I am the author of a book on deep learning.

Quiz: I run an online quiz on machine learning and deep learning.