Ensemble
Boosting: Instead of training all the models independently as in bagging, boosting trains models sequentially, with each new model trained to correct the errors made by the previous ones. After the first tree is built, the weights of the observations that are hard to classify are increased and the weights of those that are easy to classify are reduced; this reweighted data is used to build the next tree. The process is repeated for a defined number of iterations, and the prediction of the final ensemble is the weighted sum of the predictions made by the individual tree models. This addresses a weakness of the standard ensemble approach, in which the independently trained models may all make the same mistake.

GBM (gradient boosting machine) drives this procedure with a loss function: each new tree is fit to the residuals of the current ensemble rather than to the original output variable, so each tree can stay small while still improving the prediction in the regions where it is currently poor. Concretely:

1. Compute the error by subtracting the forecasted value from the target value: e1 = y - y1_forecasted.
2. Build a new model with these errors as the target variable; its predictions are the forecasted errors, e1_forecasted.
3. Add the forecasted errors to the previous predictions and repeat for the remaining iterations.
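Below is a minimal from-scratch sketch of this residual-fitting loop for regression with squared-error loss, assuming scikit-learn's DecisionTreeRegressor as the base learner. The synthetic data and the names n_rounds and learning_rate are illustrative choices, not from the text above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative synthetic regression data (assumption, not from the text).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_rounds = 50        # defined number of iterations
learning_rate = 0.1  # weight applied to each tree's contribution

# Start from a constant prediction (the mean of the targets).
prediction = np.full_like(y, y.mean())
trees = []

for _ in range(n_rounds):
    # e = y - y_forecasted: the errors the next tree must correct.
    residuals = y - prediction
    # Fit a small tree to the residuals, not to the original targets.
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)
    trees.append(tree)
    # Add the forecasted errors (scaled by the learning rate) to the
    # running prediction; the ensemble output is this weighted sum.
    prediction += learning_rate * tree.predict(X)

print("final training MSE:", np.mean((y - prediction) ** 2))
```

With enough rounds, the summed tree outputs approximate y, which is exactly the weighted-sum-of-predictions view of the final ensemble described above.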