
why one model isn’t enough?

Sirine Amrane


when you’re working in ml or dl, you quickly develop an obsession: boosting performance. you refine your features, tweak your hyperparameters, search for the best algorithm… but at some point, you hit a wall. this is where ensemble learning comes into play.

instead of betting on a single model, we combine multiple models to capture as many nuances in the data as possible. and trust me, when done right, it’s a game changer.

why ensemble learning works

you’ve probably heard of the “wisdom of the crowd”: individually, we may be wrong, but by aggregating several independent opinions, we often get a more accurate result. in ml, it’s the same.

a single model carries its own systematic errors. but if we combine multiple models with different perspectives, those errors tend to cancel out: variance is reduced and generalization improves.
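to see why independent errors cancel, here’s a minimal numpy sketch (not from the article; the noise scale and number of models are arbitrary assumptions): each simulated model predicts the true value plus its own independent noise, and averaging ten of them cuts the variance roughly by a factor of ten.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0
n_models = 10
n_trials = 10_000

# each simulated "model" predicts the true value plus its own independent noise
predictions = true_value + rng.normal(scale=0.5, size=(n_trials, n_models))

# error variance of a single model vs. the variance of the ensemble average
single_model_variance = predictions[:, 0].var()
ensemble_variance = predictions.mean(axis=1).var()

print(f"variance of a single model: {single_model_variance:.4f}")  # ~0.25
print(f"variance of the average:    {ensemble_variance:.4f}")      # ~0.025
```

in practice the models’ errors are never fully independent, so the real gain is smaller than this idealized factor, but the direction of the effect is the same.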

the three main approaches explained in detail

bagging (bootstrap aggregating): the idea here is to reduce variance by training multiple models on different subsamples of the training data. each model is trained independently, then their predictions are aggregated (by averaging or majority voting). the classic example is random forest, where multiple decision trees are trained on random data…
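to make that concrete, here’s a minimal scikit-learn sketch (a toy dataset and illustrative hyperparameters, not a recommendation) comparing a single decision tree with a bagged ensemble of trees and a random forest under 5-fold cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# toy classification dataset, just to compare a single tree against bagged ensembles
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=42)

models = {
    "single decision tree": DecisionTreeClassifier(random_state=42),
    # BaggingClassifier trains decision trees on bootstrap samples by default
    "bagging (100 trees)": BaggingClassifier(n_estimators=100, random_state=42),
    "random forest (100 trees)": RandomForestClassifier(n_estimators=100, random_state=42),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```

the exact numbers depend on the data, but the two ensembles should sit comfortably above the single tree, which is the whole point of bagging: lower variance from aggregating many independently trained models.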
