
Commit bc81463

CharuChhimpa authored and norvig committed
Added ensemble learner (#884)
* Added ensemble learner in learning.ipynb
* Added ensemble_learner.jpg
* Update learning.ipynb
* Update learning.ipynb
1 parent ab2377b commit bc81463

File tree

2 files changed: +44 −0 lines


images/ensemble_learner.jpg

16.2 KB

learning.ipynb

Lines changed: 44 additions & 0 deletions
@@ -1716,6 +1716,50 @@
     "The correct output is 0, which means the item belongs in the first class, \"setosa\". Note that the Perceptron algorithm is not perfect and may produce false classifications."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## ENSEMBLE LEARNER\n",
+    "\n",
+    "### Overview\n",
+    "\n",
+    "Ensemble Learning improves the performance of our model by combining several learners. It improves the stability and predictive power of the model. Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance, decrease bias, or improve predictions.\n",
+    "\n",
+    "\n",
+    "\n",
+    "![ensemble_learner.jpg](images/ensemble_learner.jpg)\n",
+    "\n",
+    "\n",
+    "Some commonly used Ensemble Learning techniques are:\n",
+    "\n",
+    "1. Bagging: Bagging trains similar learners on small sample populations and then takes the mean of all the predictions. It helps us reduce variance error.\n",
+    "\n",
+    "2. Boosting: Boosting is an iterative technique which adjusts the weight of an observation based on the last classification: if an observation was classified incorrectly, its weight is increased, and vice versa. It helps us reduce bias error.\n",
+    "\n",
+    "3. Stacking: This is a very interesting way of combining models. Here we use a learner to combine the output of several different learners. It can decrease either bias or variance error, depending on the learners we use.\n",
+    "\n",
+    "### Implementation\n",
+    "\n",
+    "Below is the implementation of the Ensemble Learner."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "psource(EnsembleLearner)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This algorithm takes as input a list of learning algorithms, has them vote, and returns the predicted result."
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
