Ensemble
ECOC (Error Correcting Output Codes)
Multiclass classification
OVR (One vs Rest): Red vs [Blue, Green] - e.g., logistic regression
OVO (One vs One): Red vs Blue - e.g., SVM
ECOC: Random code assignment
In multiclass classification, we need to encode the categorical response into several dummy (binary) variables. There are three ways of encoding, and ECOC is introduced as the one that adds randomness.
Random code assignment was found to work as well as optimally constructed error-correcting codes. The idea is that the redundant "error-correcting" bits allow for some inaccuracies and can improve performance.
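As a minimal sketch of the three encodings, the snippet below compares OVR, OVO, and a random-code ECOC wrapper from scikit-learn on a three-class problem; the iris dataset, the base learner, and the code_size value are illustrative choices, not taken from the text.

```python
# Compare OVR, OVO, and random-code ECOC on a 3-class problem (illustrative only).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=1000)
strategies = {
    "OVR": OneVsRestClassifier(base),
    "OVO": OneVsOneClassifier(base),
    # code_size > 1 adds redundant "error-correcting" bits for each class
    "ECOC": OutputCodeClassifier(base, code_size=4.0, random_state=0),
}
for name, clf in strategies.items():
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```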

Learn a separate classifier for each of the $L$ bit positions (columns) of the code matrix. At a test point $x$, $\hat{p}_\ell(x)$ is the predicted probability of a one for the $\ell$th response; the predicted class is the one whose codeword is closest to these probabilities, $\hat{G}(x) = \arg\min_k \sum_{\ell=1}^{L} \lvert C_{k\ell} - \hat{p}_\ell(x) \rvert$.
Boosting and Regularization Paths
Forward-stagewise (FS) regression
Forward-stagewise regression is a constrained version of forward-stepwise regression. At each step, the algorithm identifies the variable most correlated with the current residual and moves that variable's coefficient a small amount in the direction of the correlation. The full algorithm is given on page 5 of the referenced paper.
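A minimal sketch of incremental forward-stagewise regression, assuming standardized predictors; the step size, iteration count, and synthetic data below are illustrative, not the paper's setup.

```python
# Incremental forward-stagewise regression (FS_eps) sketch.
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """Repeatedly find the predictor most correlated with the current residual
    and move its coefficient by eps in the direction of that correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - y.mean()               # start from the intercept-only fit
    for _ in range(n_steps):
        corr = X.T @ residual             # inner products with the residual
        j = int(np.argmax(np.abs(corr)))  # most correlated predictor
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * X[:, j]        # update the residual incrementally
    return beta

# Example on synthetic data with columns scaled to unit norm.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X /= np.linalg.norm(X, axis=0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
print(np.round(forward_stagewise(X, y), 2))
```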
Generalized Additive Models
In practice the effects of the predictors are often not linear; each predictor may enter the model through its own non-linear function, which is the idea behind generalized additive models.
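For reference, the generic additive-model form this refers to, with each predictor entering through its own smooth function $f_j$:

```latex
% Generalized additive model: a sum of smooth one-dimensional functions.
\mu(x) \;=\; \alpha + f_1(x_1) + f_2(x_2) + \dots + f_p(x_p)
```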
Tree
Penalized regression
Penalty approach
$\mathcal{T} = \{T_k\}$ is the dictionary of all possible $J$-terminal-node regression trees that can be realized on the training data as basis functions in $\mathbb{R}^p$. The linear model is $f(x) = \sum_{k=1}^{K} \alpha_k T_k(x)$, and the coefficients $\alpha_k$ of all trees are estimated by penalized least squares.
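Written out, the penalized least-squares problem over the tree dictionary takes the lasso form below (notation follows the surrounding discussion):

```latex
% Lasso-penalized least squares over the dictionary of trees T = {T_k}.
\min_{\alpha} \; \sum_{i=1}^{N} \Bigl( y_i - \sum_{k=1}^{K} \alpha_k T_k(x_i) \Bigr)^{2}
\;+\; \lambda \sum_{k=1}^{K} \lvert \alpha_k \rvert
```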
However, when the number of basis functions becomes this large, solving the optimization problem above with the lasso penalty directly is not possible. In this case, the following forward-stagewise approach can be used as an approximation to the lasso penalty approach.
Forward Stagewise approach (FS)
Initialization: all coefficients start at zero ($\alpha_k = 0$, $k = 1, \dots, K$), with a small step size $\varepsilon > 0$ and a large number of iterations $M$.
This algorithm leverages the forward-stagewise algorithm. As in the regression case, at each iteration it adopts the tree most correlated with the current residual and increments that tree's coefficient by $\varepsilon$. The number of iterations $M$ is inversely related to $\lambda$: $M = 0$ corresponds to $\lambda = \infty$, the initialization stage in which every coefficient is zero, and $M = \infty$ corresponds to $\lambda = 0$. When $\lambda = 0$, the regularization term disappears, so the solution becomes the least-squares solution.
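A minimal sketch of this forward-stagewise procedure over a finite dictionary of trees, with a lasso fit on the same basis for comparison; the dictionary construction (small trees on bootstrap samples), $\varepsilon$, $M$, and the lasso penalty are illustrative choices, not the text's exact setup.

```python
# Forward stagewise over a finite tree dictionary, compared with a lasso fit.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(400)

# Build a dictionary of K small trees grown on bootstrap samples.
K = 100
trees = [DecisionTreeRegressor(max_depth=2).fit(X[idx], y[idx])
         for idx in (rng.integers(0, len(y), len(y)) for _ in range(K))]
B = np.column_stack([t.predict(X) for t in trees])   # N x K basis matrix
B = (B - B.mean(0)) / B.std(0)                       # standardize the basis

# Forward stagewise: nudge the coefficient of the most correlated tree by eps.
eps, M = 0.01, 2000
alpha = np.zeros(K)
residual = y - y.mean()
for _ in range(M):
    corr = B.T @ residual
    k = int(np.argmax(np.abs(corr)))
    step = eps * np.sign(corr[k])
    alpha[k] += step
    residual -= step * B[:, k]

# For small eps, the FS solution approximates a lasso solution with L1 bound
# t ~ M * eps (exact agreement needs the conditions discussed in the text).
lasso = Lasso(alpha=0.05).fit(B, y)
print("FS    nonzero trees:", int(np.sum(alpha != 0)),
      " L1 norm:", round(float(np.abs(alpha).sum()), 2))
print("Lasso nonzero trees:", int(np.sum(lasso.coef_ != 0)),
      " L1 norm:", round(float(np.abs(lasso.coef_).sum()), 2))
```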

When all of the basis functions are mutually uncorrelated, FS gives exactly the same solution as the lasso with bound parameter $t = M\varepsilon$. (Even if the basis functions are correlated, FS still coincides with the lasso whenever each coefficient profile $|\hat{\alpha}_k(\lambda)|$ is a monotone function of $\lambda$.) The bound $t$ is inversely related to the Lagrange multiplier $\lambda$.
"Bet on Sparsity" Principle
Shrinkage with an $L_1$ penalty is better suited to sparse situations, where only a few basis functions have nonzero coefficients.
Dense scenario: 10,000 data points and a true function that is a linear combination of a million trees, with coefficients drawn from a Gaussian distribution. → Ridge works better than lasso.

Sparse scenario: 10,000 data points and a true function that is a linear combination of 1,000 trees with nonzero coefficients. → Lasso works well. (A scaled-down toy comparison of both scenarios is sketched below.)
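A scaled-down toy version of this comparison (not the book's experiment with a million trees), using linear features in place of the tree basis and letting each method choose its penalty by cross-validation:

```python
# Dense vs. sparse toy comparison of ridge and lasso (illustrative scale only).
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 600, 300
X = rng.standard_normal((n, p))

scenarios = {}
beta_dense = 0.1 * rng.standard_normal(p)            # every coefficient nonzero
scenarios["dense"] = X @ beta_dense + rng.standard_normal(n)
beta_sparse = np.zeros(p)
beta_sparse[:10] = 2.0 * rng.standard_normal(10)     # only 10 nonzero coefficients
scenarios["sparse"] = X @ beta_sparse + rng.standard_normal(n)

for name, y in scenarios.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    ridge = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(X_tr, y_tr)
    lasso = LassoCV(cv=5).fit(X_tr, y_tr)
    print(name, "ridge R^2:", round(ridge.score(X_te, y_te), 3),
                "lasso R^2:", round(lasso.score(X_te, y_te), 3))
```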
The degree of sparseness is determined by the true target function and the chosen dictionary $\mathcal{T}$. The noise-to-signal ratio (NSR) and the size of the training set also matter, because larger training sets allow the coefficients to be estimated with smaller standard errors. Increasing the size of the dictionary may lead to a sparser representation of our function, but the search becomes harder and the variance higher.

In this example, the NSR is defined as shown below. The numerator is the variance of the unexplained part $Y - \eta(X)$, and the denominator is the variance of the model $\eta(X)$. The larger the NSR, the larger the share of unexplainable error. The lasso is known to work well in sparse settings.
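The noise-to-signal ratio referred to above, with $\eta(X)$ denoting the true regression function:

```latex
% Noise-to-signal ratio: unexplained variance relative to the model (signal) variance.
\mathrm{NSR} \;=\; \frac{\operatorname{Var}\bigl(Y - \eta(X)\bigr)}{\operatorname{Var}\bigl(\eta(X)\bigr)}
```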
Regularization Paths, Over-fitting and Margins

The lasso suffers somewhat from the multi-collinearity problem (you can check the reason in Ex. 3.28). In that exercise, when the design contains an exact copy $X_j^{*} = X_j$ of a variable, the lasso coefficients of the two copies are no longer unique: they only need to sum to the original coefficient $\hat{\beta}_j = a$ and share its sign, for example $a/2$ each, half of the original coefficient.
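A small illustration of this point: duplicating a column and refitting a lasso shows that the two copies' coefficients sum to roughly the original coefficient, while the particular split between them is just whatever the solver happens to return. The data and the penalty value are illustrative.

```python
# Lasso with an exact duplicate column: the split of weight between the copies
# is not unique, but their sum matches the single-copy coefficient.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(500)

single = Lasso(alpha=0.05).fit(X, y)
X_dup = np.column_stack([X, X[:, 0]])     # exact copy of the first column
dup = Lasso(alpha=0.05).fit(X_dup, y)

print("coef of X0 without the copy:", round(single.coef_[0], 3))
print("coefs of the two copies:    ", round(dup.coef_[0], 3), round(dup.coef_[3], 3))
print("sum of the two copies:      ", round(dup.coef_[0] + dup.coef_[3], 3))
```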
Monotone version of the lasso
We create an expanded data matrix $\tilde{X} = [X : -X]$. In this setting, the coefficients in the expanded space are all nonnegative, and $FS_\varepsilon$ produces coefficient paths that are monotone nondecreasing in the path length; the monotone lasso is the limiting path as $\varepsilon \downarrow 0$.

The monotone lasso coefficient path $\beta(\ell)$ for a dataset $\tilde{X}$ is the solution to the differential equation $\partial \beta / \partial \ell = \rho^{ml}(\beta(\ell))$ with $\beta(0) = 0$. The move direction $\rho^{ml}$ is standardized to have unit $L_1$ norm, so $\ell$ measures the $L_1$ arc length of the path in the monotone-lasso situation.
The margin of a fitted model $f(x) = \sum_k \alpha_k T_k(x)$ (with $y_i \in \{-1, +1\}$) is defined as $m(f) = \min_i \, y_i f(x_i) / \sum_k |\alpha_k|$.
Learning Ensembles
Building a learning ensemble can be divided into two steps.
A finite dictionary $\mathcal{T}_L = \{T_1(x), \dots, T_M(x)\}$ of basis functions is induced from the training data.
A family of functions $f_\lambda(x)$ is then built by fitting a lasso path in this dictionary: $\alpha(\lambda) = \arg\min_{\alpha} \sum_{i=1}^{N} L\big[y_i, \alpha_0 + \sum_{m=1}^{M} \alpha_m T_m(x_i)\big] + \lambda \sum_{m=1}^{M} |\alpha_m|$.
These steps can be seen as a way of post-processing boosting or random forests, because the dictionary $\mathcal{T}_L$ in the first step is already fitted by gradient boosting or a random forest; the second step then reweights, and typically sparsifies, the collection of trees.
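A minimal sketch of this two-step post-processing, assuming a random forest as the ensemble generator and a cross-validated lasso for the reweighting; the dataset and all settings are illustrative.

```python
# Post-process a random forest: treat each tree's prediction as a basis
# function and fit a lasso over those basis functions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: induce a finite dictionary of trees from the training data.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Step 2: reweight the trees with a lasso, which typically zeroes many of them.
B_tr = np.column_stack([t.predict(X_tr) for t in forest.estimators_])
B_te = np.column_stack([t.predict(X_te) for t in forest.estimators_])
post = LassoCV(cv=5).fit(B_tr, y_tr)

print("forest R^2:        ", round(forest.score(X_te, y_te), 3))
print("post-processed R^2:", round(post.score(B_te, y_te), 3))
print("trees kept:", int(np.sum(post.coef_ != 0)), "of", len(forest.estimators_))
```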
The measure of (lack of) relevance of a basis function $b(x; \gamma)$ under loss $L$ is evaluated on the training data as $Q(\gamma) = \min_{c_0, c_1} \sum_{i=1}^{N} L\big(y_i, c_0 + c_1 b(x_i; \gamma)\big)$. If a single basis function were selected, it would be the global minimizer $\gamma^{\star} = \arg\min_{\gamma \in \Gamma} Q(\gamma)$, so any randomized selection yields $Q(\gamma) \ge Q(\gamma^{\star})$; the characteristic width $\sigma$ of the sampling scheme measures the expected size of this gap.
A width $\sigma$ that is too narrow suggests too many of the $b(x; \gamma_m)$ look alike, and are similar to $b(x; \gamma^{\star})$.
A width $\sigma$ that is too wide suggests a large spread in the $b(x; \gamma_m)$, but possibly consisting of many irrelevant functions.
$S_m(\eta)$ refers to a subsample of $N \cdot \eta$ (with $\eta \in (0, 1]$) of the training observations, typically drawn without replacement.
$\nu \in [0, 1]$ introduces memory into the randomization process; the larger $\nu$, the more the procedure avoids $b(x; \gamma)$ similar to those found before.
Randomization schemes we have already dealt with are special cases of this ISLE ensemble-generation recipe:
Bagging: uses $\eta = 1$ with sampling with replacement and $\nu = 1$. Sampling without replacement with $\eta = 1/2$ is roughly equivalent to sampling with replacement with $\eta = 1$, but the former is much more efficient.
Random forest: the sampling is similar, with more randomness introduced through the selection of the splitting variable; the smaller $\eta$ is, the smaller the effective number of candidate splitting variables $m$ is in the random-forest analogy.
Gradient boosting with shrinkage uses $\eta = 1$, but typically does not produce sufficient width $\sigma$.
Stochastic gradient boosting: subsamples the data at each step ($\eta < 1$) and so follows this recipe most closely.
The Importance Sampled Learning Ensemble (ISLE) is recommended with $\nu = 0.1$ and $\eta \le 1/2$, followed by the lasso post-processing step.
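A rough approximation of this generation recipe with standard scikit-learn tools: stochastic gradient boosting with subsampling standing in for $\eta \le 1/2$ and the learning rate standing in for $\nu = 0.1$. This mapping of parameters is my reading of the recipe, not an official ISLE implementation.

```python
# ISLE-style ensemble generation via stochastic gradient boosting (approximate).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
isle_like = GradientBoostingRegressor(
    n_estimators=500,
    learning_rate=0.1,   # plays the role of nu = 0.1 (memory / shrinkage)
    subsample=0.5,       # plays the role of eta = 1/2, drawn without replacement
    max_depth=3,
    random_state=0,
).fit(X, y)
# The individual trees in isle_like.estimators_ would then be post-processed
# with a lasso, exactly as in the random-forest sketch above.
```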
Whether the setting is sparse or dense can be gauged with the noise-to-signal ratio (NSR) introduced above.
Revisit the Gradient Tree Boosting Algorithm
In boosted trees, a single tree is formally expressed as $T(x; \Theta) = \sum_{j=1}^{J} \gamma_j \, I(x \in R_j)$ with parameters $\Theta = \{R_j, \gamma_j\}_{1}^{J}$. At the $m$-th step these parameters are found by solving $\hat{\Theta}_m = \arg\min_{\Theta} \sum_{i=1}^{N} L\big(y_i, f_{m-1}(x_i) + T(x_i; \Theta)\big)$, which in gradient tree boosting is approximated by fitting a tree to the negative gradient of the loss.
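A minimal sketch of a single gradient-tree-boosting step for squared-error loss, where the negative gradient is simply the residual: fit a small tree to it and add the tree with shrinkage. All settings are illustrative.

```python
# One tree at a time: gradient tree boosting for squared error (sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)

nu, M = 0.1, 100
f = np.full_like(y, y.mean())              # f_0: the best constant fit
for m in range(M):
    residual = y - f                       # negative gradient of squared-error loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)  # fit T(x; Theta_m)
    f += nu * tree.predict(X)              # f_m = f_{m-1} + nu * T(x; Theta_m)

print("training MSE:", round(float(np.mean((y - f) ** 2)), 4))
```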