Ensemble
ECOC (Error Correcting Output Codes)
Multiclass classification
OVR (One vs Rest): Red vs [Blue, Green] - Logistic regression
OVO (One vs One): Red vs Blue - SVM
ECOC: Random code assignment
In multiclass classification, we need to encode a categorical variable into several dummy variables. There are three ways of encoding, and ECOC is introduced as a way of adding randomness.
Random code assignment worked as well as optimally constructed error-correcting codes. The idea is that the redundant "error-correcting" bits allow for some inaccuracies and can improve performance.

Learn a separate classifier for each of the L = 15 bit columns of the code matrix C.
At a test point x, $\hat{p}_l(x)$ is the predicted probability of a one for the l-th response.
The discriminant function for class k is $\delta_k(x) = \sum_{l=1}^{L} |C_{kl} - \hat{p}_l(x)|$, and we classify to the class with the smallest $\delta_k(x)$.
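A minimal sketch of this decoding rule (my own illustration, not from the text), assuming integer class labels 0, ..., K-1, a random 0/1 code matrix, and scikit-learn's LogisticRegression as the per-bit classifier; the function names here are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ecoc(X, y, n_classes, L=15, seed=0):
    """Fit one binary classifier per code bit, using a random code assignment."""
    rng = np.random.default_rng(seed)
    # Random K x L code matrix; assumes every column contains both 0s and 1s.
    C = rng.integers(0, 2, size=(n_classes, L))
    classifiers = []
    for l in range(L):
        bits = C[y, l]   # relabel each sample by the l-th bit of its class code word
        classifiers.append(LogisticRegression(max_iter=1000).fit(X, bits))
    return C, classifiers

def predict_ecoc(X, C, classifiers):
    """Classify to the code word closest (in L1 distance) to the predicted bit probabilities."""
    # p_hat[i, l]: predicted probability that bit l equals 1 for sample i
    p_hat = np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])
    # delta[i, k] = sum_l |C[k, l] - p_hat[i, l]|
    delta = np.abs(C[None, :, :] - p_hat[:, None, :]).sum(axis=2)
    return delta.argmin(axis=1)
```

The redundancy of the L = 15 bits is what gives the error-correcting behavior: a few badly estimated probabilities do not change which code word is closest.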
Boosting and Regularization Paths
Forward-stagewise (FS) regression
Forward-stagewise regression is a constrained version of forward-stepwise regression. At each step, the algorithm identifies the variable most correlated with the current residual and moves its coefficient by a small amount in that direction. The specific algorithm is given on page 5 of the paper below, which presents forward-stagewise regression in detail.
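A rough NumPy sketch of the idea (mine, not the paper's exact algorithm), assuming the columns of X are standardized and y is centered: at each step, nudge the coefficient of the predictor most correlated with the current residual by ε in the direction of that correlation.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000):
    """Incremental forward-stagewise linear regression (sketch).

    Assumes standardized columns of X and centered y."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.copy()
    path = []
    for _ in range(n_steps):
        corr = X.T @ residual              # inner product of each predictor with the residual
        j = np.argmax(np.abs(corr))        # most correlated predictor
        delta = eps * np.sign(corr[j])     # tiny step in the direction of the correlation
        beta[j] += delta
        residual -= delta * X[:, j]        # update the residual
        path.append(beta.copy())
    return beta, np.array(path)
```

Because each step changes only one coefficient by ε, the algorithm traces out a slow, regularized coefficient path rather than jumping to the least-squares fit.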
Generalized Additive Models
Effects are often not linear: f might be a non-linear function.
Tree
Penalized regression
Penalty approach
$\mathcal{T} = \{T_k\}$ is the dictionary of all possible J-terminal-node regression trees that can be realized on the training data as basis functions in $\mathbb{R}^p$. The coefficients of all the trees are estimated jointly by penalized least squares.
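Written out (assuming squared-error loss and the lasso penalty, which is the "penalty approach" this section refers to), the criterion would be

$$\hat{\alpha}(\lambda) = \arg\min_{\alpha} \sum_{i=1}^{N} \Big( y_i - \sum_{k} \alpha_k T_k(x_i) \Big)^2 + \lambda \sum_{k} |\alpha_k|.$$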
However, when the number of basis functions becomes very large, solving the optimization problem with the lasso penalty directly is not feasible. In this case, the following forward-stagewise approach can be used as an approximation to the lasso penalty approach.
Forward-stagewise approach (FS)
Initialization: $\hat{\alpha}_k = 0$, $k = 1, \ldots, K$.
For m = 1 to M, repeat:
(a) $(\beta^*, k^*) = \arg\min_{\beta, k} \sum_{i=1}^{N} \Big( y_i - \sum_{l=1}^{K} \hat{\alpha}_l T_l(x_i) - \beta T_k(x_i) \Big)^2$
(b) $\hat{\alpha}_{k^*} \leftarrow \hat{\alpha}_{k^*} + \varepsilon \cdot \mathrm{sign}(\beta^*)$
Output: $f_M(x) = \sum_{k=1}^{K} \hat{\alpha}_k T_k(x)$
This algorithm leverages the forward-stagewise idea: as in the regression case, it selects the tree most correlated with the current residual. The iteration number M is inversely related to λ; when λ = ∞, the forward-stagewise algorithm is at its initialization stage, where every coefficient is zero (M = 0).
For intermediate values of λ, $|\hat{\alpha}_k(\lambda)| \le |\hat{\alpha}_k(0)|$; likewise, for a finite number of steps M the coefficients are smaller in magnitude than the least-squares values reached as M grows large. When λ = 0, the regularization term disappears, so the solution becomes the least-squares solution.

When all of the basis functions $T_k$ are mutually uncorrelated, FS yields exactly the same solution as the lasso with bound parameter $t = \sum_k |\alpha_k|$. (Even if the functions are correlated, FS coincides with the lasso whenever $\hat{\alpha}_k(\lambda)$ is a monotone function of λ.) The penalty parameter λ is inversely related to the bound t. (In the right panel of the corresponding figure, M = 250 and ε = 0.01.)
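Equivalently (a standard restatement, not spelled out in the source), the same problem can be written in bound form, which makes the inverse relationship between λ and t explicit:

$$\min_{\alpha} \sum_{i=1}^{N} \Big( y_i - \sum_{k} \alpha_k T_k(x_i) \Big)^2 \quad \text{subject to} \quad \sum_{k} |\alpha_k| \le t.$$

A loose bound (large t) corresponds to a small λ, and t = 0 corresponds to λ = ∞.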
"Bet on Sparsity" Principle
Usually, shrinkage with an L1 penalty is better suited to sparse situations, where only a few basis functions have nonzero coefficients.
Dense scenario: 10,000 data points and a linear combination of a million trees with coefficients drawn from a Gaussian distribution. → Ridge works better than lasso.

Sparse scenario: 10,000 data points and a linear combination of 1,000 trees with nonzero coefficients. → Lasso works well.
The degree of sparseness is determined by the true target function and the chosen dictionary $\mathcal{T}$. The noise-to-signal ratio (NSR) and the size of the training set are also key, because larger training sets allow coefficients to be estimated with smaller standard errors. Increasing the size of the dictionary may lead to a sparser representation of our function, but also to higher variance in the search.

In this example, NSR is defined as $\mathrm{Var}(Y \mid \eta(X)) / \mathrm{Var}(\eta(X))$. The numerator is the variance of Y around the model (the unexplained part), and the denominator is the variance of the model itself. The bigger the NSR, the larger the share of unexplainable error. It is well known that the lasso works well in sparse settings.
>> Reference <<
Regularization Paths, Over-fitting and Margins

The lasso suffers somewhat from the multicollinearity problem (you can check the reason in Ex. 3.28): when the data contain an exact copy $X_{j^*} = X_j$ of a predictor, the coefficients for $X_j$ and $X_{j^*}$ become $a/2$ and $a/2$, each half of the original coefficient $a$.
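A small sketch of this behavior (my own illustration, not from the text) using scikit-learn's Lasso. Note that with an exactly duplicated column the split between the two copies is not unique, so the solver may return any non-negative split; what is stable is that the two coefficients sum to roughly the single-copy coefficient.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 3.0 * x + rng.normal(scale=0.5, size=n)

# Fit with a single copy of the predictor.
single = Lasso(alpha=0.1).fit(x[:, None], y)

# Fit with an exact duplicate column appended.
X_dup = np.column_stack([x, x])
dup = Lasso(alpha=0.1).fit(X_dup, y)

print("single-copy coefficient:", single.coef_)
print("duplicated-copy coefficients:", dup.coef_)  # the two coefficients sum to about the single-copy value
```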
Monotone version of the lasso
>> Reference <<
We create an expanded data matrix $\tilde{X} = [X : -X]$.
In this setting, the monotone lasso coefficient path $\beta(l)$ for the dataset $\tilde{X}$ is the solution to the differential equation $\frac{\partial \beta}{\partial l} = \rho^{ml}(\beta(l))$, where $\rho^{ml}(\beta)$ is the monotone lasso move direction, standardized to have unit $L_1$ norm, and $l$ is the $L_1$ arc length of the path.
The margin of a fitted model $f(x) = \sum_k \alpha_k T_k(x)$ is defined as $m(f) = \min_i \, y_i f(x_i) / \sum_k |\alpha_k|$.
Learning Ensembles
The learning-ensemble approach can be divided into two steps:
1. A finite dictionary $\mathcal{T}_L = \{T_1(x), \ldots, T_M(x)\}$ of basis functions is induced from the training data.
2. $\alpha(\lambda) = \arg\min_{\alpha} \sum_{i=1}^{N} L\Big[y_i, \alpha_0 + \sum_{m=1}^{M} \alpha_m T_m(x_i)\Big] + \lambda \sum_{m=1}^{M} |\alpha_m|$
These steps can be seen as a way of post-processing boosting or random forests, because in the first step $\mathcal{T}_L$ is already produced by gradient boosting or a random forest.
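A hedged sketch of this post-processing idea in scikit-learn (the estimator choices, settings, and the function name are my own assumptions, not from the text): grow the dictionary with gradient boosting, treat each tree's predictions as a basis function, then re-weight the trees with the lasso.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

def post_process_ensemble(X, y, n_trees=500, lam=0.01):
    """Step 1: induce a dictionary of trees; step 2: lasso post-processing of their predictions."""
    gbm = GradientBoostingRegressor(n_estimators=n_trees, max_depth=3,
                                    learning_rate=0.1).fit(X, y)
    # Basis matrix: column m holds the m-th tree's predictions on the training data.
    B = np.column_stack([est[0].predict(X) for est in gbm.estimators_])
    lasso = Lasso(alpha=lam).fit(B, y)   # re-weights (and zeroes out) individual trees
    return gbm, lasso
```

Trees that receive a zero lasso weight can be dropped, so the post-processed ensemble is both re-weighted and pruned.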
The measure of (lack of) relevance of a basis function, evaluated on the training data with the loss function, is $Q(\gamma) = \min_{c_0, c_1} \sum_{i=1}^{N} L\big(y_i, c_0 + c_1 b(x_i; \gamma)\big)$. The global minimizer is $\gamma^* = \arg\min_{\gamma \in \Gamma} Q(\gamma)$, so $Q(\gamma) \ge Q(\gamma^*)$ for every γ.
Too narrow a σ suggests that too many of the $b(x; \gamma_m)$ look alike and are similar to $b(x; \gamma^*)$.
Too wide a σ suggests a large spread in the $b(x; \gamma_m)$, but possibly one consisting of many irrelevant candidates.
ISLE Ensemble Generation
1. $f_0(x) = \arg\min_{c} \sum_{i=1}^{N} L(y_i, c)$
2. For m = 1 to M do
(a) $\gamma_m = \arg\min_{\gamma} \sum_{i \in S_m(\eta)} L\big(y_i, f_{m-1}(x_i) + b(x_i; \gamma)\big)$
(b) $f_m(x) = f_{m-1}(x) + \nu \, b(x; \gamma_m)$
3. $\mathcal{T}_{\mathrm{ISLE}} = \{b(x; \gamma_1), b(x; \gamma_2), \ldots, b(x; \gamma_M)\}$
$S_m(\eta)$ refers to a subsample of $N \times \eta$ ($\eta \in (0, 1]$) of the training observations, typically drawn without replacement.
$\nu \in [0, 1]$ introduces memory into the randomization process; the larger ν is, the more the procedure avoids $b(x; \gamma)$ similar to those found before.
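A rough Python sketch of this generation loop (mine, not from the text), assuming squared-error loss and a shallow scikit-learn regression tree as $b(x; \gamma)$; the function name and tree settings are my own choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def isle_generate(X, y, M=500, eta=0.5, nu=0.1, seed=0):
    """ISLE-style ensemble generation: subsample a fraction eta without replacement
    at each step, fit a base learner to the current residual, and keep memory nu."""
    rng = np.random.default_rng(seed)
    n = len(y)
    f = np.full(n, y.mean())       # f_0: the best constant under squared-error loss
    trees = []
    for _ in range(M):
        idx = rng.choice(n, size=int(eta * n), replace=False)   # S_m(eta)
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X[idx], y[idx] - f[idx])   # fit b(x; gamma_m) to the residual on the subsample
        trees.append(tree)
        f += nu * tree.predict(X)           # memory: nu = 0 ignores all previous fits
    return trees                            # T_ISLE = {b(x; gamma_1), ..., b(x; gamma_M)}
```

Setting ν = 0 and η = 1 with replacement recovers bagging-style generation, while ν > 0 with η < 1 gives the recommended ISLE behavior described below.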
The randomization schemes we have already dealt with are special cases of ISLE ensemble generation:
Bagging: has η = 1, samples with replacement, and ν = 0. Sampling without replacement with η = 1/2 is roughly equivalent to sampling with replacement with η = 1, and the former is much more efficient.
Random forest: more randomness is introduced through the selection of the splitting variable; the smaller η (< 1/2) is, the smaller the corresponding m (the number of candidate splitting variables) in a random forest.
Gradient boosting: with shrinkage, uses η = 1, but typically does not produce sufficient width σ.
Stochastic gradient boosting: follows this recipe closely, combining subsampling with shrinkage.
The Importance Sampled Learning Ensemble (ISLE) is recommended with ν = 0.1 and η ≤ 1/2.
Whether a setting is sparse or dense can be assessed with the noise-to-signal ratio (NSR).
Revisit the Gradient Tree Boosting Algorithm
In boosted trees, a single tree is formally expressed as $T(x; \Theta) = \sum_{j=1}^{J} \gamma_j I(x \in R_j)$ with parameters $\Theta = \{R_j, \gamma_j\}_1^J$. These parameters are found by solving $\hat{\Theta} = \arg\min_{\Theta} \sum_{j=1}^{J} \sum_{x_i \in R_j} L(y_i, \gamma_j)$.
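For reference (this is the standard forward-stagewise boosting step rather than something restated in this section), the parameters of each new tree at step m would be found by

$$\hat{\Theta}_m = \arg\min_{\Theta_m} \sum_{i=1}^{N} L\big(y_i, f_{m-1}(x_i) + T(x_i; \Theta_m)\big),$$

which gradient tree boosting approximates by fitting a tree to the negative gradient of the loss.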
The parameter
