OVR (one vs. rest): Red vs. [Blue, Green] - logistic regression
OVO (one vs. one): Red vs. Blue - SVM
ECOC: random code assignment
In multiclass classification, we need to encode the categorical response into several dummy variables. There are three ways to do this encoding, and ECOC is introduced as the one that adds randomness.
Random code assignment worked as well as the optimally constructed error-correcting codes. The idea is that the redundant "error correcting" bits allow for some inaccuracies, and can improve performance.
Learn a separate classifier for each of the $L = 15$ bits (columns) of the code matrix.
At a test point $x$, $\hat{p}_l(x)$ is the predicted probability of a one for the $l$-th response.
$\delta_k(x) = \sum_{l=1}^{L} |C_{kl} - \hat{p}_l(x)|$, and the predicted class is $\arg\min_k \delta_k(x)$, as illustrated below.
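The illustration below is entirely made up: a hypothetical 4-class, 6-bit code matrix and invented bit probabilities (the book's example uses K = 10 classes and L = 15 bits). It is only a sketch of the decoding rule above.

```python
import numpy as np

# Hypothetical code matrix C: one row per class, one column per bit classifier.
C = np.array([
    [1, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],
])

# p_hat[l]: predicted probability of a "one" for the l-th bit at a test point x,
# produced by the l-th binary classifier (made-up numbers here).
p_hat = np.array([0.9, 0.2, 0.1, 0.7, 0.8, 0.3])

# delta_k(x) = sum_l |C[k, l] - p_hat[l]|: discrepancy between each class's
# code word and the predicted bit probabilities.
delta = np.abs(C - p_hat).sum(axis=1)
print("delta_k(x):", delta)
print("predicted class:", int(delta.argmin()))
```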
Boosting and Regularization Paths
Forward-stagewise (FS) regression
Forward-stagewise regression is a constrained version of forward-stepwise regression. At each step, the algorithm identifies the variable most correlated with the current residual and updates its coefficient by a small amount. You can check the specific algorithm on page 5 of the paper cited below.
This paper presents the detailed algorithm for forward-stagewise regression (a code sketch follows below).
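Here is a rough sketch of the incremental forward-stagewise idea, assuming standardized predictors, squared-error loss, and a small step size ε; it follows the generic FS recipe rather than the exact pseudocode on page 5 of the paper.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=1000):
    """Incremental forward-stagewise regression (sketch).

    At each step, find the predictor most correlated with the current
    residual and move its coefficient by eps in the direction of that
    correlation. Columns of X are assumed to be standardized.
    """
    beta = np.zeros(X.shape[1])
    residual = y - y.mean()               # start from the intercept-only fit
    for _ in range(n_steps):
        corr = X.T @ residual             # correlations (up to scaling) with the residual
        j = int(np.argmax(np.abs(corr)))  # most correlated predictor
        step = eps * np.sign(corr[j])
        beta[j] += step
        residual -= step * X[:, j]
    return beta

# Toy usage: the recovered coefficients approach roughly (3, 0, -2, 0, 0).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.standard_normal(100)
print(forward_stagewise(X, y).round(2))
```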
Generalized Additive Models
Effects are often not linear, so $f$ might be a non-linear function.
$\mathcal{T} = \{T_k\}$ is the dictionary of all possible $J$-terminal-node regression trees that can be realized on the training data as basis functions in $\mathbb{R}^p$. The coefficients for all of these trees are then estimated jointly by least squares with a lasso penalty.
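In symbols, the full-dictionary fit is the lasso-penalized problem below (a restatement of the setup above; $K$ denotes the number of trees in the dictionary, which is astronomically large in practice):

$$ f(x) = \alpha_0 + \sum_{k=1}^{K} \alpha_k T_k(x), \qquad \hat{\alpha} = \arg\min_{\alpha} \sum_{i=1}^{N} L\Big(y_i,\; \alpha_0 + \sum_{k=1}^{K} \alpha_k T_k(x_i)\Big) + \lambda \sum_{k=1}^{K} |\alpha_k|. $$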
However, when the number of basis functions becomes very large, solving the optimization problem with the lasso penalty directly is not feasible. In this case, the following forward-stagewise approach can be used as an approximation to the lasso-penalized fit.
This algorithm leverages the forward-stagewise idea: as in the regression case, at each step it picks the tree most correlated with the current residual. The number of iterations $M$ is inversely related to $\lambda$; $\lambda = \infty$ corresponds to the initialization of the forward-stagewise algorithm, where every coefficient is zero ($M = 0$).
$|\hat{\alpha}_k(\lambda)| < |\hat{\alpha}_k(0)|$ for $\lambda > 0$; equivalently, the coefficients after a finite number of steps $M$ are smaller in magnitude than those of the fully iterated, unregularized fit. When $\lambda = 0$, the regularization term disappears, so the solution becomes the least-squares solution.
When all of the basis functions $T_k$ are mutually uncorrelated, FS yields exactly the same solution as the lasso with bound parameter $t = \sum_k |\alpha_k|$. (Even if the basis functions are correlated, FS still agrees with the lasso whenever $\hat{\alpha}_k(\lambda)$ is a monotone function of $\lambda$.) The penalty parameter $\lambda$ is inversely related to the bound $t$ in the Lagrangian formulation. In the right panel of the figure, $M = 250$ and $\varepsilon = 0.01$.
"Bet on Sparsity" Principle
Shrinkage with an $L_1$ penalty is better suited to sparse situations, where there are few basis functions with nonzero coefficients.
Dense scenario: 10,000 data points and a linear combination of a million trees, with coefficients drawn from a Gaussian distribution. → Ridge works better than the lasso.
Sparse scenario: 10,000 data points and a linear combination of 1,000 trees with nonzero coefficients. → The lasso works well. (A scaled-down simulation sketch of both scenarios follows below.)
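The sketch below is a rough, scaled-down simulation in the spirit of these two scenarios, using plain linear features instead of trees and far smaller sizes than the book's experiment; the dimensions, coefficient scales, and regularization strengths are arbitrary choices, so the point is only to watch how the two penalties behave in each regime.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 200, 500                  # p > n, a scaled-down stand-in for the book's setup
X = rng.standard_normal((n, p))

def compare(beta, label):
    y = X @ beta + rng.standard_normal(n)                      # unit-variance noise
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (Lasso(alpha=0.1, max_iter=10000), Ridge(alpha=1.0)):
        model.fit(X_tr, y_tr)
        print(label, type(model).__name__, round(model.score(X_te, y_te), 3))

# Dense scenario: every coefficient is nonzero but small (Gaussian).
compare(rng.standard_normal(p) * 0.1, "dense ")

# Sparse scenario: only a few large nonzero coefficients.
beta_sparse = np.zeros(p)
beta_sparse[:10] = 2.0
compare(beta_sparse, "sparse")
```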
The degree of sparseness is determined by the true target function and the chosen dictionary $\mathcal{T}$. The training-set size and the noise-to-signal ratio (NSR) also matter: larger training sets (and a lower NSR) allow coefficients to be estimated with smaller standard errors, so more of them can be supported. Increasing the size of the dictionary may lead to a sparser representation of our function, but it also makes the search harder and increases the variance.
In this example, the NSR is defined as $\mathrm{Var}(Y \mid \eta(X)) / \mathrm{Var}(\eta(X))$. The numerator is the variance of $Y$ around the true model (the unexplained part), and the denominator is the variance of the model itself. The larger the NSR, the larger the proportion of unexplainable error. It is well known that the lasso works well in sparse settings.
The lasso suffers somewhat from multicollinearity (you can check the reason in Ex. 3.28): if the design contains an exact copy $X_j^* = X_j$ of a variable whose lasso coefficient is $a$, the solution is no longer unique; any split of $a$ across the two copies with the same sign is optimal, for example $a/2$ and $a/2$, half of the original coefficient each.
Hastie, Taylor, Tibshirani and Walther, "Forward Stagewise Regression and the Monotone Lasso", Electronic Journal of Statistics, Vol. 1 (2007), 1-29.
The monotone lasso coefficient path $\beta(\ell)$ for a dataset $\tilde{X} = [X, -X]$ is the solution to the differential equation $\partial \beta / \partial \ell = \rho^{ml}(\beta(\ell))$. The move direction $\rho^{ml}(\beta)$ is standardized to have unit $L_1$ norm, so $\ell$ measures the $L_1$ arc length of the monotone lasso path.
The margin of a fitted model $f(x) = \sum_k \alpha_k T_k(x)$ is defined as $m(f) = \min_i y_i f(x_i) / \sum_k |\alpha_k|$, the smallest $L_1$-normalized margin over the training points.
These steps can be seen as a way of post-processing boosting or random forests, because in the first step the tree ensemble $T_L = \{T_1, \dots, T_L\}$ has already been fitted by gradient boosting or a random forest.
The measure of (lack of) relevance of a candidate basis function is evaluated on the training data by $Q(\gamma) = \min_{c_0, c_1} \sum_{i=1}^{N} L(y_i, c_0 + c_1 b(x_i; \gamma))$. The global minimizer is $\gamma^* = \arg\min_{\gamma \in \Gamma} Q(\gamma)$, so $Q(\gamma) \ge Q(\gamma^*)$ for any $\gamma$ (a small numeric example is sketched after the next two points).
$\sigma = E_{\mathcal{S}}[Q(\gamma) - Q(\gamma^*)]$ is the characteristic width of the randomized sampling scheme $\mathcal{S}$.
Too narrow a $\sigma$ suggests that too many of the $b(x; \gamma_m)$ look alike, and are similar to $b(x; \gamma^*)$.
Too wide a $\sigma$ suggests a large spread in the $b(x; \gamma_m)$, but possibly consisting of many irrelevant cases.
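Below is a small numeric sketch of this relevance measure under squared-error loss, where each candidate basis function b(x; γ) is a regression stump indexed by γ = (split variable, split point); the data, the grid of candidates, and the stump form are all made up just to make Q(γ) and γ* concrete.

```python
import numpy as np

def Q(y, b):
    """Q(gamma) = min_{c0, c1} sum_i (y_i - c0 - c1 * b_i)^2 : the residual sum
    of squares of a simple linear regression of y on the basis function b."""
    A = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 3))
y = np.where(X[:, 0] > 0.5, 2.0, -1.0) + 0.3 * rng.standard_normal(300)

# Candidate basis functions: stumps b(x; gamma) = 1{x_j > s} on a grid of (j, s).
candidates = [(j, s) for j in range(3) for s in np.linspace(-1, 1, 9)]
q_values = {g: Q(y, (X[:, g[0]] > g[1]).astype(float)) for g in candidates}

gamma_star = min(q_values, key=q_values.get)   # global minimizer over the grid
print("gamma* =", gamma_star, " Q(gamma*) =", round(q_values[gamma_star], 2))
```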
$S_m(\eta)$ refers to a subsample of $N \times \eta$ ($\eta \in (0, 1]$) of the training observations, typically drawn without replacement.
$\nu \in [0, 1]$ introduces memory into the randomization process; the larger $\nu$, the more the procedure avoids basis functions $b(x; \gamma)$ similar to those found before.
The randomization schemes we have already dealt with are special cases of ISLE ensemble generation:
Bagging: sampling with replacement, with $\eta = 1$ and $\nu = 0$. Sampling without replacement with $\eta = 1/2$ is approximately equivalent to sampling with replacement with $\eta = 1$, and the former is much more efficient.
Random Forest: more randomness is introduced through the selection of the splitting variable. Reducing $\eta$ below $1/2$ plays a role similar to using a smaller $m$ (the number of candidate split variables) in random forests.
Gradient Boosting: boosting with shrinkage uses $\eta = 1$, but typically does not produce sufficient width $\sigma$.
Stochastic Gradient Boosting: follows this recipe closely, thanks to its subsampling of the training data.
The authors recommend $\nu = 0.1$ and $\eta \le 1/2$ for the Importance Sampled Learning Ensemble (ISLE); a sketch of the generation loop is given below.
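The sketch uses shallow regression trees and squared-error loss, so fitting to the current residual stands in for the loss minimization at each step; the function name, default values, and toy data are illustrative, and the ISLE lasso post-processing over the generated trees is only indicated in a comment.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def generate_ensemble(X, y, M=200, nu=0.1, eta=0.5, max_leaf_nodes=4, seed=0):
    """Generate basis functions b(x; gamma_m), m = 1..M (sketch).

    Each tree is fit to the residual of the memory-weighted ensemble so far,
    on a random subsample S_m(eta) of size N * eta drawn without replacement.
    nu = 0 means no memory (each tree ignores the previous ones); nu = 1 is
    plain sequential boosting-style fitting.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    f = np.zeros(n)                      # memory-weighted ensemble fit so far
    trees = []
    for m in range(M):
        idx = rng.choice(n, size=int(n * eta), replace=False)   # S_m(eta)
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                     random_state=m)
        tree.fit(X[idx], (y - f)[idx])   # squared-error fit to the current residual
        f += nu * tree.predict(X)
        trees.append(tree)
    # ISLE would now post-process: fit the coefficients alpha_k of all M trees
    # jointly with the lasso, as in the regularization-path discussion above.
    return trees

# Toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.3 * rng.standard_normal(500)
print(len(generate_ensemble(X, y)), "trees generated")
```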
Whether a setting is sparse or dense can be measured with the noise-to-signal ratio (NSR).
Revisit the Gradient Tree Boosting Algorithm
In boosted trees, a single tree is formally expressed as $T(x; \Theta) = \sum_{j=1}^{J} \gamma_j I(x \in R_j)$ with parameters $\Theta = \{R_j, \gamma_j\}_1^J$. The parameters are found by solving $\hat{\Theta} = \arg\min_{\Theta} \sum_{j=1}^{J} \sum_{x_i \in R_j} L(y_i, \gamma_j)$.
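For reference, here is a compact sketch of the gradient tree boosting loop being revisited, under squared-error loss (so the negative gradient is simply the residual) and with a shrinkage factor; the function and parameter names are illustrative, and a library tree learner stands in for the explicit search over $\Theta = \{R_j, \gamma_j\}$.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_tree_boost(X, y, M=100, shrinkage=0.1, J=8):
    """Gradient tree boosting with squared-error loss (sketch).

    Each round fits a J-terminal-node tree T(x; Theta_m) to the negative
    gradient (here the residual) and adds it to the fit with shrinkage.
    """
    f0 = y.mean()
    f = np.full(len(y), f0)                # initial constant fit
    trees = []
    for m in range(M):
        residual = y - f                   # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_leaf_nodes=J, random_state=m)
        tree.fit(X, residual)              # learns regions R_j and values gamma_j
        f += shrinkage * tree.predict(X)
        trees.append(tree)
    return f0, trees

def boosted_predict(f0, trees, X, shrinkage=0.1):
    return f0 + shrinkage * sum(t.predict(X) for t in trees)

# Toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 4))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.2 * rng.standard_normal(400)
f0, trees = gradient_tree_boost(X, y)
print("training MSE:", round(float(np.mean((y - boosted_predict(f0, trees, X)) ** 2)), 3))
```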