OVR (One vs Rest): Red vs. [Blue, Green] - Logistic regression
OVO (One vs One): Red vs. Blue - SVM
ECOC: Random code assignment
In multiclass classification we need to encode the categorical response into several dummy variables. There are three ways to do this encoding, and ECOC (error-correcting output codes) is introduced as the one that adds randomness.
Random code assignment worked as well as optimally constructed error-correcting codes. The idea is that the redundant "error-correcting" bits allow for some inaccuracies and can improve performance.
Learn a separate classifier for each of the L = 15 columns (bit positions) of the coding matrix C.
At a test point x, $\hat{p}_l(x)$ is the predicted probability of a one for the l-th response.
$\delta_k(x) = \sum_{l=1}^{L} \lvert C_{kl} - \hat{p}_l(x) \rvert$ is the discriminant function for class k; we classify to the class with the smallest $\delta_k(x)$.
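As a quick illustration of this decoding rule, here is a minimal sketch in Python; the 4-class, 7-bit code matrix and the probability vector are made up for the example, and only the $\delta_k(x)$ computation follows the formula above.

```python
import numpy as np

# Illustrative code matrix: C[k, l] is the target bit for class k in the l-th binary problem.
C = np.array([[1, 1, 1, 1, 1, 1, 1],
              [0, 0, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1, 0]])          # K = 4 classes, L = 7 bits

def ecoc_predict(C, p_hat):
    """p_hat: length-L array of predicted P(bit = 1) for one test point x."""
    # delta_k(x) = sum_l |C[k, l] - p_hat_l(x)|; classify to the smallest delta.
    delta = np.abs(C - p_hat).sum(axis=1)
    return np.argmin(delta), delta

p_hat = np.array([0.9, 0.8, 0.7, 0.9, 0.2, 0.1, 0.6])   # outputs of the L classifiers
k_hat, delta = ecoc_predict(C, p_hat)
print(k_hat, delta)
```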
Boosting and Regularization Paths
Forward-stagewise (FS) regression
Forward-stagewise regression is a constrained version of forward-stepwise regression. At each step, the algorithm identifies the variable most correlated with the current residual. The specific algorithm is given on page 5 of the paper referenced below, which presents forward-stagewise regression in detail.
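A minimal sketch of incremental forward-stagewise regression, assuming standardized predictors; the step size ε, the number of steps, and the simulated data are illustrative choices, not taken from the paper.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=1000):
    """Incremental forward-stagewise regression (sketch; X assumed standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - y.mean()                       # start from the centered response
    path = []
    for _ in range(n_steps):
        corr = X.T @ r                     # (unnormalized) correlation with the residual
        j = np.argmax(np.abs(corr))        # most correlated predictor
        delta = eps * np.sign(corr[j])     # small step in that coordinate
        beta[j] += delta
        r -= delta * X[:, j]               # update the residual
        path.append(beta.copy())
    return beta, np.array(path)

# Example: a sparse linear signal recovered step by step (toy data).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X = (X - X.mean(0)) / X.std(0)
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.standard_normal(100)
beta, path = forward_stagewise(X, y, eps=0.01, n_steps=2000)
print(np.round(beta, 2))
```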
Generalized Additive Models
Effects are often not linear, so f may be a non-linear function.
$\mathcal{T} = \{T_k\}$ is the dictionary of all possible J-terminal-node regression trees that can be realized on the training data as basis functions in $\mathbb{R}^p$. The coefficients $\alpha_k$ in the linear expansion $f(x) = \sum_k \alpha_k T_k(x)$ are estimated by least squares with a lasso penalty.
However, when the number of basis functions becomes very large, solving the optimization problem with the lasso penalty directly is not feasible. In this case, the following forward-stagewise approach can be used as an approximation to the lasso-penalized fit.
This algorithm leverages the forward-stagewise idea. As in the regression case, at each step it picks the tree most correlated with the current residual. The number of iterations M is inversely related to λ: when λ = ∞ the forward-stagewise algorithm is at its initialization, where every coefficient is zero (M = 0).
$\lvert \hat{\alpha}_k(\lambda) \rvert < \lvert \hat{\alpha}_k(0) \rvert$, and likewise $\lvert \hat{\alpha}_k(M) \rvert < \lvert \hat{\alpha}_k(0) \rvert$, where $\hat{\alpha}_k(0)$ denotes the unregularized solution. When λ = 0 the regularization term disappears, so the solution becomes the least-squares solution.
When all of the basis functions $T_k$ are mutually uncorrelated, FS gives exactly the lasso solution with bound parameter $t = \sum_k \lvert \alpha_k \rvert$. (Even when they are correlated, FS matches the lasso as long as each $\hat{\alpha}_k(\lambda)$ is a monotone function of λ.) The regularization parameter λ is inversely related to the bound t. In the right panel of the figure, M = 250 and ε = 0.01.
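The same ε-updates can be run over a dictionary of trees, treating each fitted tree's predictions as one basis column. Below is a rough sketch under squared-error loss; the bootstrap-tree dictionary, ε, and M are illustrative assumptions rather than the book's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Build an illustrative dictionary of small trees, each giving one basis column T_k(x_i).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 5))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)

K = 200
basis = np.empty((X.shape[0], K))
for k in range(K):
    idx = rng.integers(0, X.shape[0], X.shape[0])          # bootstrap sample
    tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=k).fit(X[idx], y[idx])
    basis[:, k] = tree.predict(X)

# Forward-stagewise updates over the tree dictionary.
alpha = np.zeros(K)
r = y - y.mean()
eps, M = 0.01, 500                                          # M is inversely related to lambda
for _ in range(M):
    corr = basis.T @ r
    k_star = np.argmax(np.abs(corr))                        # tree most correlated with residual
    alpha[k_star] += eps * np.sign(corr[k_star])
    r -= eps * np.sign(corr[k_star]) * basis[:, k_star]

print("nonzero coefficients:", np.sum(alpha != 0))
```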
"Bet on Sparsity" Principle
Usually, shrinkage with an $L_1$ penalty is better suited to sparse situations, where there are few basis functions with nonzero coefficients.
Dense scenario: 10,000 data points and a linear combination of a million trees with coefficients drawn from a Gaussian distribution. → Ridge works better than the lasso.
Sparse scenario: 10,000 data points and a linear combination of 1,000 trees with nonzero coefficients. → The lasso works well.
The degree of sparseness is determined by the true target function and the chosen dictionary $\mathcal{T}$. The noise-to-signal ratio (NSR) and the size of the training set are also key, because larger training sets allow coefficients to be estimated with smaller standard errors. Enlarging the dictionary may lead to a sparser representation of our function, but also to higher variance in the search.
In this example, the NSR is defined as $\mathrm{Var}(Y \mid \eta(X)) / \mathrm{Var}(\eta(X))$. The numerator is the conditional variance of Y (the unexplained part) and the denominator is the variance of the model. The larger the NSR, the larger the share of unexplainable error. The lasso is known to work well in sparse settings.
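A small simulation sketch (not the experiment described above): generate a sparse linear signal η(X), estimate the NSR as $\mathrm{Var}(Y \mid \eta(X)) / \mathrm{Var}(\eta(X))$, and compare lasso and ridge on held-out data. The sample sizes and penalty strengths are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p, n_nonzero = 500, 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_nonzero] = rng.standard_normal(n_nonzero)           # sparse true coefficients
eta = X @ beta                                               # the signal eta(X)
noise = rng.standard_normal(n)
y = eta + noise
print("NSR ~", noise.var() / eta.var())                      # Var(Y | eta(X)) / Var(eta(X))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Lasso(alpha=0.05), Ridge(alpha=1.0)):
    model.fit(X_tr, y_tr)
    mse = np.mean((model.predict(X_te) - y_te) ** 2)
    print(type(model).__name__, "test MSE:", round(mse, 3))
```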
The lasso suffers somewhat from multicollinearity (the reason is worked out in Ex. 3.28): when the design contains an exact copy $X_{j^*} = X_j$, the lasso splits the coefficient between the two copies, e.g. into $a/2$ and $a/2$, each half of the original coefficient $a$.
The monotone lasso coefficient path $\beta(\ell)$ for the augmented dataset $\tilde{X} = [X, -X]$ is the solution to the differential equation $\partial \beta / \partial \ell = \rho^{ml}(\beta(\ell))$. The move direction $\rho^{ml}(\beta)$ is standardized to have unit $L_1$ norm, so ℓ measures $L_1$ arc length along the monotone lasso path.
The margin of a fitted model $f(x) = \sum_k \alpha_k T_k(x)$ is defined as $m(f) = \min_i \, y_i f(x_i) / \sum_k \lvert \alpha_k \rvert$.
These steps can be seen as a way of post-processing boosting or random forests, because in the first step the tree collection $\mathcal{T}_L$ has already been fitted by gradient boosting or a random forest.
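A sketch of this two-step post-processing under squared-error loss: gradient boosting grows the tree collection, and a lasso on the individual trees' training predictions re-weights (and prunes) them. The hyperparameters below are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
X = rng.standard_normal((400, 8))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.3 * rng.standard_normal(400)

# Step 1: grow the ensemble with gradient boosting.
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=2, learning_rate=0.1)
gbm.fit(X, y)

# Each column is one tree's prediction T_k(x_i) (estimators_ has shape (M, 1) for regression).
tree_preds = np.column_stack([est[0].predict(X) for est in gbm.estimators_])

# Step 2: lasso post-processing re-weights the trees and drops many of them.
post = Lasso(alpha=0.01).fit(tree_preds, y)
print("trees kept by the lasso:", np.sum(post.coef_ != 0), "of", tree_preds.shape[1])
```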
The measure of (lack of) relevance is evaluated on the training data with the loss function: $Q(\gamma) = \min_{c_0, c_1} \sum_{i=1}^{N} L(y_i, c_0 + c_1 b(x_i; \gamma))$. The global minimizer is $\gamma^* = \arg\min_{\gamma \in \Gamma} Q(\gamma)$, so $Q(\gamma) \ge Q(\gamma^*)$ for every γ.
$\sigma = E_S[Q(\gamma) - Q(\gamma^*)]$ measures the width of the sampling distribution.
Too narrow a σ suggests that too many of the $b(x; \gamma_m)$ look alike, all similar to $b(x; \gamma^*)$.
Too wide a σ suggests a large spread in the $b(x; \gamma_m)$, but possibly including many irrelevant ones.
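A sketch of how $Q(\gamma)$ and the width σ could be estimated empirically under squared-error loss, using stumps grown on random half-samples as candidate basis functions; approximating $Q(\gamma^*)$ by the best Q found among the candidates is an assumption made only for this illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 5))
y = 2 * (X[:, 0] > 0) + 0.5 * rng.standard_normal(300)

def Q(b, y):
    # Q(gamma) under squared loss: residual sum of squares of the best fit c0 + c1 * b(x).
    A = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

q_values = []
for m in range(100):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)   # random half-sample
    stump = DecisionTreeRegressor(max_depth=1, random_state=m).fit(X[idx], y[idx])
    q_values.append(Q(stump.predict(X), y))

q_values = np.array(q_values)
# Spread of Q(gamma_m) around the best one found: a stand-in for the width sigma.
print("sigma (approx):", (q_values - q_values.min()).mean())
```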
$S_m(\eta)$ refers to a subsample of N × η (with $\eta \in (0, 1]$) of the training observations, typically drawn without replacement.
$\nu \in [0, 1]$ introduces memory into the randomization process; the larger ν, the more the procedure avoids $b(x; \gamma)$ similar to those found before.
The randomization schemes we have already dealt with are special cases of ISLE ensemble generation:
Bagging: sampling with replacement, with η = 1 and ν = 0. Sampling without replacement with η = 1/2 is roughly equivalent to sampling with replacement with η = 1, and the former is much more efficient.
Random forest: the sampling is similar, with more randomness introduced through the selection of the splitting variable. Reducing η below 1/2 has an effect similar to reducing m, the number of candidate splitting variables, in random forests.
Gradient boosting: with shrinkage it uses η = 1, but typically does not produce sufficient width σ.
Stochastic gradient boosting: follows this recipe closely, subsampling at each step.
The Importance Sampled Learning Ensemble (ISLE) is recommended with ν = 0.1 and η ≤ 1/2; a sketch of the recipe follows below.
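A sketch of the generic ISLE ensemble-generation recipe under squared-error loss, using ν = 0.1 and η = 1/2 as recommended; the tree size, M, and the simulated data are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def isle_ensemble(X, y, M=200, nu=0.1, eta=0.5, max_leaf_nodes=4, seed=0):
    rng = np.random.default_rng(seed)
    f = np.full(len(y), y.mean())              # f_0(x): best constant fit
    trees = []
    for m in range(M):
        idx = rng.choice(len(y), size=int(eta * len(y)), replace=False)  # S_m(eta), no replacement
        resid = y[idx] - f[idx]                 # residuals of the current (shrunken) fit
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                     random_state=m).fit(X[idx], resid)
        f += nu * tree.predict(X)               # nu acts as the memory into the randomization
        trees.append(tree)
    return trees                                # to be post-processed, e.g. by the lasso

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 6))
y = X[:, 0] - X[:, 1] ** 2 + 0.2 * rng.standard_normal(500)
trees = isle_ensemble(X, y)
print(len(trees), "base learners generated")
```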
Whether a setting is sparse or dense can be assessed with the noise-to-signal ratio (NSR).
Revisit the Gradient Tree Boosting Algorithm
In tree boosting, a single tree is formally expressed as $T(x; \Theta) = \sum_{j=1}^{J} \gamma_j I(x \in R_j)$ with parameters $\Theta = \{R_j, \gamma_j\}_1^J$. The parameters are found by solving $\hat{\Theta} = \arg\min_{\Theta} \sum_{j=1}^{J} \sum_{x_i \in R_j} L(y_i, \gamma_j)$.
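A sketch of gradient tree boosting for squared-error loss: each tree is fit to the current residuals (the negative gradient), and for this loss the leaf means are already the region-wise minimizers $\gamma_j$. The number of leaves J, the number of trees M, and the learning rate are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_tree_boost(X, y, M=100, J=4, lr=0.1):
    f = np.full(len(y), y.mean())                 # initial constant model
    trees = []
    for m in range(M):
        residual = y - f                          # negative gradient of 1/2 (y - f)^2
        tree = DecisionTreeRegressor(max_leaf_nodes=J, random_state=m)
        tree.fit(X, residual)                     # induces the regions R_j and leaf values gamma_j
        f += lr * tree.predict(X)                 # shrunken update
        trees.append(tree)
    return trees

def predict(trees, X, y_mean, lr=0.1):
    return y_mean + lr * sum(t.predict(X) for t in trees)

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(400, 3))
y = np.cos(X[:, 0]) + X[:, 1] + 0.2 * rng.standard_normal(400)
trees = gradient_tree_boost(X, y)
pred = predict(trees, X, y.mean())
print("training MSE:", round(np.mean((y - pred) ** 2), 4))
```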