Ensemble-Based and Hybrid Recommender Systems
Background
A recommender system can be viewed as a generalization of a classification algorithm. The only difference is that missing entries can occur in any column, rather than in a single target column. Nevertheless, the bias-variance structure still remains.
For reference, two steps are needed to apply a classification algorithm to a recommender system. First, one item is regarded as the target variable and the other items as independent variables. Second, the target variable is moved from its original position to the last column; then the algorithm works as usual.
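A minimal numpy sketch of this two-step reduction (the matrix and its values below are made up for illustration). Note that, unlike ordinary classification, the feature columns can themselves still contain missing entries:

```python
import numpy as np

# A hypothetical 5x4 ratings matrix; np.nan marks missing entries.
R = np.array([
    [5.0, 3.0, np.nan, 1.0],
    [4.0, np.nan, 2.0, 1.0],
    [1.0, 1.0, np.nan, 5.0],
    [np.nan, 1.0, 4.0, 4.0],
    [2.0, 5.0, 3.0, np.nan],
])

target_col = 2                         # step 1: treat one item as the target
X = np.delete(R, target_col, axis=1)   # the other items act as features
y = R[:, target_col]                   # step 2: this column plays the class-label role

train_rows = ~np.isnan(y)              # rows with an observed target: training set
test_rows = np.isnan(y)                # rows with a missing target: to be predicted
```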
How it works
When one has access to different data sources, one can make more robust inferences by combining several models.
Ensemble design: $\hat{R}_k$ is the prediction matrix of users for items by the $k$-th algorithm, where $k \in \{1, \ldots, m\}$. The final result can be a weighted average of $\hat{R}_1, \ldots, \hat{R}_m$. In some sequential ensemble algorithms, $\hat{R}_k$ may depend on the previous output $\hat{R}_{k-1}$. In other cases, outputs may not be directly combined: one output can also be used as an input to the next component.
Monolithic design: An integrated recommendation algorithm is created by using various data types.
Mixed system: a TV program is a composite entity containing multiple items, so the recommendation is created by combining items.
Key concept: You can mix the concepts of choosing optimal weights and linear regression!! 🔥
$\hat{R}_1, \ldots, \hat{R}_m$ are the completely specified ratings matrices, in which the unobserved entries are predicted by $m$ different algorithms, and the combined prediction is $\hat{R} = \sum_{i=1}^{m} \alpha_i \hat{R}_i$. To determine the values of $\alpha_1, \ldots, \alpha_m$, we need to evaluate a metric like MSE or MAE:

$$\text{MSE} = \frac{\sum_{(u,j) \in H} (\hat{r}_{uj} - r_{uj})^2}{|H|}, \qquad \text{MAE} = \frac{\sum_{(u,j) \in H} |\hat{r}_{uj} - r_{uj}|}{|H|}$$

In this expression, $H$ denotes the held-out set of user-item entries, which can be thought of as a validation or test set. For the case of MSE, we can regard $\hat{r}^1_{uj}, \ldots, \hat{r}^m_{uj}$ as independent variables and $r_{uj}$ as the target variable. Using a linear regression approach, we can solve this problem and obtain the optimal $\alpha_1, \ldots, \alpha_m$ (coefficients).
In short, you solve a linear regression problem with coefficients $\alpha_1, \ldots, \alpha_m$!!
Usually, cross-validation is also used to estimate these weights more reliably.
However, the sum-of-squares method is sensitive to the presence of noise and outliers, because large residuals are overemphasized by squaring. To correct this weakness, one can simply use the sum of absolute values instead, i.e., the MAE metric. In other words, instead of linear regression, one can switch to a robust regression method.
To generalize the weighted hybrid further, we can also add a regularization term or put constraints on the $\alpha_i$, such as non-negativity or summation to 1.
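Here is a minimal sketch of learning the weights by least squares on held-out entries; the function name fit_hybrid_weights and the matrix layout are my own conventions:

```python
import numpy as np

def fit_hybrid_weights(preds, R_holdout):
    """Learn the weights alpha_1..alpha_m by least squares on held-out entries.

    preds:     list of m fully specified prediction matrices R_hat_1..R_hat_m
    R_holdout: matrix holding the held-out observed ratings (np.nan elsewhere)
    """
    mask = ~np.isnan(R_holdout)
    # Each held-out entry contributes one regression row:
    # m model predictions (features) -> one true rating (target).
    X = np.column_stack([P[mask] for P in preds])
    y = R_holdout[mask]
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha

# Usage: alpha = fit_hybrid_weights([R_hat1, R_hat2], R_holdout)
#        R_final = alpha[0] * R_hat1 + alpha[1] * R_hat2
```

For non-negative weights, np.linalg.lstsq could be swapped for a constrained solver such as scipy.optimize.nnls; for the MAE criterion, a robust (least-absolute-deviation) regression replaces least squares, as noted above.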
Homogeneous data type and model classes
Different models are applied to the same data, and the results are aggregated into a single predicted value. This approach is robust because it averages out the individual biases of each model. The same model with different parameter settings can also be used.
Heterogeneous data type and model classes
One component of the model might be a collaborative recommender using a rating matrix, whereas another component of the model might be a content-based recommender.
To adapt bagging to collaborative filtering, we face two challenges:
No clear demarcation between training and test rows.
No clear distinction between dependent and independent columns.
To execute bagging, training sets are created with bootstrapped sampling. Bootstrapping is sampling with replacement in order to create new training data. For example, if we have the data [9, 7, 5, 3, 1], the sample mean is 5. However, we don't know the true value of the mean. By sampling from this data with replacement, we can create several data sets like these: [9, 7, 7, 1, 1], [5, 7, 3, 3, 1], [9, 7, 5, 3, 1]. By calculating the sample mean of each list, we can estimate the variance of the sample mean and also get a confidence interval, as in the sketch below. This helps with estimation when we don't know the distribution from which the data comes.
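A small numpy sketch of this exact example:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([9, 7, 5, 3, 1])                 # sample mean is 5

# Draw many bootstrap resamples (sampling with replacement) and record their means.
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)
])

print(boot_means.std())                          # estimated std. error of the mean
print(np.percentile(boot_means, [2.5, 97.5]))    # a 95% percentile confidence interval
```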
The expected fraction of rows not represented in a given bootstrapped sample is $(1 - 1/n)^n \approx 1/e \approx 0.368$. In other words, a given original row is contained in a bootstrapped sample with about 2/3 probability. Bagging averages the predictions from the individual models for given test data. It reduces the variance of the model.
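A quick numeric check of the $1/e$ claim:

```python
n = 1000
print((1 - 1 / n) ** n)   # 0.3677..., close to 1/e = 0.3679
```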
Initialization: missing entries are initialized with row averages, column averages, or the output of a simple collaborative filtering algorithm.
Update: the missing entries of each column are in turn set as the target variable, with the remaining columns as feature variables. Rows in which the target variable is observed are treated as the training set. Then the classification algorithm is simply applied to this matrix.
The update step is iteratively repeated until convergence, as in the sketch below.
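A minimal sketch of this loop, with a plain least-squares fit standing in for the classification algorithm (the name iterative_cf and the fixed iteration count are my simplifications):

```python
import numpy as np

def iterative_cf(R, n_iters=10):
    """Column-by-column iterative re-prediction of missing entries (a sketch)."""
    missing = np.isnan(R)
    filled = np.where(missing, np.nanmean(R, axis=0), R)   # init: column averages

    for _ in range(n_iters):                               # "until convergence"
        for j in range(R.shape[1]):
            y = R[:, j]
            train = ~np.isnan(y)
            if train.all() or not train.any():
                continue                                   # nothing to predict or fit
            X = np.delete(filled, j, axis=1)
            X1 = np.column_stack([np.ones(len(X)), X])     # add an intercept term
            w, *_ = np.linalg.lstsq(X1[train], y[train], rcond=None)
            filled[~train, j] = X1[~train] @ w             # re-predict missing entries
    return filled
```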
Row-wise bootstrapping: the rows of $R$ are sampled with replacement to create $q$ new ratings matrices $R_1, \ldots, R_q$. Then an existing collaborative filtering algorithm (e.g., a latent factor model) can be applied to each for prediction. The final predicted rating of an item for a user is the average over the duplicate occurrences of that user across the components. Each user will appear in at least one ensemble component with high probability when $q$ is large (see the sketch after this list).
Row-wise subsampling: similar to row-wise bootstrapping, except that the rows are sampled without replacement. The fraction of rows sampled is chosen randomly from a predefined interval, and the number of samples $q$ should be large enough to ensure that all rows are represented. Because it is difficult to predict all entries in this setting, some entries are averaged over only a small number of samples, which limits the variance reduction.
Entry-wise bagging
Entry-wise subsampling
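A minimal sketch of the row-wise bootstrapping variant described above; train_and_predict is a stand-in for any base CF algorithm (e.g., a latent factor model) that maps a ratings matrix to a fully specified prediction matrix:

```python
import numpy as np

def rowwise_bagging(R, train_and_predict, q=10, seed=0):
    """Row-wise bootstrapped bagging (a sketch)."""
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    sums = np.zeros(R.shape)
    counts = np.zeros(n)

    for _ in range(q):
        rows = rng.integers(0, n, size=n)     # sample user rows with replacement
        pred = train_and_predict(R[rows])     # train/predict on the resampled matrix
        for i, r in enumerate(rows):          # average over duplicate occurrences
            sums[r] += pred[i]
            counts[r] += 1

    counts[counts == 0] = 1                   # guard: users never sampled stay zero
    return sums / counts[:, None]
```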
Randomness Injection
Injection into a neighborhood model: instead of using the top-$k$ nearest neighbors, one can randomly select $k$ neighbors from the top-$\alpha \cdot k$ neighbors, with $\alpha > 1$ (see the sketch after this list).
Injection into a matrix factorization model: the factor matrices are initialized randomly, so different runs converge to different solutions whose predictions can be averaged.
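A sketch of the neighborhood variant; the name randomized_neighbors and the default $\alpha = 2$ are assumptions for illustration:

```python
import numpy as np

def randomized_neighbors(sim_row, k, alpha=2.0, rng=None):
    """Pick k neighbors uniformly at random from the top alpha*k candidates."""
    rng = rng or np.random.default_rng()
    pool = np.argsort(sim_row)[::-1][: int(alpha * k)]   # top alpha*k by similarity
    return rng.choice(pool, size=k, replace=False)
```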
In the context of model selection, switching hybrids are most commonly used in recommender systems. The basic idea is that some models work better in earlier stages, while other models work better in later stages.
Bucket-of-Models
A fraction (e.g., 25% to 33%) of the specified entries is held out, and the various models are applied to the resulting matrix. The held-out set is then used as a validation set to evaluate a metric like MSE or MAE.
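A minimal sketch, assuming each candidate model is a function from a ratings matrix to a full prediction matrix:

```python
import numpy as np

def pick_best_model(R, models, holdout_frac=0.3, seed=0):
    """Bucket-of-models: hold out entries, keep whichever model scores best."""
    rng = np.random.default_rng(seed)
    obs = np.argwhere(~np.isnan(R))                       # indices of specified entries
    held = obs[rng.random(len(obs)) < holdout_frac]       # ~30% held out

    R_train = R.copy()
    R_train[held[:, 0], held[:, 1]] = np.nan              # hide held-out ratings

    def mse(model):
        pred = model(R_train)
        err = pred[held[:, 0], held[:, 1]] - R[held[:, 0], held[:, 1]]
        return float(np.mean(err ** 2))

    return min(models, key=mse)                           # model with lowest MSE
```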
Cascade hybrids refine the recommendations made by the previous recommender:
The first recommender provides a rough ranking and may eliminate many potential items.
The second recommender uses this rough ranking to refine it and break ties.
The resulting recommendation is presented to the user.
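A minimal sketch of the two stages, with hypothetical stage-one scores and a stage-two scoring function:

```python
import numpy as np

def cascade_rank(scores_stage1, score_stage2, keep=100):
    """Two-stage cascade: stage 1 prunes, stage 2 re-ranks the survivors.

    scores_stage1: rough scores for all items from the first recommender
    score_stage2:  function assigning refined scores to a candidate id array
    """
    candidates = np.argsort(scores_stage1)[::-1][:keep]   # coarse top-`keep` pruning
    refined = score_stage2(candidates)                    # second recommender refines
    return candidates[np.argsort(refined)[::-1]]          # final ranking for the user
```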
The weight associated with the $(u, j)$-th entry of the ratings matrix is denoted by $w_{uj}$.
$\hat{r}_{uj}$ is different from $r_{uj}$ by at least a predefined amount $\delta$ -> incorrect
$\epsilon$ is the fraction of specified ratings in $R$ whose predicted value is incorrect.
For correctly predicted entries, the weight is reduced. But for incorrectly predicted entries, the weight stays unchanged.
The weights are always normalized to sum to 1 in each iteration.
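A sketch of one reweighting step; the multiplicative factor $\epsilon / (1 - \epsilon)$ is an assumption borrowed from classical AdaBoost, since the text above only says that correct entries are down-weighted:

```python
import numpy as np

def update_weights(w, correct):
    """One AdaBoost-style reweighting step (a sketch of the idea above).

    w:       weights of the specified entries (normalized to sum to 1)
    correct: boolean array, True where the prediction fell within delta
    """
    eps = w[~correct].sum()                  # weighted fraction predicted incorrectly
    beta = eps / (1.0 - eps)                 # shrink factor, < 1 when eps < 0.5
    w = np.where(correct, w * beta, w)       # reduce weights of correct entries only
    return w / w.sum()                       # renormalize to sum to 1
```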
Neighborhood-based algorithm
Latent factor models
The weighted sum of squared errors has to be minimized.
Generate some recommendations, like "related authors" and "related titles", for items by using a collaborative recommender system. Then a content-based recommender can leverage these features.
Use a content-based recommender to fill in the missing entries so that the ratings matrix is no longer sparse. Then a collaborative recommender is applied to this dense ratings matrix. The final prediction is a weighted sum of the content-based and collaborative predictions (a sketch follows).
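A minimal sketch of this second strategy, with content_predict and cf_predict as stand-ins for the two recommenders:

```python
import numpy as np

def content_then_collaborative(R, content_predict, cf_predict, w=0.5):
    """Densify with a content-based model, run CF, then mix the two (a sketch).

    content_predict: content-based recommender returning a dense matrix
    cf_predict:      collaborative recommender applied to the densified matrix
    w:               weight of the content-based prediction in the final sum
    """
    content = content_predict(R)
    R_dense = np.where(np.isnan(R), content, R)   # fill missing entries
    return w * content + (1 - w) * cf_predict(R_dense)
```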
This is a way to integrate various data sources into a unified representation. For example, let $R$ be an implicit feedback ratings matrix, and let $C$ be a content matrix whose columns describe each item by non-negative frequencies of words. $W$ is an item-item coefficient matrix such that $\hat{R} = RW$. In this case, the objective function becomes:

$$\min_W \; \|R - RW\|_F^2 + \beta \|C - CW\|_F^2 + \lambda \|W\|_F^2$$

where $\beta$ balances the collaborative and content parts, and $\lambda$ controls regularization.
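A sketch of evaluating this objective and taking one gradient step on $W$; beta, lam, and lr are assumed hyperparameters, and any constraints (e.g., non-negativity of $W$) are omitted:

```python
import numpy as np

def objective(W, R, C, beta=1.0, lam=0.1):
    """J(W) = ||R - RW||_F^2 + beta * ||C - CW||_F^2 + lam * ||W||_F^2"""
    return (np.linalg.norm(R - R @ W, "fro") ** 2
            + beta * np.linalg.norm(C - C @ W, "fro") ** 2
            + lam * np.linalg.norm(W, "fro") ** 2)

def gradient_step(W, R, C, beta=1.0, lam=0.1, lr=1e-3):
    """One plain gradient-descent step on J (a sketch, not a tuned solver)."""
    grad = (-2 * R.T @ (R - R @ W)
            - 2 * beta * C.T @ (C - C @ W)
            + 2 * lam * W)
    return W - lr * grad
```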