
RDA


Last updated 3 years ago


✏️ Definition

RDA (Regularized Discriminant Analysis) is a compromise model between LDA and QDA.

$$\hat{\Sigma}_k(\alpha)=\alpha\hat{\Sigma}_k+(1-\alpha)\hat{\Sigma}, \quad \alpha \in[0,1]$$

This is a convex combination (an internal division of two points), the most common way to blend two models. $\hat{\Sigma}$ is the pooled covariance matrix from LDA, and $\hat{\Sigma}_k$ is the class-$k$ covariance from QDA. If we further replace $\hat{\Sigma}$ with $\hat{\Sigma}(\gamma)$ below, the estimator generalizes to a two-parameter family $\hat{\Sigma}(\alpha,\gamma)$; the extra parameter simply adds another level of shrinkage.

$$\hat{\Sigma}(\gamma)=\gamma\hat{\Sigma}+(1-\gamma)\hat{\sigma}^2 I, \quad \gamma \in[0,1]$$

The scalar target $\hat{\sigma}^2 I$ can likewise be swapped for other shrinkage targets, such as $\mathrm{diag}(\hat{\Sigma})$, $\hat{\Sigma}/p, \dots$
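The two shrinkage steps above can be sketched directly in NumPy. This is a minimal illustration, not a full classifier: the function name `rda_covariance` is our own, and we pick $\hat{\sigma}^2 = \mathrm{tr}(\hat{\Sigma})/p$ as the scalar target (one common choice; as noted above, other targets are possible).

```python
import numpy as np

def rda_covariance(X, y, alpha, gamma):
    """Return the RDA covariance estimate for each class.

    Sigma(gamma)          = gamma * Sigma_pooled + (1 - gamma) * sigma^2 * I
    Sigma_k(alpha, gamma) = alpha * Sigma_k + (1 - alpha) * Sigma(gamma)

    alpha=1 recovers QDA (per-class covariances);
    alpha=0, gamma=1 recovers LDA (pooled covariance).
    """
    classes = np.unique(y)
    n, p = X.shape

    # Per-class MLE covariances and the pooled covariance from LDA.
    covs = {}
    pooled = np.zeros((p, p))
    for k in classes:
        Xk = X[y == k]
        covs[k] = np.cov(Xk, rowvar=False, bias=True)
        pooled += len(Xk) * covs[k]
    pooled /= n

    # Shrink the pooled covariance toward a scalar matrix (gamma step).
    sigma2 = np.trace(pooled) / p  # assumed choice of sigma^2
    pooled_g = gamma * pooled + (1 - gamma) * sigma2 * np.eye(p)

    # Blend each class covariance with the shrunk pooled one (alpha step).
    return {k: alpha * covs[k] + (1 - alpha) * pooled_g for k in classes}
```

Each returned matrix can then be plugged into the usual Gaussian discriminant functions; scikit-learn's `LinearDiscriminantAnalysis(solver='lsqr', shrinkage=...)` implements the $\gamma$-style shrinkage toward a scalar matrix.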