Kernel method
The kernel method estimates a function by smoothing: the value at a target point x is estimated from nearby observations, where "near" is defined by a distance, and each observation is weighted according to that distance.
KNN can be viewed as a kernel method because it uses distance to choose the nearest k points.
N_k(x) is the set of the k points nearest to x in Euclidean distance, and the estimate is the average of the y-values over that set. This estimate is not continuous in x, because the neighborhood N_k(x) changes discontinuously as x moves. KNN fixes the number of points (k) rather than a radius, so it differs from a metric-based window.
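The kNN average above can be sketched as follows. This is a minimal illustration, not the full method from the text; the function name `knn_average` and the toy data are my own.

```python
import numpy as np

def knn_average(x0, x, y, k=5):
    """k-nearest-neighbor average: mean of the y-values whose x lies in
    N_k(x0), the set of the k points closest to x0 in Euclidean distance."""
    idx = np.argsort(np.abs(x - x0))[:k]  # indices of the k nearest points
    return y[idx].mean()

# toy data: noisy sine curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, 50))
y = np.sin(x) + 0.1 * rng.normal(size=50)
print(knn_average(1.5, x, y, k=5))
```

Sliding x0 over a grid and plotting the estimates makes the discontinuity visible: the curve jumps each time a point enters or leaves N_k(x0).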
|      | Metric | kNN |
| ---- | ------ | --- |
| Bias | constant | inverse of local density |
| Var  | inverse of local density | constant |

Tied values are handled with additional weights.
Metric: define the distance first; the points the window contains follow from it.

KNN: define the points (the nearest k) first; the window width follows from them.
The Nadaraya-Watson kernel-weighted average with the Epanechnikov quadratic kernel.
When lambda becomes bigger, more points are considered. The distance enters through a quadratic form, and lambda acts as a scaling factor: it sets where the quadratic kernel falls to zero, i.e., the width of the window around x0.
- D(t): density function
- f: distance function
D(t) is a density function, so it has to integrate to 1 over its support t. The constant 3/4 in the Epanechnikov kernel is exactly what makes this hold.
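Putting the pieces together, the Nadaraya-Watson average with the Epanechnikov kernel D(t) = (3/4)(1 - t^2) on |t| <= 1 can be sketched as below. The function names are my own.

```python
import numpy as np

def epanechnikov(t):
    """D(t) = 3/4 (1 - t^2) for |t| <= 1, else 0; integrates to 1."""
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

def nadaraya_watson(x0, x, y, lam=0.5):
    """Kernel-weighted average: sum_i K(x0, xi) yi / sum_i K(x0, xi),
    with K_lam(x0, x) = D(|x - x0| / lam)."""
    w = epanechnikov(np.abs(x - x0) / lam)
    return np.sum(w * y) / np.sum(w)
```

Unlike the kNN average, this estimate varies continuously in x0, because the weights shrink smoothly to zero as points leave the window.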
The interesting thing is that we don't want the density of a single point, but a density over the distance between two points. The support of the probability set function is a set, and here that set is built from distances.
When x = x0, the density is at its mode. As two points get closer, the density grows: points closer to the fixed x0 receive more weight.
More generally, the kernel can be defined as below:
Why compose with D instead of using the distance function directly? D turns a raw distance into a bounded weight that is largest at zero and vanishes outside the window. In the general form K(x0, x) = D(|x - x0| / h(x0)), fixing lambda gives h(x0) = lambda, a metric window; for kNN, h(x0) is the distance from x0 to its k-th nearest point.
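The general form can be sketched with the window width passed in as a function, so the same code covers both the metric and the kNN window. This is an illustrative sketch; `adaptive_kernel` and the sample data are my own.

```python
import numpy as np

def adaptive_kernel(x0, x, h):
    """K(x0, x) = D(|x - x0| / h(x0)) with the Epanechnikov D.
    h maps x0 to the window width at x0."""
    t = np.abs(x - x0) / h(x0)
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

x = np.array([0.0, 0.4, 0.8, 2.0])
# metric window: h(x0) = lambda, a constant
w_metric = adaptive_kernel(0.0, x, lambda x0: 1.0)
# kNN window: h(x0) = distance from x0 to its k-th nearest point (k = 3)
w_knn = adaptive_kernel(0.0, x, lambda x0: np.sort(np.abs(x - x0))[2])
```

With the metric window the weights ignore local density; with the kNN window the width stretches until exactly k points are covered.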