SVM Kernel Functions 淡淡的烟草味﹌ 2022-06-18 01:41

### Kernel Functions ###

Below is a list of some kernel functions available from the existing literature. As was the case with previous articles, the [LaTeX notation][] for each formula below is readily available from its [alternate text html tag][alternate text html tag]. I cannot guarantee all of them are perfectly correct, so use them at your own risk. Most of them have links to articles where they were originally used or proposed.

### 1. Linear Kernel ###

The Linear kernel is the simplest kernel function. It is given by the inner product $\langle x, y \rangle$ plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, i.e. [KPCA][KPCA] with a linear kernel is the same as [standard PCA][standard PCA].

$$k(x, y) = x^T y + c$$

### 2. Polynomial Kernel ###

The Polynomial kernel is a non-stationary kernel. Polynomial kernels are well suited for problems where all the training data is normalized.

$$k(x, y) = (\alpha x^T y + c)^d$$

Adjustable parameters are the slope alpha, the constant term c and the polynomial degree d.

### 3. Gaussian Kernel ###

The Gaussian kernel is an example of a radial basis function kernel.

$$k(x, y) = \exp\left(-\frac{\lVert x-y \rVert^2}{2\sigma^2}\right)$$

Alternatively, it could also be implemented using

$$k(x, y) = \exp\left(-\gamma \lVert x-y \rVert^2\right)$$

The adjustable parameter sigma plays a major role in the performance of the kernel, and should be carefully tuned to the problem at hand. If overestimated, the exponential will behave almost linearly and the higher-dimensional projection will start to lose its non-linear power.
On the other hand, if underestimated, the function will lack regularization and the decision boundary will be highly sensitive to noise in the training data.

### 4. Exponential Kernel ###

The Exponential kernel is closely related to the Gaussian kernel, with only the square of the norm left out. It is also a radial basis function kernel.

$$k(x, y) = \exp\left(-\frac{\lVert x-y \rVert}{2\sigma^2}\right)$$

### 5. Laplacian Kernel ###

The Laplacian kernel is completely equivalent to the exponential kernel, except for being less sensitive to changes in the sigma parameter. Being equivalent, it is also a radial basis function kernel.

$$k(x, y) = \exp\left(-\frac{\lVert x-y \rVert}{\sigma}\right)$$

It is important to note that the observations made about the sigma parameter for the Gaussian kernel also apply to the Exponential and Laplacian kernels.

### 6. ANOVA Kernel ###

The ANOVA kernel is also a radial basis function kernel, just as the Gaussian and Laplacian kernels. It is said to [perform well in multidimensional regression problems][perform well in multidimensional regression problems] (Hofmann, 2008).

$$k(x, y) = \sum_{k=1}^n \exp(-\sigma (x^k - y^k)^2)^d$$

### 7. Hyperbolic Tangent (Sigmoid) Kernel ###

The Hyperbolic Tangent kernel is also known as the Sigmoid kernel and as the Multilayer Perceptron (MLP) kernel. The Sigmoid kernel comes from the [Neural Networks][] field, where the bipolar sigmoid function is often used as an [activation function][] for artificial neurons.

$$k(x, y) = \tanh(\alpha x^T y + c)$$

It is interesting to note that an SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network.
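As a quick illustration (my addition, not part of the original article), the sigmoid kernel formula translates directly into NumPy; the default values chosen for `alpha` and `c` here are arbitrary placeholders:

```python
import numpy as np

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    """Hyperbolic tangent (sigmoid) kernel: k(x, y) = tanh(alpha * x.y + c)."""
    return np.tanh(alpha * np.dot(x, y) + c)

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
k = sigmoid_kernel(x, y)  # symmetric in x and y, bounded in (-1, 1)
```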
This kernel was quite popular for support vector machines due to its origin in neural network theory. Also, despite being only conditionally positive definite, it has been found to [perform well in practice][perform well in practice].

There are two adjustable parameters in the sigmoid kernel, the slope alpha and the intercept constant c. A common value for alpha is 1/N, where N is the data dimension. A more detailed study on sigmoid kernels can be found in the [works by Hsuan-Tien and Chih-Jen][].

### 8. Rational Quadratic Kernel ###

The Rational Quadratic kernel is less computationally intensive than the Gaussian kernel and can be used as an alternative when using the Gaussian becomes too expensive.

$$k(x, y) = 1 - \frac{\lVert x-y \rVert^2}{\lVert x-y \rVert^2 + c}$$

### 9. Multiquadric Kernel ###

The Multiquadric kernel can be used in the same situations as the Rational Quadratic kernel. As is the case with the Sigmoid kernel, it is also an example of a non-positive-definite kernel.

$$k(x, y) = \sqrt{\lVert x-y \rVert^2 + c^2}$$

### 10. Inverse Multiquadric Kernel ###

As with the Gaussian kernel, the Inverse Multiquadric kernel results in a kernel matrix with full rank ([Micchelli, 1986][Micchelli_ 1986]) and thus forms an infinite-dimensional feature space.

$$k(x, y) = \frac{1}{\sqrt{\lVert x-y \rVert^2 + \theta^2}}$$

### 11. Circular Kernel ###

The Circular kernel comes from a statistics perspective. It is an example of an isotropic stationary kernel and is positive definite in $R^2$.
$$k(x, y) = \frac{2}{\pi} \arccos\left(-\frac{\lVert x-y \rVert}{\sigma}\right) - \frac{2}{\pi} \frac{\lVert x-y \rVert}{\sigma} \sqrt{1 - \left(\frac{\lVert x-y \rVert}{\sigma}\right)^2}$$

$$\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}$$

### 12. Spherical Kernel ###

The Spherical kernel is similar to the circular kernel, but is positive definite in $R^3$.

$$k(x, y) = 1 - \frac{3}{2} \frac{\lVert x-y \rVert}{\sigma} + \frac{1}{2} \left(\frac{\lVert x-y \rVert}{\sigma}\right)^3$$

$$\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}$$

### 13. Wave Kernel ###

The Wave kernel is also [symmetric positive semi-definite][symmetric positive semi-definite] ([Huang, 2008][symmetric positive semi-definite]).

$$k(x, y) = \frac{\theta}{\lVert x-y \rVert} \sin\frac{\lVert x-y \rVert}{\theta}$$

### 14. Power Kernel ###

The Power kernel is also known as the (unrectified) triangular kernel. It is an example of a scale-invariant kernel ([Sahbi and Fleuret, 2004][Sahbi and Fleuret_ 2004]) and is also only conditionally positive definite.

$$k(x, y) = -\lVert x-y \rVert^d$$

### 15. Log Kernel ###

The Log kernel seems to be particularly interesting for images, but is only conditionally positive definite.

$$k(x, y) = -\log(\lVert x-y \rVert^d + 1)$$

### 16. Spline Kernel ###

The [Spline][] kernel is given as a piecewise cubic polynomial, as [derived in the works by Gunn (1998)][derived in the works by Gunn _1998].

$$k(x, y) = 1 + xy + xy \min(x, y) - \frac{x+y}{2} \min(x, y)^2 + \frac{1}{3} \min(x, y)^3$$

However, what it actually means is:

$$k(x, y) = \prod_{i=1}^d \left( 1 + x_i y_i + x_i y_i \min(x_i, y_i) - \frac{x_i + y_i}{2} \min(x_i, y_i)^2 + \frac{\min(x_i, y_i)^3}{3} \right)$$

with $x, y \in R^d$.

### 17. B-Spline (Radial Basis Function) Kernel ###

The B-Spline kernel is defined on the interval [−1, 1]. It is given by the recursive formula:

$$k(x, y) = B_{2p+1}(x - y)$$

$$\mbox{where~} p \in N \mbox{~with~} B_{i+1} := B_i \otimes B_0.$$

In the [work by Bart Hamers][work by Bart Hamers] it is given by:

$$k(x, y) = \prod_{p=1}^d B_{2n+1}(x_p - y_p)$$

Alternatively, $B_n$ can be computed using the explicit expression ([Fomel, 2000][Fomel_ 2000]):

$$B_n(x) = \frac{1}{n!} \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^k \left(x + \frac{n+1}{2} - k\right)^n_+$$

where $x_+$ is defined as the [truncated power function][truncated power function]:

$$x^d_+ = \begin{cases} x^d, & \mbox{if } x > 0 \\ 0, & \mbox{otherwise} \end{cases}$$

### 18. Bessel Kernel ###

The [Bessel][] kernel is well known in the theory of function spaces of fractional smoothness.
It is given by:

$$k(x, y) = \frac{J_{v+1}(\sigma \lVert x-y \rVert)}{\lVert x-y \rVert^{-n(v+1)}}$$

where J is the [Bessel function of the first kind][Bessel function of first kind]. However, in the [Kernlab for R documentation][Kernlab for R documentation], the Bessel kernel is said to be:

$$k(x, x') = -\mathrm{Bessel}_{(\nu+1)}^n(\sigma \lVert x - x' \rVert^2)$$

### 19. Cauchy Kernel ###

The Cauchy kernel comes from the [Cauchy distribution][Cauchy distribution] ([Basak, 2008][Basak_ 2008]). It is a long-tailed kernel and can be used to give long-range influence and sensitivity over the high-dimensional space.

$$k(x, y) = \frac{1}{1 + \frac{\lVert x-y \rVert^2}{\sigma}}$$

### 20. Chi-Square Kernel ###

The Chi-Square kernel comes from the [Chi-Square distribution][Chi-Square distribution].

$$k(x, y) = 1 - \sum_{i=1}^n \frac{(x_i - y_i)^2}{\frac{1}{2}(x_i + y_i)}$$

### 21. Histogram Intersection Kernel ###

The Histogram Intersection kernel is also known as the Min kernel and has been proven useful in image classification.

$$k(x, y) = \sum_{i=1}^n \min(x_i, y_i)$$

### 22. Generalized Histogram Intersection ###

The Generalized Histogram Intersection kernel is built based on the [Histogram Intersection Kernel][Histogram Intersection Kernel] for image classification, but applies in a much larger variety of contexts ([Boughorbel, 2005][Boughorbel_ 2005]). It is given by:

$$k(x, y) = \sum_{i=1}^m \min(|x_i|^\alpha, |y_i|^\beta)$$

### 23. Generalized T-Student Kernel ###

The Generalized T-Student kernel [has been proven to be a Mercer kernel][has been proven to be a Mercel Kernel], thus having a positive semi-definite kernel matrix ([Boughorbel, 2004][has been proven to be a Mercel Kernel]). It is given by:

$$k(x, y) = \frac{1}{1 + \lVert x-y \rVert^d}$$

### 24. Bayesian Kernel ###

The Bayesian kernel could be given as:

$$k(x, y) = \prod_{l=1}^N \kappa_l(x_l, y_l)$$

where

$$\kappa_l(a, b) = \sum_{c \in \{0,1\}} P(Y = c \mid X_l = a) ~ P(Y = c \mid X_l = b)$$

However, it really depends on the problem being modeled. For more information, please [see the work by Alashwal, Deris and Othman][see the work by Alashwal_ Deris and Othman], in which they used an SVM with Bayesian kernels for the prediction of protein-protein interactions.

### 25. Wavelet Kernel ###

The Wavelet kernel ([Zhang et al, 2004][Zhang et al_ 2004]) comes from [Wavelet theory][] and is given as:

$$k(x, y) = \prod_{i=1}^N h\left(\frac{x_i - c_i}{a}\right) \, h\left(\frac{y_i - c_i}{a}\right)$$

where a and c are the wavelet dilation and translation coefficients, respectively (the form presented above is a simplification; please see the original paper for details). A translation-invariant version of this kernel can be given as:

$$k(x, y) = \prod_{i=1}^N h\left(\frac{x_i - y_i}{a}\right)$$

where in both h(x) denotes a mother wavelet function. In the paper by Li Zhang, Weida Zhou, and Licheng Jiao, the authors suggest a possible h(x) as:

$$h(x) = \cos(1.75x) \exp\left(-\frac{x^2}{2}\right)$$

which they also prove to be an admissible kernel function.
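To make a few of the formulas above concrete, here is a minimal NumPy sketch (my addition, not from the original article) of some of the simpler kernels, together with an empirical check that the Gaussian kernel yields a symmetric positive semi-definite Gram matrix; parameter names mirror the formulas:

```python
import numpy as np

def linear_kernel(x, y, c=0.0):
    # k(x, y) = x^T y + c
    return np.dot(x, y) + c

def polynomial_kernel(x, y, alpha=1.0, c=1.0, d=2):
    # k(x, y) = (alpha * x^T y + c)^d
    return (alpha * np.dot(x, y) + c) ** d

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def laplacian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y|| / sigma)
    return np.exp(-np.linalg.norm(x - y) / sigma)

def histogram_intersection_kernel(x, y):
    # k(x, y) = sum_i min(x_i, y_i)
    return np.sum(np.minimum(x, y))

def gram_matrix(kernel, X):
    # Pairwise kernel evaluations over the rows of X.
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

# The Gaussian Gram matrix should be symmetric positive semi-definite.
X = np.random.RandomState(0).randn(10, 3)
K = gram_matrix(gaussian_kernel, X)
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) > -1e-9)
```

Kernels such as the sigmoid, multiquadric, power and log kernels fit the same `kernel(x, y)` signature, so the same `gram_matrix` helper applies to them; note that for the kernels that are only conditionally positive definite, the eigenvalue check above is not expected to pass.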
Reposted from: http://www.shamoxia.com/html/y2010/2292.html

[LaTeX notation]: http://pt.wikipedia.org/wiki/LaTeX
[alternate text html tag]: http://en.wikipedia.org/wiki/Alt_attribute
[KPCA]: http://crsouza.blogspot.com/2010/01/kernel-principal-component-analysis-in.html
[standard PCA]: http://crsouza.blogspot.com/2009/09/principal-component-analysis-in-c.html
[perform well in multidimensional regression problems]: http://www.nicta.com.au/research/research_publications?sq_content_src=%2BdXJsPWh0dHBzJTNBJTJGJTJGcHVibGljYXRpb25zLmluc2lkZS5uaWN0YS5jb20uYXUlMkZzZWFyY2glMkZmdWxsdGV4dCUzRmlkJTNEMjYxJmFsbD0x
[Neural Networks]: http://en.wikipedia.org/wiki/Neural_network
[activation function]: http://en.wikipedia.org/wiki/Activation_function
[perform well in practice]: http://perso.lcpc.fr/tarel.jean-philippe/publis/jpt-icme05.pdf
[works by Hsuan-Tien and Chih-Jen]: http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf
[Micchelli_ 1986]: http://www.springerlink.com/content/w62233k766460945/
[symmetric positive semi-definite]: http://www.lib.ncsu.edu/theses/available/etd-02262008-213801/unrestricted/etd.pdf
[Sahbi and Fleuret_ 2004]: http://hal.archives-ouvertes.fr/docs/00/07/19/84/PDF/RR-4601.pdf
[Spline]: http://en.wikipedia.org/wiki/Spline_%28mathematics%29
[derived in the works by Gunn _1998]: http://www.svms.org/tutorials/Gunn1998.pdf
[work by Bart Hamers]: ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/hamers/PhD_bhamers.pdf
[Fomel_ 2000]: http://sepwww.stanford.edu/public/docs/sep105/sergey2/paper_html/node5.html
[truncated power function]: http://en.wikipedia.org/wiki/Truncated_power_function
[Bessel]: http://en.wikipedia.org/wiki/Bessel_function
[Bessel function of first kind]: http://en.wikipedia.org/wiki/Bessel_function#Bessel_functions_of_the_first_kind_:_J.CE.B1
[Kernlab for R documentation]: http://rss.acs.unt.edu/Rdoc/library/kernlab/html/dots.html
[Cauchy distribution]: http://en.wikipedia.org/wiki/Cauchy_distribution
[Basak_ 2008]: http://figment.cse.usf.edu/~sfefilat/data/papers/WeAT4.2.pdf
[Chi-Square distribution]: http://en.wikipedia.org/wiki/Chi-square_distribution
[Histogram Intersection Kernel]: http://www-video.eecs.berkeley.edu/Proceedings/ICIP2003/papers/cr1967.pdf
[Boughorbel_ 2005]: http://perso.lcpc.fr/tarel.jean-philippe/publis/jpt-icip05.pdf
[has been proven to be a Mercel Kernel]: http://ralyx.inria.fr/2004/Raweb/imedia/uid84.html
[see the work by Alashwal_ Deris and Othman]: http://www.waset.org/journals/ijci/v5/v5-2-14.pdf
[Zhang et al_ 2004]: http://see.xidian.edu.cn/faculty/zhangli/publications/WSVM.pdf
[Wavelet theory]: http://en.wikipedia.org/wiki/Wavelet