SVM-kernel transformations
The methods described above lead only to linear models (discrimination, regression) of the type w•x and, as such, are of limited practical use. One of the major achievements of SVM theory is the transformation of the l-dimensional sample space Rˡ spanned by x, via a system of nonlinear functions φ(x), into a new n-dimensional space Qⁿ. The dimensionality n of φ(x) generally has no connection to the dimensionality l of the data; typically n > l, and n may even be infinite. The linear SVM model is then created in the new space. Since the relationship between Rˡ and Qⁿ is nonlinear, linear SVM models created in Qⁿ are nonlinear in Rˡ. This gives SVM models remarkable flexibility. The transformations enter the model only through the quadratic forms
K(xᵢ, xⱼ) = φ(xᵢ)•φ(xⱼ)
(where K is a kernel function), so the optimization task can be formulated as a convex quadratic constrained optimization, which can be solved efficiently using Lagrange multipliers.
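The identity K(xᵢ, xⱼ) = φ(xᵢ)•φ(xⱼ) means the model never needs φ explicitly. A minimal sketch of this (the "kernel trick") for a homogeneous polynomial kernel of degree 2 in two dimensions, where the explicit feature map φ is small enough to write out; the function names here are illustrative, not part of QC-Expert:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for 2-d input:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2) -- a 3-d space Q
    return np.array([x[0]**2, np.sqrt(2.0) * x[0] * x[1], x[1]**2])

def K(x, z):
    # Homogeneous polynomial kernel of degree 2,
    # evaluated directly in the original space R
    return np.dot(x, z)**2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

print(K(x, z))                  # both values ≈ 16.0:
print(np.dot(phi(x), phi(z)))   # kernel = dot product in Q
```

The kernel evaluates the dot product in Q at the cost of a dot product in R, which is what makes even infinite-dimensional Q (as with the RBF kernel below) computationally tractable.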

The most commonly used kernel functions are of the RBF type (Radial Basis Functions), defined as
K(xᵢ, xⱼ) = exp(−γ‖xᵢ − xⱼ‖²).
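A minimal sketch of the RBF kernel as defined above (the function name is illustrative):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2)
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return np.exp(-gamma * np.sum((x - z)**2))

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))       # 1.0 for identical points
print(rbf_kernel([0.0, 0.0], [1.0, 1.0], 2.0))  # exp(-4) ≈ 0.0183
```

The kernel value is 1 for identical points and decays toward 0 with distance, so each training point influences the model mainly in its own neighborhood.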
Other frequently used kernel transformations are:
linear: K(xᵢ, xⱼ) = xᵢ•xⱼ
polynomial: K(xᵢ, xⱼ) = (γ xᵢ•xⱼ + r)ᵈ
sigmoid: K(xᵢ, xⱼ) = tanh(γ xᵢ•xⱼ + r)
The parameters γ and d are set by the user; r is computed. The parameter γ controls the steepness of the kernel: higher values of γ generally give more detailed (often also overdetermined and less stable) models. With kernel transformations, highly nonlinear models can be created to describe the data x. Stability and prediction capability can be diagnosed with a validation tool.
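The role of γ as a steepness parameter can be seen directly from the RBF formula: at a fixed squared distance, a larger γ makes the kernel value fall off much faster, so the model becomes more local and detailed. A small numerical illustration (not QC-Expert code):

```python
import numpy as np

def rbf(sq_dist, gamma):
    # RBF kernel value at a given squared distance
    return np.exp(-gamma * sq_dist)

# Kernel value at squared distance 1 for increasing gamma:
# the steeper the kernel, the more local (and potentially
# overdetermined) the resulting SVM model.
for gamma in (0.1, 1.0, 10.0):
    print(gamma, rbf(1.0, gamma))  # ≈ 0.905, ≈ 0.368, ≈ 4.5e-05
```

In practice γ is therefore tuned together with a validation procedure (e.g. cross-validation) rather than fixed a priori.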

Last Updated ( 03.06.2013 )