# Support Vector Machine Optimization

#### Support Vector Machine Optimization Parameters Explained

These are the most commonly adjusted parameters of Support Vector Machines. Let's take a deeper look at what they do and how to change their values:

C: (default: 1.0) This is a very important parameter for Support Vector Machines: it sets the regularization value (it is inversely proportional to the regularization strength).

C must always be a positive float.

kernel: (default: "rbf") Signifies the kernel used by the SVM algorithm.

"rbf": A very popular kernel, the radial basis function, will be applied.

"linear": Linear kernel will be used.

"poly": Polynomial kernel will be used.

"sigmoid": Sigmoid kernel will be used.

"precomputed": A pre-computed kernel (Gram) matrix will be used.

This parameter can also be a callable user-defined function, which must return the kernel matrix computed between two sets of samples.
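As a rough sketch of the callable form (using synthetic data; `my_linear_kernel` is just an illustrative name), the function receives two sample matrices and returns their Gram matrix:

```python
import numpy as np
from sklearn import svm
from sklearn.datasets import make_classification

# A callable kernel receives two matrices X (n_samples_X, n_features)
# and Y (n_samples_Y, n_features) and must return the Gram matrix of
# shape (n_samples_X, n_samples_Y). This one reproduces the linear kernel.
def my_linear_kernel(X, Y):
    return np.dot(X, Y.T)

X, y = make_classification(n_samples=100, random_state=0)
clf = svm.SVC(kernel=my_linear_kernel)
clf.fit(X, y)
```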

degree: (default: 3) If you choose “poly” as your Support Vector Machine kernel, this parameter will denote the degree of your polynomial. It’s irrelevant for other kernels.

gamma: (default: "scale") When you choose "rbf", "poly" or "sigmoid" as the kernel, gamma signifies the kernel coefficient.

"scale": 1 / (n_features * X.var()) will be the kernel coefficient.

"auto": 1 / n_features will be the kernel coefficient.

tol: (default: 0.001) This parameter signifies the stopping tolerance for the solver and takes a float value.

cache_size: (default: 200) This parameter can be used to manage the size of the kernel cache. Its unit is megabytes.

## Examples:

```python
from sklearn import svm

clf = svm.SVC(kernel='linear')
clf = svm.SVC(C=0.5)
clf = svm.SVC(kernel='sigmoid', gamma='auto')
clf = svm.SVC(tol=0.0015)
clf = svm.SVC(cache_size=1000)
```

# More parameters

#### More Support Vector Machine Optimization Parameters for Fine-Tuning

Further on, these parameters can be used to fine-tune Support Vector Machine algorithms in Machine Learning applications.

• coef0
• shrinking
• class_weight
• verbose
• max_iter
• decision_function_shape
• break_ties
• random_state

### coef0

(default: 0.0)

This parameter takes a float value and signifies the independent term of the kernel function. It is only significant when the "poly" or "sigmoid" kernel is used.
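A minimal sketch on synthetic data (the value 1.0 is purely illustrative):

```python
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, random_state=0)
# coef0 is the independent term in the polynomial kernel
# (gamma * <x, x'> + coef0) ** degree; 1.0 is an illustrative value
clf = svm.SVC(kernel='poly', degree=3, coef0=1.0)
clf.fit(X, y)
```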

### verbose

(default: False)

This parameter enables verbosity: information output displayed while the algorithm executes.
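A quick sketch with synthetic data; with verbose enabled, the underlying solver prints training progress to standard output during fit:

```python
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, random_state=0)
# verbose=True makes the underlying libsvm solver print training
# progress to stdout while fit() runs
clf = svm.SVC(verbose=True)
clf.fit(X, y)
```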

### break_ties

(default: False)

False: Among tied classes, the first class will be returned.

True: Ties will be broken according to the confidence values of the decision function. This can be computationally expensive, and it only applies when decision_function_shape is "ovr" and there are more than two classes.
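A minimal sketch on a synthetic three-class problem (break_ties requires decision_function_shape='ovr' and more than two classes):

```python
from sklearn import svm
from sklearn.datasets import make_classification

# Three classes, so tie-breaking can actually come into play
X, y = make_classification(n_samples=150, n_classes=3, n_informative=4,
                           random_state=0)
# With break_ties=True, samples near a tie are assigned using the
# confidence values of decision_function instead of the first class
clf = svm.SVC(decision_function_shape='ovr', break_ties=True)
clf.fit(X, y)
```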

### shrinking

(default: True)

This parameter enables the shrinking heuristic in Support Vector Machines, which can speed up training.
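A minimal sketch on synthetic data; disabling the heuristic should not change the fitted model, only (potentially) the training time:

```python
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, random_state=0)
# shrinking=False turns off the shrinking heuristic; the solver then
# considers all samples at every iteration
clf = svm.SVC(shrinking=False)
clf.fit(X, y)
```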

### max_iter

(default: -1)

This parameter sets a hard limit on solver iterations.

-1: No hard limit.

int: Iterations will be capped at the given integer value.
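A minimal sketch on synthetic data; when the cap is hit before convergence, scikit-learn emits a ConvergenceWarning (silenced here to keep the output clean):

```python
import warnings

from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning

X, y = make_classification(n_samples=100, random_state=0)
# Hard cap of 5 solver iterations, almost certainly too few to converge
clf = svm.SVC(max_iter=5)
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConvergenceWarning)
    clf.fit(X, y)
```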

### random_state

(default: None)

Controls random number generation, used for shuffling the data when probability estimates are enabled.

None: NumPy's global random number generator will be used.

int: The integer value will be used as the seed.

RandomState: The given RandomState instance will be used as the generator.
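A minimal sketch on synthetic data; random_state only has an effect when probability=True, and fixing the seed should make the probability estimates reproducible:

```python
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, random_state=0)
# random_state matters only with probability=True, because the internal
# cross-validation used for probability calibration shuffles the data
clf_a = svm.SVC(probability=True, random_state=42).fit(X, y)
clf_b = svm.SVC(probability=True, random_state=42).fit(X, y)
```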

### class_weight

(default: None)

None: All classes will have the same weight of one.

"balanced": Weights will be adjusted automatically based on the y values (inversely proportional to class frequencies in the dataset).

dict: Class weights will be assigned based on the dictionary, given as {class_label: weight}.
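A minimal sketch on a synthetic imbalanced dataset (the weight values are illustrative):

```python
from sklearn import svm
from sklearn.datasets import make_classification

# An imbalanced two-class problem: roughly 90% class 0, 10% class 1
X, y = make_classification(n_samples=200, weights=[0.9, 0.1],
                           random_state=0)
# dict form: errors on class 1 cost five times as much as on class 0
clf_dict = svm.SVC(class_weight={0: 1, 1: 5}).fit(X, y)
# "balanced" form: weights are computed from the class frequencies in y
clf_bal = svm.SVC(class_weight='balanced').fit(X, y)
```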

### decision_function_shape

(default: "ovr")

This parameter determines the shape of the decision function.

"ovr": Stands for "one-versus-rest". Shape will be (n_samples, n_classes).

"ovo": Stands for "one-versus-one". Shape will be (n_samples, n_classes * (n_classes - 1) / 2). Only relevant for the multi-class case; it is ignored for binary classification.
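The shape difference can be seen on a synthetic four-class problem, where "ovr" yields one column per class and "ovo" one column per pair of classes:

```python
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_classes=4, n_informative=6,
                           random_state=0)
ovr = svm.SVC(decision_function_shape='ovr').fit(X, y)
ovo = svm.SVC(decision_function_shape='ovo').fit(X, y)
print(ovr.decision_function(X).shape)  # (200, 4): one column per class
print(ovo.decision_function(X).shape)  # (200, 6): 4 * 3 / 2 pairwise classifiers
```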

Official scikit-learn documentation: sklearn.svm.SVC