Support Vector Machine
Pros & Cons
Advantages
1- Thrives in High Dimension
When data is high-dimensional (think 1,000+ features), a Support Vector Machine with the right settings (kernel choice, regularization, etc.) can be the way to go and produce highly accurate results.
2- Kernel Flexibility
If you're a hands-on person who likes to learn and understand the intricacies of a system, you might actually enjoy the rich kernel world of Support Vector Machines.
Support Vector Machines are all about choosing the right kernel with the right parameters, and this provides lots of flexibility
and a potent toolset.
Linear, polynomial, RBF (also known as Gaussian) and sigmoid kernels each have an edge on certain supervised machine learning problems, whether with SVMs or SVRs.
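As a minimal sketch of swapping kernels (assuming scikit-learn is installed; the synthetic dataset and sizes below are purely illustrative):

```python
# Comparing SVC kernels on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative data: 500 samples, 20 features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)

print(scores)  # test accuracy per kernel
```

Which kernel wins depends entirely on the data, which is exactly why this flexibility matters.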
3- Fast Prediction
Support Vector Machines may be relatively sluggish to train, especially on large datasets; when it comes to prediction, however, they are quite fast.
SVMs process the dataset as a whole rather than incrementally, meaning the entire training set is loaded into RAM during training. Once training is done, prediction becomes a breeze.
4- Both Classification and Regression Skills
SVMs can be used both to classify data and to predict continuous numerical values. The regression variant of Support Vector Machines is usually called SVR (Support Vector Regression).
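A minimal SVR sketch, assuming scikit-learn and using an illustrative synthetic regression problem:

```python
# SVR predicts continuous values instead of class labels.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C is illustrative here; it usually needs tuning per dataset.
reg = SVR(kernel="rbf", C=100.0).fit(X_train, y_train)
preds = reg.predict(X_test)
print(preds[:3])  # continuous predictions
```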
Disadvantages
1- Advanced Settings
Although random forests have numerous optimization parameters too, it's not easy to make huge mistakes with them; with Support Vector Machines, the right parameters can define the line between misery and victory.
This can make Support Vector Machines difficult to implement well.
2- Suitable for Small Dataset
Support Vector Machines don't scale well, and they tend to struggle with mid-size and large datasets.
3- Costly Computation
SVMs are not the most efficient algorithms, and training them can be computationally quite costly, especially when kernels (particularly non-linear ones) are applied.
4- Feature Vectors Required
You can't apply SVM machine learning algorithms to just any problem.
The dataset in hand will already need to have feature vectors, or you will need to pre-process it to extract them, which isn't always easy or even possible.
5- Low Interpretability
Support Vector Machines don't produce sophisticated reports that can be interpreted in an easy fashion.
The lack of native probability estimates is another drawback of this machine learning algorithm.
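Some libraries can work around the missing probability estimates: in scikit-learn, for example, setting `probability=True` fits an extra calibration step (Platt scaling) at additional training cost. A sketch, with an illustrative synthetic dataset:

```python
# predict_proba only works when probability=True is set before fitting.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

clf = SVC(probability=True, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:5])  # one row per sample, rows sum to 1
print(proba)
```

Note that these calibrated probabilities are an add-on, not something the SVM decision function produces natively.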
6- Overfitting Risk
Overfitting is another potential side effect of Support Vector Machines, and it can be quite difficult to detect or fix at times.
7- Scaling Necessity
This is not necessarily a con, but it is something that comes with Support Vector Machines and creates extra data preparation work you might rather not deal with.
Scaling is a fundamental step when working with SVMs; otherwise, features with higher nominal values will dominate the distance calculations between the hyperplane and the support vectors.
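One common way to handle this (a sketch, assuming scikit-learn; dataset sizes are illustrative) is to bundle the scaler and the SVM into a pipeline, so the scaling learned on the training data is reapplied automatically at prediction time:

```python
# Scaling features before the SVM so no feature dominates the distances.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# StandardScaler centers each feature and scales it to unit variance.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))  # training accuracy
```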
Wrap-Up
Support Vector Machine Pros & Cons Summary
Why Support Vector Machines?
Support Vector Machines are powerful and versatile, yet in many aspects they face serious competition.
In the supervised machine learning space, SVMs can deal with linear as well as non-linear problems, and they can do classification as well as regression. Although they perform quite well accuracy-wise, they don't produce very interpretable results.
Although they are good at handling large numbers of features, they struggle for computational resources when the data gets big.
It's probably safe to conclude that Support Vector Machines live happily in a specific but limited zone where the dataset is fairly small (think up to 20,000 rows) and features are plenty. And don't forget to really optimize the hyperparameters!