Support Vector Machines (SVM)
Learn how SVM finds a decision boundary with maximum margin and how kernels allow non-linear classification, with simple Python examples.
What is SVM?
A Support Vector Machine (SVM) is a supervised classifier that finds the hyperplane separating classes with the largest possible margin, i.e. the greatest distance to the nearest training points (the support vectors).
- Works well in high-dimensional spaces.
- Can use kernels (RBF, polynomial, etc.) to handle non-linear data.
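To see why kernels matter, here is a small sketch (not part of the Iris example below) using scikit-learn's make_moons dataset, two interleaving half-moons that no straight line can separate. A linear kernel plateaus, while the RBF kernel fits the curved boundary:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: not separable by a straight line
X, y = make_moons(n_samples=500, noise=0.15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

scores = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)
    print(f"{kernel} test accuracy: {scores[kernel]:.3f}")
```

On this data the RBF kernel typically scores noticeably higher than the linear one; the exact numbers depend on the noise level and split.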
Example: SVC with RBF Kernel
SVM Classification on Iris
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
# Scale features for SVM
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
svm_clf = SVC(
    kernel="rbf",     # radial basis function kernel
    C=1.0,            # inverse regularization strength (larger C = less regularization)
    gamma="scale",    # kernel coefficient (default)
    random_state=42
)
svm_clf.fit(X_train_scaled, y_train)
y_pred = svm_clf.predict(X_test_scaled)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nReport:\n", classification_report(y_test, y_pred, target_names=iris.target_names))
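The C and gamma values above are reasonable defaults, but in practice they are usually tuned by cross-validation. A minimal sketch with GridSearchCV on the same Iris split (the parameter grid here is illustrative, not prescriptive); a Pipeline keeps the scaler inside each fold so the scaling is fitted only on the training portion:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42, stratify=iris.target
)

# Pipeline: scaling is re-fit inside every CV fold, avoiding leakage
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Illustrative grid; real searches often span wider log-spaced ranges
param_grid = {
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": [0.01, 0.1, 1, "scale"],
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)
print("Best params:", search.best_params_)
print("Test accuracy:", search.score(X_test, y_test))
```

search.best_estimator_ is then a fitted pipeline ready for prediction on new, unscaled data.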