The Support Vector Machine (SVM) is a powerful supervised machine learning algorithm used for classification and regression tasks. Its primary goal is to find the optimal hyperplane that best separates data points into different classes or predicts a continuous output. Here's a breakdown of its key concepts:
There are a few key concepts that are important to understand in SVM; a short code sketch follows the list.
1) Hyperplane: In a two-dimensional space, a hyperplane is a straight line that separates two classes. In higher dimensions, it is a flat, linear subspace.
2) Support Vectors: These are the data points closest to the hyperplane, and they play a crucial role in defining it. SVMs aim to maximize the margin, which is the distance between the hyperplane and the nearest support vectors.
3) Margin: The margin is a measure of the separation between classes. SVM seeks the hyperplane with the largest margin because it is more likely to generalize well to unseen data.
4) Kernel Trick: SVM can handle non-linear data by transforming it into a higher-dimensional space using a kernel function (e.g., polynomial or radial basis function). In this space, a linear hyperplane can often separate the data effectively.
5) C Parameter: This regularization parameter controls the trade-off between maximizing the margin and minimizing classification errors. A smaller C value emphasizes a larger margin but allows some misclassification, while a larger C value aims to minimize misclassification.
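The minimal sketch below, assuming scikit-learn (a library choice not named above), illustrates the kernel trick and the C parameter on data that no straight line can separate; the dataset and parameter values are purely illustrative.

```python
# Sketch: kernel trick and C parameter on non-linearly-separable data.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: no straight line can separate the two classes.
X, y = make_circles(n_samples=300, factor=0.5, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A linear kernel struggles here, while the RBF kernel implicitly maps the
# data into a higher-dimensional space where a linear hyperplane works.
for kernel in ("linear", "rbf"):
    for C in (0.1, 10.0):  # smaller C -> wider margin, more misclassification allowed
        clf = SVC(kernel=kernel, C=C).fit(X_train, y_train)
        print(f"kernel={kernel:6s} C={C:5.1f} accuracy={clf.score(X_test, y_test):.2f}")
```

Running this, the RBF kernel should score markedly higher than the linear kernel on this data, which is the kernel trick in action.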
SVM can be used in two ways, contrasted in the sketch after this list:
1) Support Vector Machine for Classification: In classification, SVM assigns data points to one of two classes (binary classification) or multiple classes (multiclass classification) based on which side of the hyperplane they fall.
2) Support Vector Machine for Regression: In regression, SVM predicts a continuous output value. Instead of maximizing the margin, it seeks to fit as many data points as possible within a specified margin while minimizing the error.
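A minimal sketch, again assuming scikit-learn, contrasting the two modes; the synthetic datasets are invented for illustration only.

```python
# Sketch: SVM for classification (SVC) vs. SVM for regression (SVR).
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Classification: assign each point to one of two classes.
X_cls = rng.normal(size=(100, 2))
y_cls = (X_cls[:, 0] + X_cls[:, 1] > 0).astype(int)
clf = SVC(kernel="linear").fit(X_cls, y_cls)
print("predicted class:", clf.predict([[1.0, 1.0]]))  # which side of the hyperplane?

# Regression: predict a continuous value; epsilon sets the width of the
# margin (tube) inside which errors are ignored.
X_reg = np.linspace(0, 5, 100).reshape(-1, 1)
y_reg = np.sin(X_reg).ravel()
reg = SVR(kernel="rbf", epsilon=0.1).fit(X_reg, y_reg)
print("predicted value at x=2.5:", reg.predict([[2.5]]))
```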
Example:
Imagine you're a quality control manager at a factory producing computer chips. You need to sort defective and non-defective chips. Using an SVM is like finding the optimal line (hyperplane) that best separates good and faulty chips based on attributes like size and performance. The support vectors are the most critical chips near the decision boundary. By maximizing the margin between them, SVM ensures accurate classification. If a new chip arrives, you can quickly determine its quality by checking which side of the hyperplane it falls on. SVM helps maintain high-quality chip production, reducing wastage and ensuring customer satisfaction.
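A hedged sketch of this scenario, again assuming scikit-learn; the feature names (size, performance score) and every numeric value are invented purely for illustration.

```python
# Sketch: sorting hypothetical chips into good vs. faulty with a linear SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two hypothetical features per chip: size (mm) and performance score.
good = rng.normal(loc=[10.0, 90.0], scale=[0.2, 3.0], size=(50, 2))
faulty = rng.normal(loc=[10.6, 75.0], scale=[0.3, 5.0], size=(50, 2))
X = np.vstack([good, faulty])
y = np.array([0] * 50 + [1] * 50)  # 0 = good, 1 = faulty

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# A new chip arrives: check which side of the hyperplane it falls on.
new_chip = [[10.1, 88.0]]
print("faulty" if clf.predict(new_chip)[0] else "good")

# The support vectors are the chips closest to the decision boundary.
print("number of support vectors:", len(clf.support_vectors_))
```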