Linear Discriminant Analysis (LDA) is a dimensionality reduction technique in machine learning.
A dimensionality reduction technique finds linear combinations of features that best represent the original data. Because there are fewer of these combined features than original features, the reduced data is easier to work with, and it is often used as input to the next step in a data science pipeline.
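As a rough sketch of that idea (using NumPy and scikit-learn, with an arbitrary hand-picked projection matrix rather than one learned from the data), the example below reduces a 10-feature dataset to 2 combined features and passes the result to the next step, a classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: 200 observations, 10 original features, binary labels.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# A projection matrix mapping 10 features to 2 linear combinations.
# Real techniques such as PCA or LDA learn this matrix from the data;
# here it is arbitrary, purely to show the mechanics of the reduction.
W = rng.normal(size=(10, 2))

X_reduced = X @ W  # each new column is a linear combination of the 10 features
print(X.shape, "->", X_reduced.shape)  # (200, 10) -> (200, 2)

# The reduced representation becomes the input to the next pipeline step.
clf = LogisticRegression().fit(X_reduced, y)
print("training accuracy:", clf.score(X_reduced, y))
```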
Other dimensionality reduction techniques include Principal Component Analysis (PCA). The biggest difference between PCA and LDA is that LDA uses the class labels of the observations in the dataset, while PCA does not.
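This difference is visible directly in the scikit-learn API. A minimal sketch on the built-in iris dataset: PCA is fit on the features alone, while LDA also receives the class labels.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Unsupervised: PCA never sees the class labels.
X_pca = PCA(n_components=2).fit_transform(X)

# Supervised: LDA needs the labels to find directions that separate the classes.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```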
LDA is particularly useful in situations where you have labeled data and want to maximize the separation between classes.
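To make "maximize the separation between classes" concrete, the sketch below projects the iris data onto a single component with both PCA and LDA and compares how spread out the class means are relative to the within-class spread (a rough, illustrative score, not a standard metric):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

def separation_score(z, y):
    """Spread of the class means divided by the average within-class std."""
    means = np.array([z[y == c].mean() for c in np.unique(y)])
    stds = np.array([z[y == c].std() for c in np.unique(y)])
    return means.std() / stds.mean()

z_pca = PCA(n_components=1).fit_transform(X).ravel()
z_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y).ravel()

print("PCA separation:", round(separation_score(z_pca, y), 2))
print("LDA separation:", round(separation_score(z_lda, y), 2))
# LDA's projection typically scores higher because it uses the class labels.
```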
LDA makes several assumptions and is sensitive to outliers. Its assumptions include (a quick way to check them is sketched after this list):
Normality: The features are normally distributed (Gaussian) within each class.
Homoscedasticity: Different classes have the same covariance matrix.
Independence: Observations are independent.
No multicollinearity: The more correlated the features are with each other, the less predictive power LDA has.
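A rough way to eyeball these assumptions before relying on LDA (a sketch using the iris dataset; the 0.9 correlation cutoff is an arbitrary illustrative threshold, not a rule):

```python
import numpy as np
from scipy import stats
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Normality: Shapiro-Wilk test per feature within each class
# (small p-values suggest a departure from normality).
for c in classes:
    pvals = [stats.shapiro(X[y == c, j]).pvalue for j in range(X.shape[1])]
    print(f"class {c}: smallest Shapiro-Wilk p-value = {min(pvals):.3f}")

# Homoscedasticity: compare each class's covariance matrix to class 0's.
covs = [np.cov(X[y == c], rowvar=False) for c in classes]
diffs = [np.abs(cov - covs[0]).max() for cov in covs[1:]]
print("largest covariance difference vs class 0:", round(max(diffs), 3))

# Multicollinearity: count highly correlated feature pairs.
corr = np.corrcoef(X, rowvar=False)
upper = np.abs(corr[np.triu_indices_from(corr, k=1)])
print("feature pairs with |correlation| > 0.9:", int((upper > 0.9).sum()))
```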
Use cases
Specific use cases for LDA include:
Facial recognition: Face photos contain thousands of pixel features. LDA is often used to reduce the number of features before the data is fed to a prediction model (see the sketch after this list).
Medical diagnosis: In medical imaging, LDA can be applied to identify patterns in data related to different medical conditions.
Document classification: LDA can be used to classify documents into different categories based on their content.
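As a sketch of the facial recognition use case, the example below uses scikit-learn's LFW faces dataset (downloaded on first run) with a common PCA-then-LDA, fisherfaces-style pipeline; the component counts and classifier are arbitrary illustrative choices, not tuned values.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Faces of people with at least 70 photos; each image becomes ~1850 pixel features.
faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X, y = faces.data, faces.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# PCA first shrinks the raw pixels, LDA then finds class-separating directions,
# and a simple classifier works in that much smaller space.
model = make_pipeline(
    PCA(n_components=150, whiten=True, random_state=0),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=5),
)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```

PCA is applied before LDA here because there are more pixels than training images, which would make LDA's within-class covariance estimate singular; reducing to a few hundred components first sidesteps that.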