I hope you enjoyed and learned something from my previous post about Cross Validation & ROC.
In this post we are going to learn Principal Component Analysis, or PCA.
Principal Component Analysis (PCA) is a simple yet popular and useful linear transformation technique that is used in numerous applications, such as stock market predictions, the analysis of gene expression data, and many more.
The main idea of principal component analysis (PCA) is to reduce the dimensionality of a data set consisting of many variables correlated with each other, either heavily or lightly, while retaining as much of the variation present in the dataset as possible.
Importantly, the dataset on which the PCA technique is to be used must be scaled, as the results are sensitive to the relative scaling of the variables. In layman's terms, it is a method of summarizing data.
Imagine some wine bottles on a dining table. Each wine is described by attributes like colour, strength, age, etc. But redundancy will arise because many of these attributes measure related properties. So what PCA does in this case is summarize each wine in the stock with fewer characteristics.
In other words, Principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
- With PCA we can reduce the dimensions without losing much information
- PCA also helps to remove the multicollinearity between the variables (see the sketch after this list)
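To make the orthogonal transformation idea concrete, here is a minimal from-scratch sketch (not from the original post) on a tiny made-up dataset: we centre the data, eigendecompose the covariance matrix, and project onto the orthogonal eigenvectors. The correlation matrix of the result comes out close to the identity.

```python
import numpy as np

# Toy data: two strongly correlated variables (hypothetical example)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, x + 0.1 * rng.normal(size=200)])

# Centre the data, then eigendecompose the covariance matrix
X_centred = X - X.mean(axis=0)
cov = np.cov(X_centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort components by explained variance (largest eigenvalue first)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the orthogonal eigenvectors: these are the principal components
Z = X_centred @ eigvecs

# The transformed variables are (near) uncorrelated
print(np.round(np.corrcoef(Z, rowvar=False), 3))
```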
Dataset information:-
We will use the SPECTF dataset from the UCI Machine Learning Repository [download here].
The dataset describes the diagnosis of cardiac Single Photon Emission Computed Tomography (SPECT) images. Each patient is classified into two categories: normal and abnormal.
The database of 267 SPECT image sets (patients) was processed to extract features that summarize the original SPECT images. As a result, 44 continuous features were created for each patient.
Let's train a LogisticRegression model and record the time taken to train before applying PCA:-
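The original code isn't shown here, so below is a minimal sketch of this step with scikit-learn. The file names SPECTF.train and SPECTF.test and the label-in-first-column layout are assumptions based on the UCI download page:

```python
import time
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Load the SPECTF files (file names and layout assumed from the UCI page:
# comma-separated, diagnosis label in the first column, 44 features after it)
train = pd.read_csv('SPECTF.train', header=None)
test = pd.read_csv('SPECTF.test', header=None)

X_train, y_train = train.iloc[:, 1:].values, train.iloc[:, 0].values
X_test, y_test = test.iloc[:, 1:].values, test.iloc[:, 0].values

# Baseline: fit logistic regression on all 44 features and time it
model = LogisticRegression(max_iter=1000)
start = time.time()
model.fit(X_train, y_train)
print('Training time: %.4f s' % (time.time() - start))
print('Test accuracy: %.3f' % model.score(X_test, y_test))
```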
Standardising the variables:-
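A minimal sketch of the scaling step, assuming scikit-learn's StandardScaler and the X_train/X_test arrays from the previous snippet:

```python
from sklearn.preprocessing import StandardScaler

# PCA is sensitive to scale, so standardise each feature to
# zero mean and unit variance (fit on train, reuse on test)
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
```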
Result:-
This cumulative explained variance graph helps us choose the number of desired principal components.
The first 15 principal components explain about 90% of the variation in the data.
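A sketch of how such a curve can be produced with scikit-learn, assuming the standardised X_train_std from the previous step:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Fit PCA on the standardised training data, keeping all components
pca = PCA()
pca.fit(X_train_std)

# The cumulative explained variance shows how many components to keep
cum_var = np.cumsum(pca.explained_variance_ratio_)
plt.plot(range(1, len(cum_var) + 1), cum_var, marker='.')
plt.axhline(y=0.90, color='r', linestyle='--')
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative explained variance')
plt.show()
```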
Result:-
Since PCA transforms a set of correlated variables into a set of linearly uncorrelated variables called principal components, we can verify the decorrelation with a heat map of the correlation matrix of the transformed data.
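A sketch of this check using seaborn, assuming the fitted pca object and X_train_std from above; the off-diagonal entries should be (near) zero:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Correlation heat map of the principal components: near-zero
# off-diagonal entries confirm the components are uncorrelated
Z_train = pca.transform(X_train_std)
corr = pd.DataFrame(Z_train).corr()
sns.heatmap(corr, cmap='coolwarm', center=0)
plt.show()
```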
Check the performance after considering the first 15 principal components:-
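A sketch of this step, again assuming the standardised arrays from above; the 15-component count comes from the explained-variance curve:

```python
import time
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Keep only the first 15 components (~90% of the variance)
pca_15 = PCA(n_components=15)
X_train_pca = pca_15.fit_transform(X_train_std)
X_test_pca = pca_15.transform(X_test_std)

# Retrain logistic regression on the reduced data and time it
model_pca = LogisticRegression(max_iter=1000)
start = time.time()
model_pca.fit(X_train_pca, y_train)
print('Training time: %.4f s' % (time.time() - start))
print('Test accuracy: %.3f' % model_pca.score(X_test_pca, y_test))
```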
We can conclude that the computational time is reduced severalfold after applying PCA and selecting 15 principal components, and that the variables are transformed into a new set of linearly uncorrelated variables.