Principal Component Analysis (PCA) is a statistical technique widely used in machine learning. It is an unsupervised learning method that applies an orthogonal transformation to convert possibly correlated variables into a set of uncorrelated ones. PCA is used for linear dimensionality reduction.
It is used in data analysis and exploratory analysis. The main goal of PCA is to reduce the number of variables in a dataset while preserving as much of the information in it as possible. The uncorrelated features generated by PCA are called principal components. PCA is also very sensitive to feature scaling.
Standardization of the data is required if the values in the dataset were measured on different scales.
Requirements of Principal Component Analysis in Machine Learning
- Mean
The mean is the average value of a set of numbers. It is calculated by summing all the values in the data and dividing the sum by the total number of data points. For grouped data, each value is weighted by its frequency:
> mean = ∑(fi · xi) / n
where fi is the frequency of the i-th value, xi is the value itself, and n = ∑fi is the total number of data points.
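As a minimal sketch of the frequency-weighted mean above (the values and frequencies here are made-up examples):

```python
# Frequency-weighted mean: mean = sum(fi * xi) / n
values = [2, 4, 6]        # xi: the distinct data values
frequencies = [3, 1, 1]   # fi: how often each value occurs

n = sum(frequencies)      # total number of data points
mean = sum(f * x for f, x in zip(frequencies, values)) / n
print(mean)  # (3*2 + 1*4 + 1*6) / 5 = 3.2
```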
- Covariance
Covariance is used when the dataset has two or more variables. It measures the extent to which two features vary from their means with respect to each other: a positive covariance means the features tend to increase together, while a negative covariance means one tends to increase as the other decreases. Covariance is symmetric: covariance(x, y) = covariance(y, x) for any two features x and y in a dataset.
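A small sketch of sample covariance and its symmetry (the data values are illustrative):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# Sample covariance: average product of deviations from each mean,
# divided by (n - 1) for the unbiased estimate.
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)
cov_yx = np.sum((y - y.mean()) * (x - x.mean())) / (len(y) - 1)

# Symmetry: cov(x, y) == cov(y, x), and both match NumPy's np.cov.
print(cov_xy, cov_yx)
```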
- Eigen Vectors
An eigenvector of a matrix is a non-zero vector whose direction is unchanged when the matrix is applied to it. Mathematically, for a square matrix M, a non-zero vector e is called an ‘eigenvector’ iff M*e is a scalar multiple of e, as shown below:
> M*e = ß*e
Here ß is a scalar called the ‘eigenvalue’. The trace and determinant of M can be used as checks when computing eigenvalues, since the eigenvalues sum to the trace and multiply to the determinant.
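A brief sketch of the eigenvector definition using NumPy (the matrix M is an arbitrary example):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(M)

# Verify the defining equation M*e = beta*e for the first pair.
e = eigenvectors[:, 0]
beta = eigenvalues[0]
print(np.allclose(M @ e, beta * e))  # True
```

As a sanity check, the eigenvalues of M sum to its trace and multiply to its determinant, which is the trace/determinant connection mentioned above.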
- Covariance Matrix
The covariance matrix is used whenever the dataset has three or more dimensions: entry (i, j) holds the covariance between features i and j, and the diagonal holds the variances of the individual features.
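A short sketch of computing a covariance matrix for a 3-feature dataset (the data values are made up for illustration):

```python
import numpy as np

# Toy dataset: 5 samples (rows), 3 features (columns).
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.1],
              [2.2, 2.9, 0.9],
              [1.9, 2.2, 2.3],
              [3.1, 3.0, 1.0]])

# np.cov treats rows as variables by default, so pass rowvar=False
# when samples are in rows and features in columns.
C = np.cov(X, rowvar=False)
print(C.shape)  # (3, 3): one row/column per feature
```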
- Standard Deviation
Standard deviation measures the spread of the numbers in a given dataset. It is calculated from the distances of the data points from the mean value: the square root of the average squared distance.
- Variance
Variance is also a measure of the spread of data, expressed as the average squared distance from the mean. It equals the square of the standard deviation.
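A quick sketch showing the relationship between standard deviation and variance (the data is a made-up example):

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

std = data.std()  # population standard deviation
var = data.var()  # population variance

# Variance is the square of the standard deviation.
print(std, var)  # 2.0 4.0
```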
Steps for Principal Component Analysis in Machine Learning
1. Analyze and standardize the dataset before applying PCA.
2. Calculate the covariance matrix.
3. Calculate the eigenvectors and eigenvalues of the covariance matrix.
4. Choose the components with the largest eigenvalues to form a feature vector and reduce dimensionality.
5. The eigenvector with the highest eigenvalue is called the first principal component of the dataset.
Note that the number of principal components kept is less than the number of initial features.
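The steps above can be sketched as a small NumPy implementation (the function name `pca` and the random toy data are assumptions for illustration only):

```python
import numpy as np

def pca(X, n_components):
    # Step 1: center the data (full standardization would also divide
    # by each feature's standard deviation).
    Xc = X - X.mean(axis=0)
    # Step 2: covariance matrix of the features.
    C = np.cov(Xc, rowvar=False)
    # Step 3: eigen decomposition (eigh, since C is symmetric).
    eigenvalues, eigenvectors = np.linalg.eigh(C)
    # Step 4: sort components by descending eigenvalue, keep the top ones.
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:n_components]]
    # Step 5: project the data onto the principal components.
    return Xc @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, 5 features
X_reduced = pca(X, 2)
print(X_reduced.shape)  # (100, 2)
```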
Sparse PCA
Sparse PCA is a variant of basic PCA. It constrains the component loadings to be sparse (mostly zero), which reduces the difficulty of interpreting PCA results. Sparse components are faster to compute and also provide better statistical regularization.
An Elastic Net penalty can be used to implement sparse principal component analysis.
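As a sketch, scikit-learn's `SparsePCA` estimator can be used here; the random dataset and the parameter values are illustrative assumptions, with `alpha` controlling how sparse the loadings become:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))  # 50 samples, 8 features (toy data)

# Larger alpha -> stronger sparsity penalty -> more zero loadings.
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
X_sparse = spca.fit_transform(X)
print(X_sparse.shape)  # (50, 2)
```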
Kernel PCA
Kernel PCA is used when the data has a very complex, non-linear structure that cannot be represented using the standard PCA technique. It is used for non-linear dimensionality reduction.
> It implicitly maps the data to a higher-dimensional feature space using a kernel function.
> Standard PCA is then performed on the transformed data in that feature space.
> Data that is not linearly separable in the input space can become linearly separable in the feature space when plotted on a scatter plot.
> Kernel functions such as the Gaussian (RBF) kernel and the polynomial kernel can be used in the kernel method of PCA.
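A short sketch using scikit-learn's `KernelPCA` with an RBF (Gaussian) kernel; the concentric-circles dataset and the `gamma` value are illustrative assumptions, chosen because the circles are not linearly separable in the input space:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Two concentric circles: a classic non-linearly-separable dataset.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# RBF kernel implicitly maps points to a higher-dimensional feature space,
# where the two circles become separable.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_kpca = kpca.fit_transform(X)
print(X_kpca.shape)  # (200, 2)
```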