# Introduction to Principal Components Analysis and Beyond

It is often a good idea to reduce the dimensionality of your features when the feature dimension is too high, or when you want to visualize your data. PCA (principal components analysis) is a well-known dimension reduction technique. It is unsupervised and nested: it can be applied without knowing what the features will be used for, and a further compression can be computed from a previous reduction instead of starting over from the original data. The latter property may not seem useful at first, but it makes the reduction routine robust.
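The nested property can be checked directly. Below is a minimal sketch (the helper `pca_reduce` and the random test data are my own illustration, not part of the original text): reducing from 20 to 3 dimensions in one step gives the same result, up to the sign of each component, as first reducing to 10 dimensions and then to 3.

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# Correlated synthetic data: 200 samples, 20 features
X = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 20))

direct = pca_reduce(X, 3)                   # 20 -> 3 in one step
nested = pca_reduce(pca_reduce(X, 10), 3)   # 20 -> 10 -> 3

# Identical up to the sign ambiguity of each principal component
print(np.allclose(np.abs(direct), np.abs(nested)))
```

This works because the top principal components of the already-reduced data coincide with the top components of the original data, so the second reduction simply keeps the leading coordinates of the first.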

The secret of an unsupervised transformation is to exploit properties of the data itself. Variance is one such property of interest.
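To see why variance is a natural criterion, consider projecting 2-D data onto different unit directions and comparing the variance of each projection. In the sketch below (the data and the candidate directions are my own illustration), the first principal direction captures at least as much variance as the coordinate axes or a random direction:

```python
import numpy as np

rng = np.random.default_rng(1)
# 2-D data stretched along a diagonal direction
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [1.0, 1.0]])
Xc = X - X.mean(axis=0)

def proj_var(w):
    """Variance of the data projected onto the unit direction w."""
    w = w / np.linalg.norm(w)
    return np.var(Xc @ w)

# First principal direction = top right-singular vector of the centered data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]

# PC1 beats (or ties) every other direction we try
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), rng.normal(size=2)]
print(all(proj_var(pc1) >= proj_var(w) for w in candidates))
```

PCA formalizes exactly this: the first principal component is the direction along which the projected data has maximum variance, and each later component maximizes variance among directions orthogonal to the earlier ones.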

# Dimension Reduction

For many machine learning beginners, the logistic function ${1\over 1 + \exp(-x)}$ seems quite unnatural. Given its many uses in machine learning, it is often unclear why we would want it; here I will give a brief motivation for the logistic function.