
Merge branch 'master' into lsa
SmirkCao authored Jun 12, 2019
2 parents 8c35c42 + 878a790 commit 5d62428
Showing 6 changed files with 244 additions and 12 deletions.
37 changes: 34 additions & 3 deletions CH16/README.md
@@ -27,8 +27,11 @@
- $y_k=\alpha_k^\mathrm{T}\boldsymbol{x}$: PCA reduces dimensionality by combining features, i.e. by **linear combinations**. Because linear combinations are involved, it is easy to see why the data must first be normalized in PCA, and the talk of the "structure" of the data also becomes easier to follow.
- The book points out that in practical problems different variables may have different scales (units), and computing principal components directly can then give unreasonable results. **To remove this effect**, each random variable is usually normalized to have mean 0 and variance 1.
- The properties of principal components, and the population principal components of normalized variables, are developed mainly in terms of eigenvalues and eigenvectors.
- For the distinction between population and sample, see the discussion in Chapter 12 of Strang's book[^1].
- On the choice of $k$, a paper from 2000 selects it automatically[^2]; a commonly used rule of thumb based on the cumulative variance contribution rate is sketched below.
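
A minimal sketch of that rule of thumb (not the automatic method of [^2]): keep the smallest $k$ whose cumulative variance contribution rate reaches a chosen threshold. The eigenvalues below are hypothetical except for the first two, which are the ones of Example 16.1.

```python
import numpy as np

# eigenvalues of the correlation matrix in decreasing order;
# only 2.17 and 0.87 come from Example 16.1, the last two are made up
eigenvalues = np.array([2.17, 0.87, 0.60, 0.36])

ratio = eigenvalues / eigenvalues.sum()        # variance contribution rates
cumulative = np.cumsum(ratio)                  # cumulative contribution rates
threshold = 0.85                               # a commonly used cutoff
k = int(np.searchsorted(cumulative, threshold)) + 1
print(ratio, cumulative, k)                    # k = 3 for these numbers
```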

## Contents

### Population principal component analysis

#### Properties of population principal components
@@ -37,8 +40,6 @@
2. $\sum\limits_{i=1}^m\lambda_i=\sum\limits_{i=1}^m\sigma_{ii}$
3. $\sum\limits_{i=1}^m \mathrm{var}(x_i)=\mathrm{tr}(\Sigma)=\mathrm{tr}(A\Lambda A^\mathrm{T})=\mathrm{tr}(\Lambda)=\sum\limits_{i=1}^m\lambda_i=\sum\limits_{i=1}^m \mathrm{var}(y_i)$, which follows since $\Sigma=A\Lambda A^\mathrm{T}$ and the trace is invariant under cyclic permutation.



Derivatives of the two Lagrangian functions.
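
As a sketch, the first of these is the constrained maximization for the first principal component (the second Lagrangian additionally enforces orthogonality to the components already found):

$$
L(\alpha_1,\lambda)=\alpha_1^\mathrm{T}\Sigma\alpha_1-\lambda(\alpha_1^\mathrm{T}\alpha_1-1),\qquad
\frac{\partial L}{\partial \alpha_1}=2\Sigma\alpha_1-2\lambda\alpha_1=0
\;\Rightarrow\;\Sigma\alpha_1=\lambda\alpha_1
$$

So $\alpha_1$ is an eigenvector of $\Sigma$, and the maximized variance $\mathrm{var}(y_1)=\alpha_1^\mathrm{T}\Sigma\alpha_1=\lambda$ is the largest eigenvalue.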

#### Population principal components of normalized variables
@@ -121,11 +122,41 @@ $$

### Example 16.1

In this example, the conclusions drawn from Table 16.3 can actually also be obtained from Table 16.2. That is, the unit eigenvectors together with the variance contribution rates of the principal components already give the same conclusions as the factor loadings and contribution rates of the principal components.
$y_1$ is a linear combination of the original features, and the weights (coefficients) of the original features are roughly equal, which suggests they are all equally important; $y_1$ is related to the overall score.
$y_2$ contributes mainly to the difference between sciences and humanities, on which the two groups act with opposite signs.

| Type | PC | Eigenvalue | $x_1$ | $x_2$ | $x_3$ | $x_4$ | Variance contribution rate | Note |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Eigenvector | $y_1$ | 2.17 | 0.460 | 0.476 | 0.523 | 0.537 | 0.543 | |
| Eigenvector | $y_2$ | 0.87 | 0.574 | 0.486 | -0.476 | -0.456 | 0.218 | cumulative 0.761 |
| Factor loading | $y_1$ | 2.17 | 0.678 | 0.701 | 0.770 | 0.791 | $\sqrt{\lambda_1}e_{i1}$ | sum of squares = 2.169 |
| Factor loading | $y_2$ | 0.87 | 0.535 | 0.453 | -0.444 | -0.425 | $\sqrt{\lambda_2}e_{i2}$ | sum of squares = 0.870 |

These values follow the book; redoing the computation with numpy gives some discrepancies, which still need to be re-checked.
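
As a rough cross-check, a minimal sketch of reproducing these quantities with numpy, using the correlation matrix $R$ of Example 16.1 (the same matrix as in `unit_test.py` below); eigenvector signs may differ from the book's tables:

```python
import numpy as np

# correlation matrix R of Example 16.1
r = np.array([[1, 0.44, 0.29, 0.33],
              [0.44, 1, 0.35, 0.32],
              [0.29, 0.35, 1, 0.60],
              [0.33, 0.32, 0.60, 1]])

ev, vec = np.linalg.eigh(r)            # eigh: symmetric matrix, ascending eigenvalues
order = np.argsort(ev)[::-1]           # sort eigenvalues in decreasing order
ev, vec = ev[order], vec[:, order]

ratio = ev / ev.sum()                  # variance contribution rates
loadings = np.sqrt(ev) * vec           # factor loadings sqrt(lambda_k) * e_k
print(np.round(ev, 2))                 # eigenvalues, roughly [2.17, 0.87, ...]
print(np.round(ratio, 3))              # roughly [0.543, 0.218, ...]
print(np.round(loadings[:, :2], 3))    # loadings of y1, y2 on x1..x4
```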

### Exercise 16.1
Principal component analysis of the sample data
$$
X=
\left[
\begin{matrix}
2& 3& 3& 4& 5& 7\\
2& 4& 5& 5& 6& 8
\end{matrix}
\right]
$$

In this exercise there are only two features to begin with, and after PCA there are still two components, so everything can be visualized.
1. First normalize the data, cf. (16.33); note that normalization does not mean the values end up in the range $[0, 1]$.

![e1601](assets/ex1601.png)

There are only two features in total here, and their ranges are roughly comparable; a minimal numpy sketch of the computation follows.
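
A sketch assuming normalization by the sample standard deviation as in (16.33) and the SVD construction $X'=X^{*\mathrm{T}}/\sqrt{n-1}$; the full version (which only centers the data and also does the sign handling and plotting) is in `unit_test.py` below. Component signs may come out flipped relative to the figure.

```python
import numpy as np

x = np.array([[2, 3, 3, 4, 5, 7],
              [2, 4, 5, 5, 6, 8]], dtype=float)

# normalize each variable (row): zero mean, unit sample variance
x_star = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, ddof=1, keepdims=True)

# sample PCA via SVD of X' = X*^T / sqrt(n-1), n = number of samples
n = x_star.shape[1]
u, s, vh = np.linalg.svd(x_star.T / np.sqrt(n - 1), full_matrices=False)

y = vh @ x_star                              # principal component scores
print(np.round(s ** 2, 3))                   # sample variances of y1, y2
print(np.round(s ** 2 / np.sum(s ** 2), 3))  # variance contribution rates
print(np.round(y, 3))
```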


## References

[^1]: [Introduction to Linear Algebra](https://github.com/J-Mourad/Introduction-to-Linear-Algebra-5th-Edition---EE16A/raw/master/Ed%205%2C%20Gilbert%20Strang%20-%20Introduction%20to%20Linear%20Algebra%20(2016%2C%20Wellesley-Cambridge%20Press).pdf)
[^2]: [Automatic choice of dimensionality for PCA](https://papers.nips.cc/paper/1853-automatic-choice-of-dimensionality-for-pca.pdf)

Binary file added CH16/assets/ex1601.png
4 changes: 4 additions & 0 deletions CH16/data/eta1601.csv
@@ -0,0 +1,4 @@
0.44, 1, 0.35, 0.32
1, 0.44, 0.29, 0.33
0.29, 0.35, 1, 0.60
0.33, 0.32, 0.60, 1
59 changes: 59 additions & 0 deletions CH16/pca.py
@@ -0,0 +1,59 @@
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# Project: Lihang
# Filename: pca
# Date: 5/31/19
# Author: 😏 <smirk dot cao at gmail dot com>
# from svd import SVD # someday
import numpy as np


class PCA(object):
    def __init__(self, n_components=2):
        self.n_components_ = n_components
        self.explained_variance_ratio_ = None
        self.singular_values_ = None
        self.u = None
        self.vh = None
        self.components_ = None

    def __str__(self):
        rst = "PCA algorithms:\n"
        rst += "n_components: " + str(self.n_components_)
        return rst
    def fit(self, x):
        # x: (n_features, n_samples), already centered so that every row has mean 0
        # check n_components against min(n_samples, n_features)
        assert self.n_components_ <= min(x.shape)
        n = x.shape[0]
        assert n > 1
        assert np.allclose(np.mean(x, axis=1), np.zeros(n))
        x_ = x.T/np.sqrt(n-1)  # the scaling only affects the singular values, not vh or the ratios
        # mxk kxk kxn: m features, k components, n samples
        u, s, vh = np.linalg.svd(x_, full_matrices=False)
        self.singular_values_ = s
        self.explained_variance_ratio_ = s**2/np.sum(s**2)

        # sign flip: make the largest-magnitude entry of each column of u positive,
        # so the output is deterministic and comparable with sklearn
        max_abs_cols = np.argmax(np.abs(u), axis=0)
        signs = np.sign(u[max_abs_cols, range(u.shape[1])])
        u *= signs
        vh *= signs[:, np.newaxis]
        self.u = u
        self.vh = vh

    def fit_transform(self, x):
        self.fit(x)
        # project the data onto the first n_components principal directions
        self.components_ = np.dot(self.vh[:self.n_components_], x)
        return self.components_
154 changes: 145 additions & 9 deletions CH16/unit_test.py
@@ -9,30 +9,166 @@
# Ref to : https://code.visualstudio.com/docs/python/unit-testing
import unittest
from sklearn import datasets
from pca import PCA as smirkpca
from sklearn.decomposition import PCA as skpca
import matplotlib.pyplot as plt
import numpy as np


class TestPCAMethods(unittest.TestCase):
    def test_e_16_1(self):
        r = np.array([[1, 0.44, 0.29, 0.33],
                      [0.44, 1, 0.35, 0.32],
                      [0.29, 0.35, 1, 0.60],
                      [0.33, 0.32, 0.60, 1]])

        ev, sigma = np.linalg.eig(r)

        print("\n")
        print(40*"*"+"Eigen Values"+40*"*")
        print(ev)

        print(40*"*"+"eta"+40*"*")
        denominator = np.sum(ev)
        for numerator in ev:
            print(np.round(numerator/denominator, 3))

        for idx in range(ev.shape[0]):
            print("eigen value: ", np.round(ev[idx], 3))
            print("eigen vector: ", np.round(sigma[:, idx], 3))
            print("factor loading, rho(xi,yj): ",
                  np.round(np.sqrt(ev[idx])*sigma[:, idx], 3))
            print("factor loading squared sum: ",
                  np.round(np.sum(ev[idx]*sigma[:, idx]**2), 3))
        print(40*"*"+"svd"+40*"*")
        u, s, vh = np.linalg.svd(r)
        print(u)
        print(s)
        print(vh)
        # for the symmetric matrix r, s are the eigenvalues and the rows of vh the eigenvectors
    def test_ex1601(self):
        # raw data
        x = np.array([[2, 3, 3, 4, 5, 7],
                      [2, 4, 5, 5, 6, 8]])

        # normalization (here: centering only; variance normalization is commented out)
        x_star = x-np.mean(x, axis=1).reshape(-1, 1)
        # x_star = (x-np.mean(x, axis=1).reshape(-1, 1)) / \
        #     np.sqrt(np.var(x, axis=1)).reshape(-1, 1)
        print("means:\n", np.mean(x, axis=1))
        print("variances:\n", np.var(x, axis=1))
        print("x_star:\n", x_star)
        print("x_star means:\n", np.mean(x_star, axis=1))
        print("x_star variances:\n", np.var(x_star, axis=1))

        # x_ = x_star.T/np.sqrt(x_star.shape[0]-1)
        x_ = x_star.T
        u, s, vh = np.linalg.svd(x_, full_matrices=False)
        print("x_:\n", x_)
        print("u:\n", u)
        print("s:\n", s)
        print("vh:\n", vh)

        # full_matrices=True
        # u, s, vh = np.linalg.svd(x_)
        # s = s*np.eye(2)
        # y = np.dot(s, vh[:2, :])
        # rst = np.dot(u[:, :2], y)

        rst = np.dot(u*s, vh)
        print("rst: \n", rst)

        # with two features the result can be visualized
        plt.figure(figsize=(5, 5))
        plt.xlim((-5, 5))
        plt.ylim((-5, 5))
        plt.grid()
        plt.scatter(x_star[0, :], x_star[1, :], label="raw data", alpha=0.3)
        rst = np.dot(vh, x_star)
        plt.scatter(rst[0, :], rst[1, :], label="svd based pca", alpha=0.3)

        # signs handling
        max_abs_cols = np.argmax(np.abs(u), axis=0)
        signs = np.sign(u[max_abs_cols, range(u.shape[1])])
        u *= signs
        vh_signs = vh*signs[:, np.newaxis]
        rst = np.dot(vh_signs, x_star)
        plt.scatter(rst[0, :], rst[1, :], marker="o", s=12, alpha=0.3,
                    label="svd based pca with signs")

        # for sklearn pca comparison
        from sklearn.decomposition import PCA
        pca = PCA(n_components=2)
        # pca in sklearn expects (n_samples, n_features)
        rst = pca.fit_transform(x.T)
        print("sklearn pca:\n", rst)

        plt.scatter(rst[:, 0], rst[:, 1], label="sklearn_rst", alpha=0.3)
        plt.legend()

        axis_x = np.array([[0, 2],
                           [0, 0]])
        axis_y = np.array([[0, 0],
                           [0, 2]])
        plt.plot(axis_x[0, :], axis_x[1, :],
                 linestyle="-", c="red")
        plt.plot(axis_y[0, :], axis_y[1, :],
                 linestyle="-", c="blue")

        plt.plot(np.dot(vh, axis_x)[0, :], np.dot(vh, axis_x)[1, :],
                 linestyle="--", c="red", alpha=0.3)
        plt.plot(np.dot(vh, axis_y)[0, :], np.dot(vh, axis_y)[1, :],
                 linestyle="--", c="blue", alpha=0.3)

        plt.plot(np.dot(vh_signs, axis_x)[0, :],
                 np.dot(vh_signs, axis_x)[1, :],
                 linestyle="-.", c="red")
        plt.plot(np.dot(vh_signs, axis_y)[0, :],
                 np.dot(vh_signs, axis_y)[1, :],
                 linestyle="-.", c="blue")
        plt.savefig("ex1601.png")
        plt.show()

    def test_pca(self):
        """
        PCA analysis: compare smirkpca with sklearn's PCA
        """
        print("\n")
        # raw data from ex1601
        x = np.array([[2, 3, 3, 4, 5, 7],
                      [2, 4, 5, 5, 6, 8]])
        # remove the mean (center each feature)
        x = x-np.mean(x, axis=1).reshape(-1, 1)
        print(x)
        assert (np.mean(x, axis=1) == np.zeros(2)).all()

        # for sklearn x.shape == (n_samples, n_features)
        pca_sklearn = skpca(n_components=2)
        pca_sklearn.fit(x.T)
        pca_sklearn_rst = pca_sklearn.fit_transform(x.T).T

        print("\n")
        print(40*"*"+"sklearn_pca"+40*"*")
        print("singular values:\n", pca_sklearn.singular_values_)
        print("explained variance ratio:\n",
              pca_sklearn.explained_variance_ratio_)
        print("transform:\n", pca_sklearn_rst)

        print(40*"*"+"smirk_pca"+40*"*")
        pca_test = smirkpca(n_components=2)
        pca_test_rst = pca_test.fit_transform(x)
        print("singular values:\n",
              pca_test.singular_values_)
        print("explained variance ratio:\n",
              pca_test.explained_variance_ratio_)
        print("transform:\n", pca_test_rst)

        self.assertTrue(np.allclose(pca_sklearn.singular_values_,
                                    pca_test.singular_values_))
        self.assertTrue(np.allclose(pca_sklearn_rst, pca_test_rst))
        self.assertTrue(np.allclose(pca_sklearn.explained_variance_ratio_,
                                    pca_test.explained_variance_ratio_))

    def test_pca_get_fig(self):
        pass
2 changes: 2 additions & 0 deletions README.md
@@ -261,8 +261,10 @@
## CH16 Principal Component Analysis

- An orthogonal transformation turns observations described by linearly correlated variables into data described by a small number of linearly independent variables; these linearly independent variables are called **principal components**.
- Before principal component analysis, the given data need to be normalized so that every variable has mean 0 and variance 1.
- A principal component does not correspond to any single feature of the original data; the factor loadings can be used to see the relationship between the principal components and the original features.
- This chapter does not yet mention the notion of a **topic**; the following chapters introduce a lot of topic-analysis material: LSA, PLSA and LDA are all about topics, and MCMC is a tool used in LDA.
- Both population and sample principal components are discussed, the former being the basis of the latter. The main difference is that the population version works with expectations while the sample version works with means; sample principal components have the same properties as population principal components.

## CH17 Latent Semantic Analysis

