Face Recognition with PCA

Love The Way You Lie · 2022-08-03 14:54

PCA is widely used in face recognition because of its effectiveness at dimensionality reduction and feature extraction.
The basic idea is to use the Karhunen-Loève (K-L) transform to extract the principal components of the face images and build an "eigenface" space. At recognition time, the test image is projected into this space to obtain a set of projection coefficients, which are then compared against those of each training face.
The face recognition process consists of a training stage and a recognition stage:

Training stage

Step 1: Form the training sample matrix, where the vector x_i is the MN-dimensional column vector obtained by stacking the columns of the i-th image, i.e. by vectorizing the image matrix. Assume the training set consists of 200 grayscale samples, each of size M×N.
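Step 1 can be sketched as follows. This is a NumPy illustration with random toy images (the article's own code, further below, is MATLAB); the sizes are made small for readability.

```python
import numpy as np

# Toy sizes; the text assumes 200 grayscale images of size M x N.
M, N, P = 4, 3, 5
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(M, N)).astype(float) for _ in range(P)]

# Vectorize each image by stacking its columns, then place the
# resulting MN-dimensional vectors side by side as columns of the sample matrix.
T = np.column_stack([img.flatten(order="F") for img in images])  # shape (M*N, P)
```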
Step 2: Compute the mean face

$$\Psi = \frac{1}{200}\sum_{i=1}^{200} x_i$$

Step 3: Compute the difference faces, i.e. the deviation of each face from the mean face

$$d_i = x_i - \Psi,\quad i = 1,2,\ldots,200$$
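Steps 2 and 3 together amount to centering the sample matrix. A minimal NumPy sketch with toy random data (the article's code is MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((12, 5))                 # toy (M*N) x P sample matrix

Psi = T.mean(axis=1, keepdims=True)     # mean face, shape (M*N, 1)
A = T - Psi                             # columns of A are the difference faces d_i
```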

Step 4: Build the covariance matrix

$$C = \frac{1}{200}\sum_{i=1}^{200} d_i d_i^{T} = \frac{1}{200} A A^{T}$$

Step 5: Compute the eigenvalues and eigenvectors of the covariance matrix and construct the eigenface space.
Compute the eigenvalues $\lambda_i$ of $A^{T}A$ and their orthonormal eigenvectors $\nu_i$, then keep the p largest eigenvalues and their eigenvectors according to the contribution rate, which is the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues:

$$\phi = \frac{\sum_{i=1}^{p}\lambda_i}{\sum_{i=1}^{200}\lambda_i} \geq a$$

With the p largest eigenvalues selected, the "eigenface" space is:

$$w = (u_1, u_2, \ldots, u_p)$$
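Steps 4 and 5 can be sketched in NumPy on toy data; the threshold value `a = 0.95` below is an illustrative choice, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.random((50, 8))                 # toy (M*N) x P training matrix
A = T - T.mean(axis=1, keepdims=True)   # difference faces

# Eigen-decompose the small P x P surrogate A^T A instead of the huge A A^T.
vals, V = np.linalg.eigh(A.T @ A)       # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]          # reorder to descending
vals, V = vals[order], V[:, order]

# Choose the smallest p whose contribution rate reaches the threshold a.
a = 0.95
ratios = np.cumsum(vals) / vals.sum()
p = int(np.searchsorted(ratios, a)) + 1

w = A @ V[:, :p]                        # "eigenface" space: one column per eigenface
```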

Step 6: Project each difference face onto the "eigenface" space:

$$\Omega_i = w^{T} d_i \quad (i = 1,2,\ldots,200)$$

Recognition stage

Step 1: Subtract the mean face from the test image Γ and project the difference onto the eigenface space to obtain its feature vector:

$$\Omega_\Gamma = w^{T}(\Gamma - \Psi)$$

Step 2: Use the Euclidean distance to compute the distance $\varepsilon_i$ between $\Omega_\Gamma$ and each training face:

$$\varepsilon_i^{2} = \left\lVert \Omega_i - \Omega_\Gamma \right\rVert^{2} \quad (i = 1,2,\ldots,200)$$

The label of the training sample achieving the minimum distance is taken as the recognition result.
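The recognition stage can be sketched in NumPy on toy data. Here the "test image" is a slightly noisy copy of training face 2, so the nearest-neighbor search should recover index 2:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.random((50, 8))                 # toy training matrix: 50 "pixels", 8 faces
Psi = T.mean(axis=1, keepdims=True)     # mean face
A = T - Psi                             # difference faces

_, V = np.linalg.eigh(A.T @ A)          # eigh: ascending eigenvalues
w = A @ V[:, -4:]                       # eigenface space from the 4 largest eigenvalues

Omega = w.T @ A                         # training feature vectors, one column per face

Gamma = T[:, 2] + 0.01 * rng.standard_normal(50)   # noisy copy of training face 2
Omega_G = w.T @ (Gamma[:, None] - Psi)             # test feature vector

dists = np.sum((Omega - Omega_G) ** 2, axis=0)     # squared Euclidean distances
label = int(np.argmin(dists))           # index of the nearest training face
```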
Note that the covariance matrix $AA^{T}$ has dimensions MN×MN, which is quite large, whereas with 200 training samples $A^{T}A$ is only 200×200, far smaller. In practice, by the singular value decomposition (SVD) theorem, the eigenface space is built from the eigenvalues and eigenvectors of $A^{T}A$. Keep in mind that the eigenface space is a subspace derived from $A^{T}A$: recognition projects the original space onto the subspace spanned by the eigenvectors of the p largest eigenvalues, compares feature vectors there, and takes the label of the nearest training sample.
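The relationship between the two matrices can be checked numerically. In this NumPy sketch with toy data, the nonzero eigenvalues of $AA^{T}$ coincide with those of $A^{T}A$, and if $v$ is an eigenvector of $A^{T}A$ with eigenvalue $\lambda$, then $Av$ is an eigenvector of $AA^{T}$ with the same eigenvalue, since $(AA^{T})(Av) = A(A^{T}Av) = \lambda Av$:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((100, 6))                # tall matrix: 100 "pixels", 6 samples
A -= A.mean(axis=1, keepdims=True)

small = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # eigenvalues of the 6x6 surrogate
big = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]     # eigenvalues of the 100x100 matrix

# Lift the top eigenvector of A^T A into pixel space: u = A v.
lam, V = np.linalg.eigh(A.T @ A)
u = A @ V[:, -1]
```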

Code

The training code is as follows:

    function [m, A, Eigenfaces] = EigenfaceCore(T)
    % Use Principal Component Analysis (PCA) to determine the most
    % discriminating features between images of faces.
    %
    % Description: This function gets a 2D matrix containing all training image vectors
    % and returns 3 outputs extracted from the training database.
    %
    % Argument:  T - A 2D matrix containing all 1D image vectors.
    %                Suppose all P images in the training database
    %                have the same size of MxN. Then the length of each 1D
    %                column vector is M*N and 'T' is an MNxP 2D matrix.
    %
    % Returns:   m          - (M*Nx1)     Mean of the training database
    %            Eigenfaces - (M*Nx(P-1)) Eigenvectors of the covariance matrix of the training database
    %            A          - (M*NxP)     Matrix of centered image vectors
    %
    % See also: EIG
    % Original version by Amir Hossein Omidvarnia, October 2007
    % Email: aomidvar@ece.ut.ac.ir

    %%%%%%%%%%%%%%%%%%%%%%%% Calculating the mean image
    m = mean(T,2); % Average face image: m = (1/P)*sum(Tj) (j = 1 : P)
    Train_Number = size(T,2);

    %%%%%%%%%%%%%%%%%%%%%%%% Calculating the deviation of each image from the mean image
    A = [];
    for i = 1 : Train_Number
        temp = double(T(:,i)) - m; % Difference image for each training image: Ai = Ti - m
        A = [A temp];              % Merging all centered images
    end

    %%%%%%%%%%%%%%%%%%%%%%%% Snapshot method of the Eigenface approach
    % We know from linear algebra that for a PxQ matrix, the maximum
    % number of non-zero eigenvalues the matrix can have is min(P-1,Q-1).
    % Since the number of training images (P) is usually less than the number
    % of pixels (M*N), at most P-1 non-zero eigenvalues can be found. So we
    % can calculate the eigenvalues of A'*A (a PxP matrix) instead of
    % A*A' (an M*NxM*N matrix). Clearly the dimension of A*A' is much
    % larger than that of A'*A, so the dimensionality decreases.
    L = A'*A;        % L is the surrogate of the covariance matrix C = A*A'.
    [V, D] = eig(L); % Diagonal elements of D are the eigenvalues of both L = A'*A and C = A*A'.

    %%%%%%%%%%%%%%%%%%%%%%%% Sorting and eliminating eigenvalues
    % All eigenvalues of matrix L are sorted, and those below a
    % specified threshold are eliminated. So the number of retained
    % eigenvectors may be less than (P-1).
    L_eig_vec = [];
    for i = 1 : size(V,2)
        if( D(i,i) > 4e+07 )
            L_eig_vec = [L_eig_vec V(:,i)];
        end
    end

    %%%%%%%%%%%%%%%%%%%%%%%% Calculating the eigenvectors of the covariance matrix 'C'
    % Eigenvectors of the covariance matrix C (the "Eigenfaces")
    % can be recovered from L's eigenvectors.
    Eigenfaces = A * L_eig_vec; % A: centered image vectors

The recognition code is as follows:

    function OutputName = Recognition(TestImage, m, A, Eigenfaces)
    % Recognition step.
    %
    % Description: This function compares two faces by projecting the images into facespace and
    % measuring the Euclidean distance between them.
    %
    % Argument:  TestImage  - Path of the input test image
    %
    %            m          - (M*Nx1)     Mean of the training
    %                         database, output of the 'EigenfaceCore' function.
    %
    %            Eigenfaces - (M*Nx(P-1)) Eigenvectors of the
    %                         covariance matrix of the training
    %                         database, output of the 'EigenfaceCore' function.
    %
    %            A          - (M*NxP)     Matrix of centered image
    %                         vectors, output of the 'EigenfaceCore' function.
    %
    % Returns:   OutputName - Name of the recognized image in the training database.
    %
    % See also: RESHAPE, STRCAT
    % Original version by Amir Hossein Omidvarnia, October 2007
    % Email: aomidvar@ece.ut.ac.ir

    %%%%%%%%%%%%%%%%%%%%%%%% Projecting centered image vectors into facespace
    % All centered images are projected into facespace by multiplying by the
    % Eigenface basis. The projected vector of each face is its
    % feature vector.
    ProjectedImages = [];
    Train_Number = size(Eigenfaces,2);
    for i = 1 : Train_Number
        temp = Eigenfaces'*A(:,i); % Projection of centered images into facespace
        ProjectedImages = [ProjectedImages temp];
    end

    %%%%%%%%%%%%%%%%%%%%%%%% Extracting the PCA features from the test image
    InputImage = imread(TestImage);
    temp = InputImage(:,:,1);
    [irow, icol] = size(temp);
    InImage = reshape(temp', irow*icol, 1);
    Difference = double(InImage) - m;            % Centered test image
    ProjectedTestImage = Eigenfaces'*Difference; % Test image feature vector

    %%%%%%%%%%%%%%%%%%%%%%%% Calculating Euclidean distances
    % Euclidean distances between the projected test image and the projections
    % of all centered training images are calculated. The test image is
    % expected to have the minimum distance to its corresponding image in the
    % training database.
    Euc_dist = [];
    for i = 1 : Train_Number
        q = ProjectedImages(:,i);
        temp = ( norm( ProjectedTestImage - q ) )^2;
        Euc_dist = [Euc_dist temp];
    end
    [Euc_dist_min, Recognized_index] = min(Euc_dist);
    OutputName = strcat(int2str(Recognized_index), '.jpg');

The last line of the training code:

    Eigenfaces = A * L_eig_vec; % A: centered image vectors

together with this line of the recognition code:

    temp = Eigenfaces'*A(:,i); % Projection of centered images into facespace

can be written as the formula:

$$T = (AV)^{T} A = V^{T}(A^{T}A)$$

Here $V^{T}$ is the basis of the new coordinate system, and the projection result gives the coefficients in that system. PCA-based face recognition therefore amounts to comparing the distance between two vectors in this new coordinate system. The full code can be downloaded by clicking here.
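The identity above can be checked numerically; a NumPy sketch with toy data (the article's code is MATLAB):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((30, 6))                 # toy centered-image matrix: 30 "pixels", 6 faces
A -= A.mean(axis=1, keepdims=True)

_, V = np.linalg.eigh(A.T @ A)          # eigenvectors of the surrogate matrix
Eigenfaces = A @ V                      # as in the MATLAB line: Eigenfaces = A * L_eig_vec

T1 = Eigenfaces.T @ A                   # projections as computed in Recognition
T2 = V.T @ (A.T @ A)                    # the same quantity written as V^T (A^T A)
```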

Author: 风吹夏天 · Date: 2015-08-10 · Contact: wincoder@qq.com
