sklearn already provides an LDA class, so we can call it directly without implementing the internal logic ourselves, which is much more convenient. Saying "LDA in 10 lines of code" is no exaggeration; the important thing is to first understand the underlying principle. The key code is as follows:

lda = LDA(n_components=2)
lr = LogisticRegression()
x_train_lda = lda.fit_transform(x_train_std, y_train)  # LDA is a supervised method, so we need to ...
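A runnable sketch of this pipeline, assuming a 3-class dataset (here the wine dataset stands in for whatever data the original used; the variable names follow the snippet above):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Illustrative data: 3 classes, 13 features
X, y = load_wine(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardize features, then project to 2 LDA components
# (LDA yields at most n_classes - 1 components)
sc = StandardScaler()
x_train_std = sc.fit_transform(x_train)
x_test_std = sc.transform(x_test)

lda = LDA(n_components=2)
x_train_lda = lda.fit_transform(x_train_std, y_train)  # supervised: labels required
x_test_lda = lda.transform(x_test_std)

# Train a classifier on the reduced features
lr = LogisticRegression()
lr.fit(x_train_lda, y_train)
print(x_train_lda.shape)
print(lr.score(x_test_lda, y_test))
```

Note that `fit_transform` takes `y` as well as `X`: unlike PCA, LDA uses the class labels to choose the projection directions.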
The complete code is on my GitHub: https://github.com/ljpzzz/machinelearning/blob/master/classic-machine-learning/lda.ipynb

We first generate data with three classes and three features. The code is as follows:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
from sklearn.datasets.samples_generator import ...
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
fr...
Dimensionality reduction with the linear discriminant analysis class called from the sklearn toolkit:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# LDA
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)

def plot_scikit_lda(X, title):
    ax = plt.subplot(111)
    for label, marker, color in zip(rang...
IV. Complete code

from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import pandas as pd
import...
We use sklearn's make_classification to generate random data with two classes and visualize it. After applying LDA, we visualize each sample together with its label: LDA separates the dataset well, which shows that it is effective.

Thoughts: the biggest drawback here is that this algorithm only handles binary classification; to handle multiple classes it needs to be generalized. See tomorrow's post.
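The two-class experiment above can be sketched as follows (the make_classification parameters are illustrative, not the original's; plotting is omitted and only the projection itself is shown). For two classes, LDA yields at most n_classes - 1 = 1 component, so the data is projected onto a single line:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Two-class synthetic data, as described in the text
X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=0)

# With 2 classes, LDA can produce at most 1 discriminant component
lda = LDA(n_components=1)
X_1d = lda.fit_transform(X, y)
print(X_1d.shape)

# If the projection separates the classes, their 1-D means should differ
print(X_1d[y == 0].mean(), X_1d[y == 1].mean())
```

The separated class means on the 1-D axis are what the visualization in the text shows: samples of the two classes land in distinct regions of the projected line.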
To verify the result above, compare it against sklearn's official LDA implementation:

if __name__ == "__main__":
    X, y = make_data([[2.0, 1.0], [15.0, 5.0], [31.0, 12.0]], [1.0, 3.0, 2.5], n_features=4)
    print(X.shape)
    lda = MyLDA()
    eig_vecs = lda.fit(X, y)
    # take the eigenvectors for the 2 largest eigenvalues
    W = ...
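MyLDA and make_data are the author's helpers and are not shown here. A minimal sketch of the same validation idea, assuming a generic eigendecomposition-based LDA and using the iris dataset in place of make_data, compares the hand-rolled 1-D projection with sklearn's up to sign and scale via correlation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

X, y = load_iris(return_X_y=True)

# Hand-rolled LDA: solve the generalized eigenproblem Sw^{-1} Sb w = lambda w
mean_all = X.mean(axis=0)
Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
Sb = np.zeros_like(Sw)                   # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - mean_all).reshape(-1, 1)
    Sb += len(Xc) * (d @ d.T)

eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order[:2]].real  # top-2 discriminant directions

my_proj = X @ W
sk_proj = LDA(n_components=2).fit_transform(X, y)

# The leading directions should agree up to sign/scale: |correlation| near 1
corr = abs(np.corrcoef(my_proj[:, 0], sk_proj[:, 0])[0, 1])
print(round(corr, 3))
```

The correlation check sidesteps the fact that eigenvectors are only defined up to sign and scale, and that sklearn's default 'svd' solver centers the data internally, so the raw projection values will not match coordinate-for-coordinate.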