clf.score(X_train, y_train)
clf.score(X_test, y_test)

5.1 Linear regression on make_regression data (no noise)

def LinearRegression_for_make_regression():
    myutil = util()
    X, y = make_regression(n_samples=100, n_features=1, n_...
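The snippet above is truncated, so here is a minimal self-contained sketch of the same experiment. The blog's `util`/`myutil` helper is omitted because it is specific to the original post, and the remaining `make_regression` arguments are assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Noise-free synthetic data: one feature, a perfectly linear target
X, y = make_regression(n_samples=100, n_features=1, noise=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lr = LinearRegression().fit(X_train, y_train)
train_score = lr.score(X_train, y_train)  # R^2 on training data
test_score = lr.score(X_test, y_test)     # R^2 on held-out data
print("train R^2 = %.3f, test R^2 = %.3f" % (train_score, test_score))
```

With `noise=0` the target is an exact linear function of the feature, so both R^2 scores come out at essentially 1.0.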
for depth in max_depth_range:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, Y_train)
    score = clf.score(X_test, Y_test)
    accuracy.append(score)

Since the figure below shows that accuracy is highest once max_depth is greater than or equal to 3, setting max_depth = 3 gives the simplest model with the best accuracy. Choosing max_depth...
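The loop above can be made self-contained; here is a sketch using the iris dataset (the dataset and the depth range are assumptions, since the fragment does not show them):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=0)

max_depth_range = range(1, 8)
accuracy = []
for depth in max_depth_range:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, Y_train)
    accuracy.append(clf.score(X_test, Y_test))

# smallest depth that reaches the best test accuracy
best_depth = min(d for d, a in zip(max_depth_range, accuracy) if a == max(accuracy))
print("best max_depth:", best_depth)
```

Picking the smallest depth that attains the top score follows the text's reasoning: among equally accurate trees, the shallowest one is the simplest.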
lr.score(X_test, y_test)

Result: test accuracy is 92.5%, and the run takes 9 s, which is very fast.

5. SVM

model = SVC(kernel='linear', C=100)
clf = model.fit(X_train, y_train)
predict_result = clf.predict(X_test)
cm_plot(y_test, predict_result).show()  # visualize the confusion matrix
clf.score(X_test, y_test...
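A runnable version of this SVM step might look like the sketch below. The iris dataset is an assumption (the fragment does not name its data), and the blog's custom `cm_plot` confusion-matrix helper is left out because its implementation is not shown:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(kernel='linear', C=100)   # linear kernel, weak regularization (large C)
clf = model.fit(X_train, y_train)
predict_result = clf.predict(X_test)  # class labels for the test set
test_acc = clf.score(X_test, y_test)
print("test accuracy: %.3f" % test_acc)
```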
score_clf = clf.score(X_test, y_test)  # decision tree evaluation result
score_rfc = rfc.score(X_test, y_test)  # random forest evaluation result

# 3. Print both models' results
print("Single decision tree classification score: {}\n".format(score_clf),
      "Random forest classification score: {}\n".format(score_rfc))

This shows that the random forest's prediction accuracy is clearly higher than...
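To reproduce this comparison end to end, a sketch like the following works; the wine dataset and the estimator settings are assumptions, since the fragment only shows the scoring step:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One tree vs. an ensemble of 100 trees on the same split
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
rfc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

score_clf = clf.score(X_test, y_test)
score_rfc = rfc.score(X_test, y_test)
print("Single decision tree: {}\nRandom forest: {}".format(score_clf, score_rfc))
```

On most splits the ensemble scores at least as well as the single tree, which is the point the text is making.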
def score(self, X_test, y_test):
    right_count = 0
    for X, y in zip(X_test, y_test):
        label = self.predict(X)
        if label == y:
            right_count += 1
    return right_count / len(X_test)

clf = KNN(X_train, y_train)
...
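The score method above belongs to a hand-rolled KNN class that the fragment does not show in full. A minimal sketch of such a class is below; the distance-based predict logic is an assumption, since only score appears in the original:

```python
import numpy as np

class KNN:
    """Minimal k-nearest-neighbors classifier (illustrative sketch)."""
    def __init__(self, X_train, y_train, k=3):
        self.X_train = np.asarray(X_train, dtype=float)
        self.y_train = np.asarray(y_train)
        self.k = k

    def predict(self, x):
        # Euclidean distance from x to every training point
        dists = np.linalg.norm(self.X_train - np.asarray(x, dtype=float), axis=1)
        # majority vote among the k closest training labels
        nearest = self.y_train[np.argsort(dists)[:self.k]]
        values, counts = np.unique(nearest, return_counts=True)
        return values[np.argmax(counts)]

    def score(self, X_test, y_test):
        right_count = 0
        for X, y in zip(X_test, y_test):
            if self.predict(X) == y:
                right_count += 1
        return right_count / len(X_test)

clf = KNN([[0, 0], [0, 1], [10, 10], [10, 11]], [0, 0, 1, 1])
acc = clf.score([[0, 0.5], [10, 10.5]], [0, 1])
print(acc)
```

On this toy data the two clusters are far apart, so the classifier scores 1.0.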
result = clf.score(X_test, y_test)  # pass in the test set and read the accuracy from this interface

2. DecisionTreeClassifier

class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, ...
print("Test set accuracy: {:.2f}".format(clf.score(x_test, y_test)))

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 2, 8], axes):
    # fit returns the estimator itself, so instantiation and fitting can share one line
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)...
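Leaving the plotting aside, the core of this comparison is how the test score changes with n_neighbors. A self-contained sketch, with `make_moons` as an assumed stand-in for the fragment's unspecified dataset:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=200, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for n_neighbors in [1, 2, 8]:
    # fit returns the estimator itself, so instantiation and fitting chain on one line
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, y_train)
    scores[n_neighbors] = clf.score(X_test, y_test)
print(scores)
```

Small n_neighbors fits a jagged boundary (low bias, high variance); larger values smooth it out, which is what the three-panel figure in the original visualizes.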
result = clf.score(X_test, y_test)  # score the accuracy of the trained model

Classification tree: DecisionTreeClassifier

class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_featur...
clf = CategoricalNB(alpha=1)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # evaluate
print("Test Acc : %.3f" % acc)

Accuracy dropped a lot, to only 0.65. That is to be expected: this is a random dataset, so it does not carry much signal. Still, to satisfy my own curiosity, I wrote a for loop that simply sets the random seed from 1 to...
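The seed-sweeping loop the author describes might look like the sketch below. The data generation is an assumption (CategoricalNB needs non-negative integer-coded features, so random integer features stand in for the original random dataset), and the seed range 1 to 10 is illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB

accs = []
for seed in range(1, 11):
    rng = np.random.RandomState(seed)
    X = rng.randint(0, 4, size=(200, 5))  # categorical features coded 0..3
    y = rng.randint(0, 2, size=200)       # random binary labels: no real signal
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)
    clf = CategoricalNB(alpha=1)
    clf.fit(X_train, y_train)
    accs.append(clf.score(X_test, y_test))

mean_acc = float(np.mean(accs))
print("mean accuracy over 10 seeds: %.3f" % mean_acc)
```

Because the labels are random, accuracy should hover around chance level regardless of the seed, which is exactly why a single 0.65 reading says little on its own.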
clf = GaussianNB()
# train the model on the training set
clf.fit(X_train, y_train)
GaussianNB(priors=None)
# check the model's accuracy on the test set
clf.score(X_test, y_test)
# predict the class of one sample [5.9, 3.2, 5.1, 2.1]
clf.predict([[5.9, 3.2, 5.1, 2.1]])
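The four-feature sample suggests the iris dataset, though the fragment does not name it; under that assumption, a complete runnable version is:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB()
clf.fit(X_train, y_train)               # fit returns GaussianNB(priors=None) in older sklearn
acc = clf.score(X_test, y_test)         # accuracy on the held-out test set
pred = clf.predict([[5.9, 3.2, 5.1, 2.1]])  # classify one new flower measurement
print("accuracy: %.3f, predicted class: %d" % (acc, pred[0]))
```

The line `GaussianNB(priors=None)` in the original is not a statement to execute: it is the estimator repr that an interactive session echoes after `fit` returns.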