sklearn.metrics.normalized_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') — Normalized Mutual Information between two clusterings. Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score that scales the result between 0 (no mutual information) and 1 (perfect correlation). In this function, mutual information is normalized by a generalized mean of H(labels_true) and H(labels_pred)...
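As a quick sanity check of the behavior described above, the sketch below scores two partitions that are identical up to a relabeling of the cluster ids; NMI is permutation-invariant, so it scores this pair as 1. The `average_method` argument selects which generalized mean of the two entropies is used as the normalizer:

```python
from sklearn.metrics import normalized_mutual_info_score

# Two partitions that are identical up to renaming the cluster ids.
labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]

# NMI ignores the actual label values, so a relabeled partition still scores ~1.0.
score = normalized_mutual_info_score(labels_true, labels_pred)

# The normalizer is a generalized mean of H(labels_true) and H(labels_pred);
# 'arithmetic' is the default, and 'min', 'geometric', 'max' are also accepted.
score_min = normalized_mutual_info_score(labels_true, labels_pred,
                                         average_method='min')
print(score, score_min)
```

Because both partitions here have equal entropy, every choice of `average_method` gives the same normalizer and hence the same score.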
'adjusted_rand_score' → metrics.adjusted_rand_score
'completeness_score' → metrics.completeness_score
'fowlkes_mallows_score' → metrics.fowlkes_mallows_score
'homogeneity_score' → metrics.homogeneity_score
'mutual_info_score' → metrics.mutual_info_score
'normalized_mutual_info_score' → metrics.normalized_mutual_info_...
metrics.fowlkes_mallows_score(labels_true, ...)
metrics.homogeneity_completeness_v_measure(...)
metrics.homogeneity_score(labels_true, ...)
metrics.mutual_info_score(labels_true, ...)
metrics.normalized_mutual_info_score(...[, ...])
metrics.silhouette_score(X, labels[, ...])
metrics.silhouette_samples(...
With sklearn 0.20.0, here is a synthetic example that reproduces the problem:

metrics.normalized_mutual_info_score([0]*100001, [0]*100000 + [1])
metrics.normalized_mutual_info_score([0]*110001, [0]*110000 + [1])

I would expect both of these to return 0, but instead I got 7.999 and -7.999 respectively.
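The report can be reproduced with the snippet below (arrays taken verbatim from the report). The ±7.999 values were a symptom of a bug specific to scikit-learn 0.20.0; on versions where it is fixed, `labels_true` is a single constant cluster with zero entropy, so it carries no information about `labels_pred` and the score comes out as (numerically) 0:

```python
from sklearn.metrics import normalized_mutual_info_score

# labels_true is one constant cluster; labels_pred differs in exactly one element.
score_a = normalized_mutual_info_score([0] * 100001, [0] * 100000 + [1])
score_b = normalized_mutual_info_score([0] * 110001, [0] * 110000 + [1])

# With the 0.20.0 bug fixed, a constant labels_true has zero entropy, hence
# zero mutual information with anything, so NMI should be 0 in both cases.
print(score_a, score_b)
```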
..., 'neg_mean_squared_log_error', 'neg_median_absolute_error', 'normalized_mutual_info_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', 'v_measure_score']
All of mutual_info_score, adjusted_mutual_info_score and normalized_mutual_info_score are symmetric: swapping the arguments does not change the score. They can therefore be used as a consensus measure:

>>> metrics.adjusted_mutual_info_score(labels_pred, labels_true)
0.22504...

Perfect labelings score 1.0: ...
metrics.homogeneity_score(labels_true, ...) → homogeneity metric of a cluster labeling given a ground truth
metrics.mutual_info_score(labels_true, ...) → mutual information between two clusterings
metrics.normalized_mutual_info_score(...) → normalized mutual information between two clusterings
metrics.silhouette_score(X, labels[, ...]) → compute the mean Silhouette Coefficient of all samples...
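To show the functions from the list above side by side, here is a small end-to-end sketch (the toy data and KMeans settings are my own choices, not from the source): it clusters two well-separated blobs, then evaluates the result both against ground truth (homogeneity, MI, NMI) and without it (silhouette):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn import metrics

# Two well-separated blobs with known ground-truth labels (toy data).
X = np.array([[1, 1], [1, 2], [2, 1],
              [8, 8], [8, 9], [9, 8]], dtype=float)
labels_true = [0, 0, 0, 1, 1, 1]

labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised metrics compare the predicted clustering against ground truth.
hom = metrics.homogeneity_score(labels_true, labels_pred)
mi = metrics.mutual_info_score(labels_true, labels_pred)
nmi = metrics.normalized_mutual_info_score(labels_true, labels_pred)

# The silhouette score is unsupervised: it needs only X and the labels.
sil = metrics.silhouette_score(X, labels_pred)

print(hom, mi, nmi, sil)
```

On this data KMeans recovers the two blobs exactly, so homogeneity and NMI are 1.0 and the silhouette score is high; the unnormalized MI equals the entropy of the true labeling rather than 1.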