```python
import numpy as np
from sklearn import datasets, neighbors
from shap import KernelExplainer, DenseData, visualize

# train a k-nearest neighbors classifier to predict membership in class 0
iris = datasets.load_iris()
inds = np.arange(len(iris.target))
np.random.shuffle(inds)
knn = neighbors.KNeighborsClassifier()
knn.fit(iris.data, iris.target == 0)

# use Shap to explain a single prediction
background = DenseData(iris.data[inds[:100], :], iris.feature_names)  # name the features
explainer = KernelExplainer(knn.predict, background, nsamples=100)
x = iris.data[inds[102:103], :]
visualize(explainer.explain(x))
```

The above explanation shows three features each contributing to push the model output away from the base value (the average model output over the background dataset passed to the explainer) and toward the model's output for this sample.
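The `DenseData`/`visualize` interface above comes from an early version of shap. As a minimal sketch, assuming the current `shap.KernelExplainer` API (`shap_values` / `expected_value`), the same experiment can be used to verify the additivity property the explanation relies on: the per-feature SHAP values plus the base value sum to the model's output for `x`.

```python
import numpy as np
import shap
from sklearn import datasets, neighbors

# train the same classifier: does this sample belong to class 0?
# (a sketch; data split and nsamples mirror the snippet above)
iris = datasets.load_iris()
rng = np.random.RandomState(0)
inds = rng.permutation(len(iris.target))
knn = neighbors.KNeighborsClassifier()
knn.fit(iris.data, iris.target == 0)

# modern API: the background dataset is passed directly as an array
explainer = shap.KernelExplainer(knn.predict, iris.data[inds[:100], :])
x = iris.data[inds[102:103], :]
shap_values = explainer.shap_values(x, nsamples=100)

# local accuracy: base value + sum of SHAP values == model output for x
print(explainer.expected_value + shap_values.sum(), knn.predict(x)[0])
```

If the two printed values agree (up to sampling noise from `nsamples`), the force-plot reading above is exactly this decomposition, drawn feature by feature.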