Keywords: Clustering Algorithms, K-means, periodic attributes, Similarity measures. The K-means algorithm is very popular in the machine learning community due to its inherent simplicity. However, in its basic form, it is not suitable for problems that contain periodic attributes, such as oscillator phase, ...
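A common workaround for periodic attributes (a standard trick, not the method proposed in the snippet above) is to map each angle onto the unit circle via sine and cosine before clustering, so that Euclidean distance respects the wrap-around. A minimal sketch, assuming scikit-learn and synthetic phase data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative data: phases in radians that wrap around 2*pi (hypothetical, not from the paper).
rng = np.random.default_rng(0)
phases = np.concatenate([
    rng.normal(0.0, 0.2, 100) % (2 * np.pi),  # cluster straddling the 0 / 2*pi boundary
    rng.normal(np.pi, 0.2, 100),              # cluster far from the boundary
])

# Naive K-means on the raw angle treats 0.1 and 6.2 as far apart.
naive_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(phases.reshape(-1, 1))

# Embedding each angle as (cos, sin) lets Euclidean distance respect the periodicity.
embedded = np.column_stack([np.cos(phases), np.sin(phases)])
circular_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
```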
Keywords: K-ESNN, K-means, Spiking Neural Network. In this paper, a novel K-means evolving spiking neural network (K-ESNN) model for clustering problems is presented. K-means has been utilised to improve the original ESNN model. This model enhances the flexibility of the ESNN algorithm, producing better solutions and overcoming the disadv...
Numerous research efforts on data clustering have been made over the past decades, and many solutions to the clustering problem have been proposed. These methods primarily use complicated network approaches, \(K\)-means and its improved variants, metaheuristic algorithms, and othe...
We show that k-median clustering, as well as the k-means and Vector Quantization problems, satisfy these conditions. Our results apply to the ... (S. Ben-David, Machine Learning, 2007). On the continuous Weber and k-median problems: We give the first exact algo...
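For context, the continuous Weber problem with a single facility asks for the geometric median (1-median) of a point set. A standard way to approximate it, which is not the exact algorithm referenced in the snippet, is Weiszfeld's iteration; a minimal sketch with illustrative data:

```python
import numpy as np

def weiszfeld(points, iters=200, eps=1e-9):
    """Approximate the geometric median (1-median / Weber point) of `points`."""
    y = points.mean(axis=0)  # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)  # avoid division by zero when y lands on a data point
        w = 1.0 / d
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(weiszfeld(pts))  # pulled far less toward the outlier than the mean would be
```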
Given a set of points \(P \subset \mathbb{R}^{d}\), the k-means clustering problem is to find a set of k centers \(C = \{c_{1}, \ldots, c_{k}\}\), \(c_{i} \in \mathbb{R}^{d}\), such that the objective function \(\sum_{x \in P} \min_{c_{i} \in C} \lVert x - c_{i} \rVert^{2}\) is minimized ...
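As a concrete reading of that objective, the following sketch (plain NumPy, illustrative data) evaluates the k-means cost of a candidate center set:

```python
import numpy as np

def kmeans_cost(P, C):
    """Sum over points of the squared distance to the nearest center."""
    # Pairwise squared distances: |P| x |C|
    d2 = ((P[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

P = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
C = np.array([[0.05, 0.0], [5.05, 5.0]])
print(kmeans_cost(P, C))  # small cost: each point lies close to one of the two centers
```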
K-means Clustering, the Traveling Salesman Problem, and Graph Coloring: these problems have a property similar to those in NP-complete, namely that every problem in NP can be reduced to them. Because of that, they are NP-hard and are at least as hard as any other problem in NP. A problem can be both in NP and NP-hard, which ...
We present a general approach for designing approximation algorithms for a fundamental class of geometric clustering problems in arbitrary dimensions. More specifically, our approach leads to simple randomized algorithms for the k-means, k-median and discrete k-means problems that yield (1+ϵ) ...
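The (1+ϵ)-approximation algorithms themselves are technical. As a much simpler illustration of randomized seeding for k-means, here is a k-means++-style sampler, which gives only an O(log k)-approximation in expectation and is explicitly not the algorithm described in the snippet above:

```python
import numpy as np

def kmeanspp_seed(P, k, seed=0):
    """Pick k initial centers: each new center is drawn with probability
    proportional to its squared distance to the nearest center chosen so far."""
    rng = np.random.default_rng(seed)
    centers = [P[rng.integers(len(P))]]
    for _ in range(k - 1):
        d2 = ((P[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        probs = d2 / d2.sum()
        centers.append(P[rng.choice(len(P), p=probs)])
    return np.array(centers)

P = np.random.default_rng(1).normal(size=(200, 2))
print(kmeanspp_seed(P, 3))
```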
In this case, the K-means clustering algorithm is applied independently to the minority and majority class instances to identify clusters in the dataset. Subsequently, each cluster is oversampled such that all clusters of the same class have an equal number of instances and all classes have...
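A sketch of this kind of cluster-wise oversampling, as an illustrative reconstruction rather than the exact procedure from the snippet (the balancing rule and function name below are assumptions), using scikit-learn's KMeans and resampling with replacement:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_oversample(X, y, n_clusters=3, seed=0):
    """Cluster each class separately, then resample every cluster up to the size of
    the largest cluster overall (assumes each class has at least n_clusters samples)."""
    rng = np.random.default_rng(seed)
    per_cluster = []
    for cls in np.unique(y):
        Xc = X[y == cls]
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(Xc)
        per_cluster += [(cls, Xc[labels == c]) for c in range(n_clusters)]
    target = max(len(block) for _, block in per_cluster)  # equalise cluster sizes
    X_out, y_out = [], []
    for cls, block in per_cluster:
        idx = rng.choice(len(block), size=target, replace=True)
        X_out.append(block[idx])
        y_out.append(np.full(target, cls))
    return np.vstack(X_out), np.concatenate(y_out)

# Example: imbalanced two-class data (synthetic).
X = np.vstack([np.random.default_rng(2).normal(0, 1, (30, 2)),
               np.random.default_rng(3).normal(5, 1, (300, 2))])
y = np.array([0] * 30 + [1] * 300)
Xb, yb = cluster_oversample(X, y)
```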
Using a dimension-reduction type argument, we are able to greatly simplify earlier results on total sensitivity for the k-median/k-means clustering problems, and obtain positively-weighted epsilon-coresets for several variants of the (j,k)-projective clustering problem. We also extend an earlier ...
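The actual coreset constructions are involved. As a toy illustration of the underlying ingredient only, the sketch below importance-samples points in proportion to their cost under a rough clustering and assigns inverse-probability weights so the sampled cost is unbiased; on its own this does not yield the ε-coreset guarantees discussed above, and the function name is hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def sampled_weighted_subset(P, k, m, seed=0):
    """Importance-sample m points proportionally to their k-means cost under a rough
    solution; weights 1/(m * p_i) make the weighted sample cost unbiased for any fixed
    center set. Real sensitivity-based coresets add further terms to the probabilities."""
    rng = np.random.default_rng(seed)
    rough = KMeans(n_clusters=k, n_init=10, random_state=0).fit(P)
    d2 = ((P - rough.cluster_centers_[rough.labels_]) ** 2).sum(axis=1)
    probs = d2 / d2.sum()
    # Mix with uniform so points sitting exactly on a rough center can still be drawn.
    probs = 0.5 * probs + 0.5 / len(P)
    idx = rng.choice(len(P), size=m, p=probs, replace=True)
    weights = 1.0 / (m * probs[idx])
    return P[idx], weights
```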