In this paper, we propose DAC, Deep Autoencoder-based Clustering, a generalized data-driven framework for learning clustering representations with deep neural networks. Experimental results show that our approach can effectively boost the performance of the K-means clustering algorithm on a variety of datasets.
In this paper, we create a new autoencoder variant to efficiently capture the features of high-dimensional data, and propose an unsupervised deep hashing method for large-scale data retrieval, named Autoencoder-based Unsupervised Clustering and Hashing (AUCH). By constructing a hashing layer as...
However, most focus on clustering over a low-dimensional feature space. Transforming the data into more clustering-friendly representations: a deep version of k-means is based on learning a data representation and applying k-means in the embedded space. How to represent a cluster: a vector vs. an...
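The "learn a representation, then run k-means in the embedded space" recipe mentioned above can be sketched in plain NumPy. This is a toy stand-in, not any specific paper's method: a linear tied-weight autoencoder plays the role of the deep encoder, and all data, dimensions, and learning rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: two well-separated blobs in 20 dimensions.
centers = rng.normal(size=(2, 20))
X = np.vstack([c + 0.1 * rng.normal(size=(100, 20)) for c in centers])

# Linear autoencoder with tied weights, trained by plain gradient descent on
# the reconstruction loss ||X W W^T - X||^2.
d, k = X.shape[1], 2                     # input dim, embedding dim
W = 0.1 * rng.normal(size=(d, k))
for _ in range(2000):
    E = X @ W @ W.T - X                  # reconstruction error
    grad = 2 * (X.T @ E @ W + E.T @ X @ W) / len(X)
    W -= 1e-3 * grad
Z = X @ W                                # learned low-dimensional embedding

# Lloyd's k-means in the embedded space, seeded with one point per blob
# (a simplification for this demo; real code would use k-means++ or restarts).
C = Z[[0, 100]]
for _ in range(20):
    labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.array([Z[labels == j].mean(0) for j in range(2)])
```

The embedding step here is equivalent to (uncentered) PCA; a deep, nonlinear encoder replaces the single linear layer in the papers surveyed above, but the two-stage structure is the same.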
Another method, DrImpute, repeatedly identifies similar cells based on clustering and performs imputation multiple times by averaging the expression values from similar cells. Our approach, AutoImpute, is motivated by a similar problem [14] of sparse matrix imputation frequently encountered in recommender ...
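The cluster-then-average imputation idea described above can be sketched as follows. This is a minimal illustration, not DrImpute's actual algorithm: the data, the dropout model, and the seed-cell "clustering" step are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix (30 cells x 8 genes): three cell types, with
# dropout events encoded as zeros.
profiles = rng.uniform(1.0, 5.0, size=(3, 8))
true = np.repeat(profiles, 10, axis=0)
expr = true.copy()
expr[rng.random(expr.shape) < 0.2] = 0.0     # ~20% dropout

# Step 1: group similar cells. Here each cell joins the nearest of three
# seed cells -- a placeholder for a proper clustering step.
seeds = expr[[0, 10, 20]]
labels = np.argmin(((expr[:, None] - seeds[None]) ** 2).sum(-1), axis=1)

# Step 2: replace each zero with the mean of the *observed* (nonzero)
# values of the same gene among cells in the same cluster, falling back
# to the global gene mean when a cluster has no observation for a gene.
global_means = np.nanmean(np.where(expr > 0, expr, np.nan), axis=0)
imputed = expr.copy()
for c in np.unique(labels):
    block = expr[labels == c]
    col_means = np.nanmean(np.where(block > 0, block, np.nan), axis=0)
    col_means = np.where(np.isnan(col_means), global_means, col_means)
    imputed[labels == c] = np.where(block == 0, col_means, block)
```

Repeating steps 1 and 2 with different clusterings and averaging the results would approximate the "multiple times" aspect of the method described above.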
If the problem were a pixel-based one, you might remember that convolutional neural networks are more successful than conventional ones. However, we have tested them on labeled, supervised learning problems. The question is: can convolutional neural networks be adapted to unlabeled images for clustering?
One important characteristic of k-means is that it is a hard clustering method: it associates each point with one and only one cluster. A limitation of this approach is that there is no uncertainty measure or probability that tells us how strongly a data point is associated with a specific ...
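The hard-vs-soft distinction above can be made concrete with a small sketch. Below, hard assignment picks one cluster per point, while a fuzzy c-means style membership (with fuzzifier m = 2, so weights reduce to normalised inverse squared distances) quantifies how strongly each point belongs to each cluster; the data and fixed centers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two 2-D clusters plus one point exactly midway between them.
X = np.vstack([
    rng.normal([0.0, 0.0], 0.2, size=(50, 2)),
    rng.normal([4.0, 0.0], 0.2, size=(50, 2)),
    [[2.0, 0.0]],                        # ambiguous point
])
centers = np.array([[0.0, 0.0], [4.0, 0.0]])

# Squared distances from every point to every center.
d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)

# Hard assignment (k-means style): exactly one cluster per point,
# no matter how ambiguous the point is.
hard = np.argmin(d2, axis=1)

# Soft assignment (fuzzy c-means, m = 2):
#   u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
# which for m = 2 is just inverse squared distance, normalised per point.
inv = 1.0 / np.maximum(d2, 1e-12)
u = inv / inv.sum(axis=1, keepdims=True)
```

For the midway point, `hard` still commits to a single cluster, while `u` reports a 50/50 membership, which is exactly the uncertainty information the hard method discards.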
(VAMB), a program that uses deep variational autoencoders to encode sequence co-abundance and k-mer distribution information before clustering. We show that a variational autoencoder is able to integrate these two distinct data types without any prior knowledge of the datasets. VAMB outperforms ...
Each autoencoder learns to represent input sequences as lower-dimensional, fixed-size vectors. This can be useful for finding patterns among sequences, clustering sequences, or converting sequences into inputs for other algorithms.
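Before any autoencoder can consume a variable-length sequence, it is common to featurize it as a fixed-size vector; the classic choice for DNA, and the one the k-mer distribution mentioned above refers to, is a normalised k-mer count profile. The sketch below is that featurization only (not any particular model's encoder), and the example sequence is an illustrative assumption.

```python
from itertools import product

import numpy as np

def kmer_profile(seq: str, k: int = 2) -> np.ndarray:
    """Map a DNA sequence to a fixed-size vector of k-mer frequencies."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i : i + k]
        if km in index:                  # skip k-mers with ambiguous bases ('N')
            counts[index[km]] += 1
    total = counts.sum()
    return counts / total if total else counts

# Every sequence, whatever its length, maps to the same 4^k dimensions,
# so profiles can be stacked into a matrix and fed to an autoencoder.
v = kmer_profile("ACGTACGT", k=2)        # 16-dimensional, sums to 1
```

With k = 4 (the common choice for metagenomic binning) the vector has 256 dimensions; the autoencoder then compresses such profiles into the lower-dimensional embeddings used for clustering.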
Fuzzy C-means clustering algorithm based on stacked sparse autoencoders. Computer Engineering and Applications, 2015, 51 (4): 154-157. Abstract: In order to address the sensitivity of the fuzzy C-means clustering algorithm to outliers and to randomly initialized cluster centers, the stacked ...