Chunfeng Song, Feng Liu, Yongzhen Huang, Liang Wang, and Tieniu Tan. Auto-encoder based data clustering. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (CIARP 2013), Part I, 2013: 117-124.
In this paper, we propose a multi-view clustering algorithm based on multiple auto-encoders, named MVC-MAE (see Fig. 1). Specifically, MVC-MAE first employs multiple auto-encoders to capture the nonlinear structure information in multi-view data and derive low-dimensional representations of the data...
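A minimal sketch of this per-view idea, assuming PyTorch, MSE reconstruction, two toy views, and k-means on the concatenated codes (none of which are specified in the excerpt); it is an illustration, not the MVC-MAE implementation:

```python
# One auto-encoder per view; concatenated codes are clustered with k-means.
# Layer sizes, optimizer settings, and the toy two-view data are assumptions.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ViewAutoEncoder(nn.Module):
    def __init__(self, in_dim, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def fit_multi_view(views, epochs=50, lr=1e-3):
    """Train one auto-encoder per view with a reconstruction loss."""
    aes = [ViewAutoEncoder(v.shape[1]) for v in views]
    params = [p for ae in aes for p in ae.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum(loss_fn(ae(v)[1], v) for ae, v in zip(aes, views))
        loss.backward()
        opt.step()
    return aes

if __name__ == "__main__":
    torch.manual_seed(0)
    views = [torch.randn(200, 50), torch.randn(200, 30)]   # two toy views
    aes = fit_multi_view(views)
    with torch.no_grad():
        codes = torch.cat([ae(v)[0] for ae, v in zip(aes, views)], dim=1)
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(codes.numpy())
    print(labels[:10])
```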
The rest of the expressed genes were grouped into bins based on their median expression. For each set of genes belonging to a bin (determined using the bulk data), the fraction of zeros (number of zeros in the set ÷ total count of the set) in the imputed single-cell expression data is reported on...
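A small illustration of that statistic, assuming a genes-by-cells imputed matrix, bulk medians as a pandas Series, and quantile bins via pd.qcut; "total count of the set" is interpreted here as the total number of entries in the bin, which is one reading of the excerpt:

```python
# Fraction of zero entries in the imputed single-cell matrix, per bulk-expression bin.
import numpy as np
import pandas as pd

def zero_fraction_per_bin(bulk_median, sc_imputed, n_bins=10):
    """bulk_median: Series indexed by gene; sc_imputed: DataFrame of genes x cells."""
    bins = pd.qcut(bulk_median, q=n_bins, labels=False, duplicates="drop")
    out = {}
    for b in sorted(bins.dropna().unique()):
        genes = bins.index[bins == b]
        block = sc_imputed.loc[genes].to_numpy()
        out[b] = (block == 0).sum() / block.size  # zeros ÷ total entries in the bin
    return pd.Series(out, name="zero_fraction")

# toy example with random data standing in for real bulk and single-cell matrices
rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(100)]
bulk = pd.Series(rng.lognormal(size=100), index=genes)
sc = pd.DataFrame(rng.poisson(0.5, size=(100, 50)), index=genes)
print(zero_fraction_per_bin(bulk, sc, n_bins=5))
```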
However, most methods focus on clustering over a low-dimensional feature space, i.e., they transform the data into more clustering-friendly representations. A deep version of k-means is based on learning a data representation and applying k-means in the embedded space. How to represent a cluster: a vector vs. ...
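A rough sketch of this deep-k-means idea, assuming a single data view, MSE reconstruction, hard nearest-centre assignments, and a weighting term lam; actual deep-clustering methods differ in these details:

```python
# Jointly learn an embedding and cluster centres: reconstruction loss plus the
# distance of each embedded point to its nearest centre.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim, code_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

def deep_kmeans(x, k=3, code_dim=10, epochs=100, lam=0.1):
    ae = AE(x.shape[1], code_dim)
    centers = torch.randn(k, code_dim, requires_grad=True)
    opt = torch.optim.Adam(list(ae.parameters()) + [centers], lr=1e-3)
    for _ in range(epochs):
        z = ae.enc(x)
        recon = ae.dec(z)
        d = torch.cdist(z, centers)                  # distances to cluster centres
        assign = d.argmin(dim=1)                     # hard k-means-style assignment
        loss = nn.functional.mse_loss(recon, x) + lam * d.gather(1, assign[:, None]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae, centers, assign

x = torch.randn(300, 20)
_, _, labels = deep_kmeans(x)
print(labels[:10])
```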
Thus, it would be of great interest to be able to disclose biological information belonging to cell subpopulations, which can be defined by clustering analysis of scRNA-seq data. In this manuscript, we report a tool that we developed for the functional mining of single-cell clusters based on ...
So, we’ve integrated both convolutional neural network and autoencoder ideas for information reduction on image-based data, as a pre-processing step for clustering. In this way, we can apply k-means clustering with 98 features instead of 784 features. This could speed up the labeling process...
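A hedged sketch of such a pipeline, assuming a PyTorch convolutional auto-encoder that compresses 28x28 = 784 pixels to a 98-dimensional code (2 channels of 7x7) before k-means; the architecture and training details are illustrative, not the original work's:

```python
# Convolutional auto-encoder as a pre-processing step: cluster 98-d codes, not raw pixels.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(8, 2, 3, stride=2, padding=1),              # 14x14 -> 7x7, 2*7*7 = 98
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2, 8, 2, stride=2), nn.ReLU(),     # 7x7 -> 14x14
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),  # 14x14 -> 28x28
        )

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

# toy batch standing in for MNIST (256 single-channel 28x28 images)
x = torch.rand(256, 1, 28, 28)
model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                       # short demo training loop
    _, recon = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

codes = model.enc(x).detach().flatten(1)  # shape (256, 98)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(codes.numpy())
print(codes.shape, labels[:10])
```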
4.5 Semi-supervised Learning: What Happens when Labeled Data are Rare. The proportion of labeled data in the training set is varied to observe the SVM classification results obtained using the codewords produced by FoldingNet.
4.6 Effectiveness of the Folding-Based Decoder. A comparison with a fully-connected decoder demonstrates the effectiveness of FoldingNet's folding-based decoder.
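A toy version of the 4.5-style experiment, using synthetic stand-ins for the FoldingNet codewords and labels and a linear SVM from scikit-learn; the real evaluation uses actual codewords and dataset splits:

```python
# Train an SVM on codeword features while varying the labeled fraction of the training set.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
codewords = rng.normal(size=(2000, 512))   # stand-in for FoldingNet codewords
labels = rng.integers(0, 16, size=2000)    # stand-in class labels

X_tr, X_te, y_tr, y_te = train_test_split(codewords, labels, test_size=0.3, random_state=0)
for frac in (0.01, 0.05, 0.2, 1.0):
    n = max(16, int(frac * len(X_tr)))     # number of labeled samples actually used
    clf = LinearSVC(max_iter=5000).fit(X_tr[:n], y_tr[:n])
    print(f"labeled fraction {frac:.2f}: accuracy {clf.score(X_te, y_te):.3f}")
```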
Energy-based models; exact-likelihood models based on normalizing flows. Application 1: extend to non-Euclidean space (flow on the manifold). Discrete distribution / discrete random variable: modeling discrete data. Two ways to handle modeling discrete data: 1) the change-of-variables formula; 2) variationa...
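For reference, the change-of-variables formula in 1) is the standard density identity behind normalizing flows: for an invertible map $f$ with base density $p_Z$ and $z = f(x)$,

$$p_X(x) = p_Z\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right|,$$

so the log-likelihood is $\log p_Z(f(x))$ plus the log absolute determinant of the Jacobian of $f$ at $x$.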
CAIR addresses key gaps in the literature by integrating clustering techniques to group similar data points and using autoencoders to reduce dimensionality while retaining critical boundary instances. Unlike conventional methods that focus primarily on either boundary or inner instances, CAIR effectively ...
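A loose illustration of the general cluster-then-keep-boundary-instances idea, not CAIR's actual algorithm: PCA stands in for the auto-encoder embedding, and the farthest-from-centre selection rule and keep fraction are my assumptions:

```python
# Cluster a low-dimensional embedding and keep the instances farthest from their
# cluster centre as "boundary" instances for a reduced dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA   # stand-in for a learned auto-encoder embedding

def reduce_instances(X, n_clusters=5, keep_frac=0.2):
    Z = PCA(n_components=min(10, X.shape[1])).fit_transform(X)   # low-dimensional codes
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(Z)
    dist = np.linalg.norm(Z - km.cluster_centers_[km.labels_], axis=1)
    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        k = max(1, int(keep_frac * len(idx)))
        keep.extend(idx[np.argsort(dist[idx])[-k:]])   # farthest from centre = boundary-like
    return np.sort(np.array(keep))

X = np.random.default_rng(0).normal(size=(500, 20))
selected = reduce_instances(X)
print(f"kept {len(selected)} of {len(X)} instances")
```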