To tackle this problem, we propose manifold learning with structured subspace for multi-label feature selection. Specifically, we first uncover a latent subspace for a more compact and accurate data representation, and take advantage of the subspace to explore the correlations among instances. Then,...
It is well known that exploiting label correlations is important to multi-label learning. Existing approaches assume either that the label correlations are global and shared by all instances, or that they are local and shared only by a subset of the data. In fact, in the real-worl...
The proposed algorithm, cost-sensitive label embedding with multidimensional scaling (CLEMS), approximates the cost information with the distances of the embedded vectors by using the classic multidimensional scaling approach for manifold learning. CLEMS is able to deal with both symmetric and asymmetric...
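The following is a minimal sketch (not the authors' CLEMS implementation) of the core idea described above: embed label vectors so that Euclidean distances between embedded points approximate a precomputed, symmetric cost matrix, using classic multidimensional scaling from scikit-learn. The Hamming cost and the label sets below are hypothetical illustrations; handling asymmetric costs, as CLEMS does, would require additional machinery not shown here.

```python
# A hedged sketch of cost-sensitive label embedding via classic MDS.
import numpy as np
from sklearn.manifold import MDS

def hamming_cost(Y):
    """Pairwise Hamming costs between binary label vectors (rows of Y)."""
    return np.array([[np.mean(a != b) for b in Y] for a in Y])

# Hypothetical label sets: each row is one distinct label combination.
Y = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]])

cost = hamming_cost(Y)  # symmetric cost matrix between label sets

# Metric MDS finds embedded vectors whose pairwise distances mimic the costs.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
Z = mds.fit_transform(cost)  # one embedded vector per label set
print(Z)
```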
A geometric framework for transfer learning using manifold alignment. Many machine learning problems involve dealing with a large amount of high-dimensional data across diverse domains. In addition, annotating or labeling the... C Wang - Dissertations & Theses - Gradworks. Cited by: 23. Published: 2010.
Furthermore, most embedded MFS methods consider only feature similarity through manifold learning concepts, neglecting the impact of feature redundancy and leading to suboptimal performance. In addition, their weight matrix is usually derived from a single perspective of the loss function. To address...
Semi-supervised multi-label feature selection with adaptive structure learning and manifold learning. High-dimensional multi-label data brings challenges and difficulties to multi-label learning. Therefore, feature selection as an effective dimension reduct... S Lv, S Shi, H Wang, ... - Knowledge-Based ...
K-fold cross-validation is used to mitigate overfitting in many machine learning problems, especially those with small training sets. This is because using K-fold validation to evaluate a model gives a better estimate of how the model's results will generalize to an independent data set, ...
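As a brief illustration of the procedure just described, the sketch below runs 5-fold cross-validation with scikit-learn; the dataset and classifier are placeholders chosen only for the example.

```python
# K-fold cross-validation sketch: each fold is held out once for validation
# while the remaining folds train the model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)

print(scores)          # one accuracy estimate per fold
print(scores.mean())   # averaged estimate of generalization performance
```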
The success of semi-supervised learning crucially relies on scalability to the huge amount of unlabelled data needed to capture the underlying m... R Petegrosso, W Zhang, Z Li, ... Cited by: 3. Published: 2017.
Robust Ranking Kernel Support Vector Machine via Manifold Regularized Matrix Fac...
LE can be considered to be a manifold learning method. Given $n$ data points $\{x_i\}_{i=1}^{n}$ in a high-dimensional space $\mathbb{R}^{M}$, the original LE maps them into points $\{z_i\}_{i=1}^{n}$ in a low-dimensional space $\mathbb{R}^{m}$ ($m \ll M$) on the basis of the neighboring relationship represented by $\{w_{ij}\,(\geq 0)\}_{i,j=\ldots}$
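A minimal sketch of this mapping is given below, under the common assumptions that the neighborhood weights $w_{ij}$ come from a symmetrized kNN graph with heat-kernel weights and that the embedding is obtained from the generalized eigenproblem $Lz = \lambda Dz$ of the graph Laplacian, discarding the trivial constant eigenvector. The neighborhood size, kernel width, and toy data are illustrative choices, not prescribed by the text.

```python
# A hedged sketch of Laplacian Eigenmaps (LE): nonnegative neighbor weights
# w_ij define a graph, and the low-dimensional points z_i are eigenvectors
# of the generalized problem L z = lambda D z.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5, sigma=1.0):
    # Symmetric kNN graph with heat-kernel weights w_ij = exp(-||xi - xj||^2 / sigma).
    dist = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    W = np.exp(-dist ** 2 / sigma) * (dist > 0)
    W = np.maximum(W, W.T)            # symmetrize the weight matrix
    D = np.diag(W.sum(axis=1))        # degree matrix
    L = D - W                         # graph Laplacian
    vals, vecs = eigh(L, D)           # generalized eigenproblem, ascending eigenvalues
    return vecs[:, 1:n_components + 1]  # skip the constant eigenvector

# Toy high-dimensional data (hypothetical), mapped to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
Z = laplacian_eigenmaps(X, n_components=2)
print(Z.shape)  # (50, 2)
```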
Feature selection, as an important pre-processing technique, can efficiently mitigate the issue of “the curse of dimensionality” by selecting discriminative features; in multi-label learning especially, a discriminative feature subset can improve classification accuracy. The existing feature selection ...