H. Fan, Q. Song, and S. B. Shrestha, "Online learning with kernel regularized least mean square algorithms," Knowledge-Based Systems, vol. 59, pp. 21-32, 2014. [doi: 10.1016/j.knosys.2014.02.005]
Evaluations of Kernel HAP Algorithms. We also conduct several experiments to test the potential of the kernel HAP algorithms in zero-shot learning (ZSL). The Gaussian kernel and the Cauchy kernel are applied. Figure 4 shows the ZSL performance of these kernel HAP algorithms on the USAA and ...
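For reference, the two kernels named above can be written as follows. This is a minimal plain-Python sketch; the bandwidth parameter `sigma` and the function names are illustrative assumptions, not taken from the paper:

```python
import math

def sq_dist(x, y):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / (2 * sigma^2))."""
    return math.exp(-sq_dist(x, y) / (2.0 * sigma ** 2))

def cauchy_kernel(x, y, sigma=1.0):
    """Cauchy kernel: 1 / (1 + ||x - y||^2 / sigma^2)."""
    return 1.0 / (1.0 + sq_dist(x, y) / sigma ** 2)
```

Both kernels equal 1 at identical inputs and decay with distance; the Cauchy kernel has heavier tails than the Gaussian, which is the usual motivation for trying both.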
Several algorithms based on the least-mean square (LMS) (Gu et al. 2009; Chen et al. 2009) and the recursive least squares (RLS) (Babadi et al. 2010; Angelosante et al. 2010; Eksioglu 2011; Eksioglu and Tanc 2011) techniques have been reported with different penalty or shrinkage ...
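A widely cited instance of such a penalized LMS variant is the zero-attracting LMS, which adds an l1 shrinkage term to the standard LMS update. The sketch below is a generic illustration under assumed parameter names (`mu` for the step size, `rho` for the zero-attraction strength), not the exact update rule of any one of the cited papers:

```python
def sign(v):
    """Sign function: -1, 0, or +1."""
    return (v > 0) - (v < 0)

def za_lms_step(w, x, d, mu=0.1, rho=1e-3):
    """One zero-attracting LMS update: an LMS gradient step plus l1 shrinkage.

    w: current filter taps, x: input vector, d: desired output.
    Returns the updated taps and the instantaneous prediction error.
    """
    e = d - sum(wi * xi for wi, xi in zip(w, x))  # prediction error
    w_new = [wi + mu * e * xi - rho * sign(wi) for wi, xi in zip(w, x)]
    return w_new, e
```

Over repeated updates, the shrinkage term drives the taps of inactive coefficients toward zero, which is the sparsity effect these penalized LMS/RLS schemes exploit.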
Apart from two or three exceptions, all of the gene sets identified by the various algorithms at a nominal unadjusted p-value of 0.05, including the 10 gene sets determined by GSEA-limma, are a subset of the 50 gene sets located with RCMAT. These results also largely encompass the results...
Kriegel, H.P., Kröger, P., Schubert, E., Zimek, A.: A general framework for increasing the robustness of PCA-based correlation clustering algorithms. In: Scientific and Statistical Database Management. Lecture Notes in Computer Science, vol. 5069, pp. 418–435 (2008) Kwak, N.: Princip...
regularized least squares method [20]. Their experiments on publicly available GEP datasets have shown that MSRC is efficient for cancer classification and can achieve higher accuracy than many existing representative schemes, such as SVM, SRC, and the least absolute shrinkage and selection operator (LASSO) ...
Background. Today we introduce two Stata packages: LASSOPACK (including lasso2, cvlasso & rlasso) implements penalized regression methods: LASSO, elastic net, ridge, square-root LASSO, adaptive LASSO. It uses fast path-wise coordinate descent algorithms (Friedman et al., 2007). three ...
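To illustrate the path-wise coordinate descent idea that these packages build on, here is a minimal plain-Python sketch of cyclic coordinate descent with soft-thresholding for a single penalty value `lam`. It is an illustration of the technique only, not the lasso2 implementation (which adds standardization, active-set tricks, and a full penalty path):

```python
def soft_threshold(z, g):
    """Soft-thresholding operator S(z, g) = sign(z) * max(|z| - g, 0)."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_w 1/(2n)||y - Xw||^2 + lam*||w||_1.

    X is a list of rows; each coordinate update solves its 1-D
    subproblem exactly via soft-thresholding.
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / zj if zj > 0 else 0.0
    return w
```

With `lam = 0` this reduces to ordinary least squares; as `lam` grows, coefficients are shrunk and eventually set exactly to zero, which is what makes the path-wise sweep over a grid of penalties cheap.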
Besides the above prediction algorithms, some novel algorithms based on maximum likelihood [7,9,10] have been proposed. For the hierarchical structure of networks, Clauset et al. [9] proposed a model to infer hierarchical structure from a network and applied it to solve the link prediction problem. ...
Finally, we use the example of SAG for solving least squares regression to demonstrate the benefit of data preconditioning. A similar analysis carries over to other variance-reduced stochastic optimization algorithms (Johnson and Zhang 2013; Shalev-Shwartz and Zhang 2013). When \(\lambda =1/n\) the ...
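For concreteness, a bare-bones SAG iteration for unregularized least squares looks like the following. This is a plain-Python sketch of the generic algorithm (the step size and names are assumptions) without the data preconditioning discussed in the text:

```python
import random

def sag_least_squares(X, y, step=0.03, n_epochs=2000, seed=0):
    """Stochastic average gradient (SAG) for min_w 1/(2n) sum_i (x_i.w - y_i)^2.

    Keeps one stored gradient per example; each step refreshes one of
    them and moves along the running average of all stored gradients.
    """
    random.seed(seed)
    n, p = len(X), len(X[0])
    w = [0.0] * p
    grads = [[0.0] * p for _ in range(n)]  # per-example gradient memory
    g_sum = [0.0] * p                      # running sum of stored gradients
    for _ in range(n_epochs * n):
        i = random.randrange(n)
        e = sum(wk * xk for wk, xk in zip(w, X[i])) - y[i]
        new_g = [e * xk for xk in X[i]]    # gradient of the i-th loss term
        for k in range(p):
            g_sum[k] += new_g[k] - grads[i][k]
            grads[i][k] = new_g[k]
        for k in range(p):
            w[k] -= step * g_sum[k] / n
    return w
```

The memory of past gradients removes the variance of plain SGD at the cost of O(np) storage; preconditioning the data, as the text argues, improves the conditioning constants that govern the step size and convergence rate.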
• MvSL methods outperform the other algorithms in all cases. On the one hand, MvSL methods do not require the datasets to obey a Gaussian distribution; on the other hand, MvSL methods exploit partial label information to construct a graph embedding framework, which encourages items of the ...