We propose a fully online learning algorithm for classification, generated from Tikhonov regularization schemes associated with general strongly convex loss functions. For such a fully online algorithm, the regularization parameter changes at each learning step. This essentially differs from the ...
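To make the update concrete, the sketch below shows one step of online gradient descent with a Tikhonov (ridge) term whose coefficient changes every round. It is only an illustration under assumptions: the logistic loss and the polynomial schedules for the step size and the regularization parameter are choices made here, not necessarily those of the paper.

    import numpy as np

    def online_tikhonov_sgd(stream, dim, eta0=1.0, alpha=0.5, lam0=1.0, beta=0.5):
        """One-pass online update with a step-dependent regularization parameter.

        Illustrative sketch only: the logistic loss and the schedules
        eta_t = eta0 * t**(-alpha), lam_t = lam0 * t**(-beta) are assumptions,
        not the schedules analyzed in the paper.
        """
        w = np.zeros(dim)
        for t, (x, y) in enumerate(stream, start=1):     # labels y in {-1, +1}
            eta_t = eta0 * t ** (-alpha)                 # step size for this round
            lam_t = lam0 * t ** (-beta)                  # regularization changes every round
            margin = y * np.dot(w, x)
            grad_loss = -y * x / (1.0 + np.exp(margin))  # gradient of the logistic loss
            w -= eta_t * (grad_loss + lam_t * w)         # Tikhonov-regularized gradient step
        return w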
We investigate the problem of online convex optimization with unknown delays, in which the feedback of a decision arrives with an arbitrary delay. Previous
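A minimal way to picture such an algorithm is online gradient descent that applies each gradient whenever its (arbitrarily delayed) feedback happens to arrive. The sketch below assumes a callback play_and_observe(t, x) supplied by the environment, a fixed step size, and an l2-ball feasible set; none of these details come from the abstract above.

    import numpy as np

    def delayed_ogd(rounds, dim, play_and_observe, eta=0.1, radius=1.0):
        """Online gradient descent when each gradient arrives after an unknown delay.

        Sketch under assumptions: play_and_observe(t, x) plays x at round t and
        returns the list of (gradient, sent_at_round) pairs whose feedback arrives
        now; the step size is constant and the feasible set is an l2 ball.
        """
        x = np.zeros(dim)
        for t in range(1, rounds + 1):
            arrived = play_and_observe(t, x)        # may contain feedback for old rounds
            for grad, _sent_at in arrived:
                x = x - eta * grad                  # apply each gradient on arrival
                norm = np.linalg.norm(x)
                if norm > radius:                   # project back onto the ball
                    x *= radius / norm
        return x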
Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization (Extended Abstract). Ohad Shamir, Microsoft Research New England, ohadsh@microsoft.com. Abstract: Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongl...
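The recipe this line of work revolves around can be sketched as projected SGD with step size 1/(lam*t) for a lam-strongly-convex objective, combined with averaging of only the last iterates (suffix averaging). The code below is an illustrative sketch: stoch_grad, the l2-ball projection, and the suffix fraction are all assumed here rather than taken from the paper.

    import numpy as np

    def sgd_suffix_averaging(stoch_grad, dim, T, lam, suffix_frac=0.5, radius=1.0):
        """Projected SGD with step size 1/(lam*t) plus suffix averaging.

        Sketch of the general recipe for lam-strongly-convex objectives:
        stoch_grad(w) is assumed to return an unbiased stochastic gradient,
        and the l2-ball projection and suffix_frac are illustrative choices.
        """
        w = np.zeros(dim)
        suffix_start = int((1.0 - suffix_frac) * T) + 1
        avg, count = np.zeros(dim), 0
        for t in range(1, T + 1):
            w = w - stoch_grad(w) / (lam * t)       # classic 1/(lam*t) step size
            norm = np.linalg.norm(w)
            if norm > radius:
                w *= radius / norm                  # projection onto the feasible ball
            if t >= suffix_start:                   # average only the tail iterates
                avg += w
                count += 1
        return avg / count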
Non-convex mixed-integer nonlinear programming: a survey, Surv. Oper. Res. Manage. Sci. (2012); E. Carrizosa et al., Multi-group support vector machines with measurement costs: a biobjective approach, Discrete Appl. Math. (2008); E. Carrizosa et al., A nested heuristic for parameter tuning in su...
As an application of Theorem 1.3 we prove a result on extension of holomorphic functions from complex submanifolds. Let U be a relatively compact open subset of a holomorphically convex domain V ⊂⊂ N containing CM and Y ⊂ V \ CM be a closed complex submanifold of V. We set X := Y ∩ U. Consi...
[26], [27], although we use it here for a nonconvex optimization. Applied to the LSP and MPG 0-mean partition problems, it gives a subexponential 2^{O(√(n log n))} expected running time bound, where n is the number of MAX vertices. The essential properties needed for the analysis of [26], ...
We consider non-stochastic bandit convex optimization with strongly-convex and smooth loss functions. For this problem, Hazan and Levy have proposed an algorithm with a regret bound of Õ(d√T) given access to an O(d)-self-concordant barrier over the feasible region, where d and T stand ...
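For intuition about how bandit (value-only) feedback can drive gradient descent at all, the sketch below uses a one-point spherical gradient estimate in the style of Flaxman, Kalai, and McMahan; it is a deliberately simplified stand-in and not the barrier-based method of Hazan and Levy discussed above. The loss callback and the parameters delta, eta, and radius are assumptions.

    import numpy as np

    def one_point_bandit_gd(loss, dim, T, delta=0.1, eta=0.01, radius=1.0):
        """Bandit gradient descent with a one-point spherical gradient estimate.

        Simplified stand-in (Flaxman-Kalai-McMahan style), not the barrier-based
        method of Hazan and Levy; loss(y) returns only a scalar value (bandit
        feedback), and delta, eta, radius are illustrative parameters.
        """
        x = np.zeros(dim)
        for _ in range(T):
            u = np.random.randn(dim)
            u /= np.linalg.norm(u)                  # uniform direction on the unit sphere
            y = x + delta * u                       # perturbed point that is actually played
            value = loss(y)                         # scalar bandit feedback only
            grad_est = (dim / delta) * value * u    # one-point gradient estimate
            x = x - eta * grad_est
            norm = np.linalg.norm(x)
            if norm > (1.0 - delta) * radius:       # stay inside the shrunk ball so that
                x *= (1.0 - delta) * radius / norm  # the played point y remains feasible
        return x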
Adding a small quadratic regularization is a common device used to tackle non-strongly convex problems; however, it may cause loss of sparsity of solutions or weaken the performance of the algorithms. Avoiding this device, we propose an accelerated randomized mirror descent method for solving this ...
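As background for the mirror-descent machinery mentioned above, the sketch below is plain (non-accelerated, non-randomized) mirror descent on the probability simplex with the negative-entropy mirror map, i.e., exponentiated-gradient updates. It is not the proposed accelerated randomized method, and grad(x) returning a (sub)gradient is an assumption.

    import numpy as np

    def entropic_mirror_descent(grad, dim, T, eta=0.1):
        """Plain mirror descent on the probability simplex (exponentiated gradient).

        Background sketch only, not the accelerated randomized method proposed
        above; grad(x) is assumed to return a (sub)gradient of the objective at x.
        """
        x = np.full(dim, 1.0 / dim)                 # start at the center of the simplex
        for _ in range(T):
            g = grad(x)
            g = g - g.max()                         # shift for numerical stability;
                                                    # cancels after normalization
            x = x * np.exp(-eta * g)                # multiplicative (entropic) update
            x /= x.sum()                            # renormalize onto the simplex
        return x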
Next denote by \(\mathcal{K}\) the subclass of \(\mathcal{S}\) consisting of functions which are close-to-convex, i.e., functions \(f\) which map \(\mathbb{D}\) onto a close-to-convex domain, if, and only if, there exist \(0 \le \delta \le 2\pi\) and \(g \in\) ...
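Since the defining condition is cut off above, the following is one standard textbook formulation of close-to-convexity (Kaplan's criterion), given only for orientation; the truncated text may use a different but equivalent variant, and the requirement that \(g\) be starlike is an assumption here.

\[
  f \in \mathcal{K} \iff \exists\, \delta \in [0, 2\pi),\ \exists\, g \text{ starlike on } \mathbb{D}:\quad
  \operatorname{Re}\!\left(e^{i\delta}\,\frac{z f'(z)}{g(z)}\right) > 0 \quad \text{for all } z \in \mathbb{D}.
\]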