A. Heydari and S. N. Balakrishnan, "Global optimality of approximate dynamic programming and its use in non-convex function minimization," Applied Soft Computing, vol. 24, pp. 291-303, 2014.
Definition 1 (Convex function): a function f is called convex if, for all x, y, (0.1) f(y) ≥ f(x) + ⟨∇f(x), y − x⟩. Definition 2 (Strong convexity): a function f is called μ-strongly convex if, for all x, y, (0.2) f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (μ/2)‖y − x‖². (Do not underestimate the extra ...
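The two inequalities above can be checked numerically. A minimal sketch, using the toy choice f(x) = ‖x‖² (which has gradient 2x and is 2-strongly convex, i.e. μ = 2); the function names are mine:

```python
import numpy as np

# Toy check of (0.1) and (0.2) for f(x) = ||x||^2, which is
# 2-strongly convex: gradient is 2x, so mu = 2.
def f(x):
    return np.dot(x, x)

def grad_f(x):
    return 2.0 * x

mu = 2.0
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lower_convex = f(x) + grad_f(x) @ (y - x)                    # RHS of (0.1)
    lower_strong = lower_convex + mu / 2 * np.dot(y - x, y - x)  # RHS of (0.2)
    assert f(y) >= lower_convex - 1e-9
    assert f(y) >= lower_strong - 1e-9
print("both inequalities hold for f(x) = ||x||^2")
```

For this particular f the strong-convexity bound (0.2) holds with equality, which is exactly why the extra quadratic term matters: it is the largest curvature term the function can globally support.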
I have looked a little into black-box function optimization, which can also be seen as a branch of nonconvex optimization. Generally, such black-box fu...
The function φ(·) is referred to as the penalty function (or regularization function). If φ(x) = λ|x|, then (4) is the same as (1). For sparse signal processing, φ(x) should be chosen so as to promote sparsity of x. It is common to ... (I. Selesnick)
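A standard way to see how φ(x) = λ|x| promotes sparsity is through its proximal operator, which is soft thresholding: coefficients smaller than λ in magnitude are set exactly to zero. A minimal sketch (the function name and test vector are mine, not from the cited work):

```python
import numpy as np

# Proximal operator of phi(x) = lam*|x| is soft thresholding:
# entries with |x| <= lam are mapped exactly to zero, which is
# the mechanism by which the penalty promotes sparsity.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

y = np.array([3.0, 0.4, -1.5, -0.2])
print(soft_threshold(y, lam=0.5))  # the small entries 0.4 and -0.2 become 0
```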
P. Zhong, "Training robust support vector regression with smooth non-convex loss function," Optimization Methods and Software, vol. 27, no. 6, 2012.
Abstract: It is shown that a convex function, defined on an arbitrary, possibly finite, subset of a linear space, can be extended to the whole space. An application to decision making under risk is given. DOI: 10.1016/0165-1765(86)90242-9
In this paper, we propose a robust scheme for least squares support vector regression (LS-SVR), termed RLS-SVR, which employs a non-convex least squares loss function to overcome LS-SVR's sensitivity to outliers. The non-convex loss gives a constant penalty for any ...
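One common way to get a loss with this "constant penalty beyond a threshold" behavior is to truncate the squared loss; the sketch below is illustrative only (the cutoff c and function names are my own choices, not RLS-SVR's exact loss):

```python
import numpy as np

# Truncated squared loss: quadratic for small residuals, capped at
# the constant c^2 for large ones, so an outlier cannot dominate
# the objective the way it does under plain least squares.
def truncated_sq_loss(r, c=1.0):
    return np.minimum(r ** 2, c ** 2)

residuals = np.array([0.1, 0.5, 10.0])  # 10.0 plays the role of an outlier
print(truncated_sq_loss(residuals))     # the outlier contributes only c^2 = 1
```

The cap is what makes the loss non-convex: the pointwise minimum of a quadratic and a constant is not convex, which is the trade-off the paper's robustness argument rests on.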
where ℓ is a smooth data fidelity function, φ is a convex function, and λ is a regularization parameter. A common use of this problem formulation is the regularized empirical risk-minimization problem in high-dimensional statistics, or the variational regularization technique in inverse problems. Non-negativ...
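For this composite form, proximal gradient descent (ISTA) alternates a gradient step on the smooth term with a prox step on the penalty. A minimal sketch with the assumed choices ℓ(x) = ½‖Ax − b‖² and the ℓ₁ penalty; A, b, λ, and the iteration count are made-up toy data:

```python
import numpy as np

# Proximal gradient (ISTA) sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])    # sparse ground truth
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz const. of grad

x = np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)                     # gradient of the smooth part
    x = soft(x - step * grad, step * lam)        # prox step on lam*||.||_1
print(np.round(x, 2))                            # close to the sparse x_true
```

The zero coordinates of x_true are recovered exactly (not just approximately), which is the sparsity-promoting behavior the text refers to.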
The authors propose a merit function based on the Moreau envelope. Using this technique, they give, for a family of nonsmooth stochastic algorithms, ...
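As background for why the Moreau envelope is a natural merit function: M_t f(x) = min_u f(u) + ‖u − x‖²/(2t) is a smooth lower approximation of a nonsmooth f. A minimal numerical sketch (the grid-based minimization and parameter t are my own illustrative choices) verifying the classical fact that the envelope of |·| is the Huber function:

```python
import numpy as np

# Moreau envelope M_t f(x) = min_u f(u) + (u - x)^2 / (2t), computed
# by brute force on a grid for f = |.|, and compared with the Huber
# function, which is its known closed form.
def moreau_env_abs(x, t, grid=np.linspace(-5, 5, 20001)):
    return np.min(np.abs(grid) + (grid - x) ** 2 / (2 * t))

def huber(x, t):
    return x ** 2 / (2 * t) if abs(x) <= t else abs(x) - t / 2

t = 0.5
for x in [-2.0, -0.3, 0.0, 0.4, 1.7]:
    assert abs(moreau_env_abs(x, t) - huber(x, t)) < 1e-3
print("Moreau envelope of |x| matches the Huber function")
```

The envelope is differentiable everywhere even though |x| is not, which is what makes it usable as a smooth measure of progress for nonsmooth stochastic methods.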
L(d, w) := C(x, u) + λγ(Ev), where λ is the penalty weight and γ is the penalty function. Next comes the trust-region method, which is used together with the linearization method. After linearizing the nonlinear part of the original problem, an extra constraint is added so that each iterate stays within a local region, ‖w‖∞ ≤ Δ; this guarantees that the linearization remains of reasonably high quality and prevents infeasi...
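The linearize-then-restrict step above can be sketched on a toy problem. This is my own illustration, not the paper's algorithm: linearizing f at x gives the model f(x) + g·w, and minimizing that linear model over the box ‖w‖∞ ≤ Δ has the coordinate-wise closed form w = −Δ·sign(g):

```python
import numpy as np

# Toy trust-region step with an infinity-norm region: minimize the
# linearized model g @ w subject to ||w||_inf <= Delta, whose
# solution is w = -Delta * sign(g), taken coordinate by coordinate.
def f(x):
    return np.sum(x ** 4)       # toy nonlinear objective

def grad(x):
    return 4 * x ** 3

x = np.array([1.0, -2.0])
Delta = 0.1                     # trust-region radius
w = -Delta * np.sign(grad(x))   # minimizer of the linear model on the box
assert f(x + w) < f(x)          # small Delta => linear model is accurate,
                                # so the step also decreases the true objective
print("step:", w, "f before/after:", f(x), f(x + w))
```

Keeping Δ small is what makes the linear model trustworthy; in a full method Δ would be grown or shrunk depending on how well the predicted decrease matches the actual one.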