Graph learning has been an active research area that finds applications across a number of fields, including finance, health care, and the social sciences. In this chapter, we present an overview of recent advancements...
As can be seen, the graph suggests that the value function V(x) is convex away from the origin (we set ℓ = 1×10⁻⁵ in Definition 1.1), evidence that is supported by the result of Theorem 3.1. Fig. 2. ...
where λ and γ are nonnegative parameters, and P_{λ,γ}(|u_i|) is a non-convex function. We absorb λ into the penalty because it cannot be decoupled from several penalties, such as the SCAD function [38]. We consider seven penalties; they are described in Table 1 and visualized in Fig. 1, where f...
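For concreteness, SCAD is one member of this family in which λ cannot be factored out of the penalty. A minimal Python sketch of the SCAD penalty in its standard (Fan and Li) parameterization; the function name and the conventional default γ = 3.7 are illustrative choices, not taken from the excerpt:

    import numpy as np

    def scad_penalty(u, lam, gamma=3.7):
        """SCAD penalty P_{lam,gamma}(|u|); lam is absorbed into the
        penalty, as the excerpt notes, since it cannot be decoupled."""
        a = np.abs(u)
        linear = lam * a                                  # lasso-like near 0
        quad = (2 * gamma * lam * a - a**2 - lam**2) / (2 * (gamma - 1))
        const = lam**2 * (gamma + 1) / 2                  # flat for large |u|
        return np.where(a <= lam, linear,
                        np.where(a <= gamma * lam, quad, const))

    u = np.linspace(-4, 4, 9)
    print(scad_penalty(u, lam=1.0))   # symmetric, with bounded (constant) tails

The three pieces join continuously at |u| = λ and |u| = γλ, which is what makes the penalty taper shrinkage rather than penalize large coefficients indefinitely.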
Intuitively, recall that any point on the line segment between two arbitrary points on the graph of a convex function lies on or above that graph. In more formal terms, a continuous segment (that is, a straight line) connecting two arbitrary points on the graph of the objective function will not go below the objective function's graph.
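A quick numerical illustration of this chord condition (a sketch; the sample functions, endpoints, and the helper name chord_above are arbitrary choices for demonstration, not from the text):

    import numpy as np

    def chord_above(f, x, y, n=101):
        """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) along the chord."""
        t = np.linspace(0.0, 1.0, n)
        chord = t * f(x) + (1 - t) * f(y)        # points on the straight segment
        graph = np.array([f(ti * x + (1 - ti) * y) for ti in t])
        return bool(np.all(graph <= chord + 1e-12))  # tolerance for rounding

    print(chord_above(lambda x: x**2, -2.0, 3.0))     # True: x^2 is convex
    print(chord_above(np.sin, 0.0, 4.0))              # False: sin is not convex on [0, 4]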
In [1] some qualitative properties of domains admitting solutions to (1.2) were shown, in particular when Ω is the hypograph of a C² function v : ω → ℝ, i.e.

Ω = Ω_v = {(x′, x_N) ∈ ℝ^N | x′ ∈ ω, x_N ∈ (0, v(x′))}.    (1.4)

Here Γ_{Ω_v} is just the Cartesian graph of v over ω...
Peng, B., Wang, L.: An iterative coordinate descent algorithm for high-dimensional nonconvex penalized quantile regression. J. Comput. Graph. Stat. 24, 676–694 (2015)
It is challenging to find the optimal solution of the CHLNMF model directly, since its objective function is non-convex. Therefore, semi-quadratic programming theory is used to solve the objective function O_{CHLNMF} and find the optimal solution. The primary concept is to add an auxiliary variable an...
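The excerpt breaks off here, but the general semi-quadratic (half-quadratic) pattern it describes, introducing an auxiliary variable so that each subproblem becomes quadratic, can be sketched on a simple robust regression loss. Everything below (the Welsch-type weight, the function names, the data) is an illustrative assumption, not the CHLNMF derivation:

    import numpy as np

    def half_quadratic_fit(X, y, sigma=1.0, iters=20):
        """Robust linear fit via a multiplicative half-quadratic scheme:
        alternate a closed-form auxiliary-weight update with weighted LS."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]      # plain LS warm start
        for _ in range(iters):
            r = y - X @ beta
            w = np.exp(-(r / sigma) ** 2)                # auxiliary variable update
            Xw = X * w[:, None]
            beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted LS subproblem
        return beta

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.normal(size=50)])
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
    y[:5] += 8.0                                         # inject gross outliers
    print(half_quadratic_fit(X, y))  # close to (1, 2): outliers get weight ~0

With the auxiliary weights held fixed, the objective is quadratic in the parameters, which is precisely why this splitting makes an otherwise non-convex problem tractable one block at a time.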
Let f : ℝ^p → ℝ be a µ-strongly convex function and Θ ⊆ ℝ^p be a convex set. Let θ⋆ be the minimizer of f on Θ. Then, the following condition holds for all θ ∈ Θ:

f(θ) ≥ f(θ⋆) + (µ/2)‖θ − θ⋆‖₂².

We now introduce a sequence of ...
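As a quick sanity check of this inequality, a sketch with an arbitrary choice of f: for f(θ) = ‖θ‖₂², f is µ-strongly convex with µ = 2 and, on Θ = ℝ³, θ⋆ = 0. Since this f is exactly quadratic, the bound holds with equality:

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 2.0                                   # f(theta) = ||theta||_2^2 has mu = 2
    f = lambda th: float(np.sum(th**2))
    theta_star = np.zeros(3)                   # minimizer of f on Theta = R^3

    for _ in range(5):
        theta = rng.normal(size=3)
        lhs = f(theta)
        rhs = f(theta_star) + 0.5 * mu * float(np.sum((theta - theta_star) ** 2))
        assert lhs >= rhs - 1e-12              # tight here: lhs == rhs up to rounding
        print(lhs, rhs)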
In this paper, we propose a generalized framework for developing learning-rate-free momentum stochastic gradient descent (SGD) methods for minimizing nonsmooth nonconvex functions, especially in training nonsmooth neural networks. Our framework adaptively generates learning rates based on the histo...
A general non-convex low-rank regularization (NCLR) for structured compressive imaging (SCI) can be expressed as follows:

rank(L) ≈ ‖L‖_NCLR = Σᵢ g(σᵢ(L), γ),    (6)

where g(·) : [0, +∞) → [0, +∞) denotes the non-convex surrogate function of the rank, and σᵢ(L) is the i-th singular...
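A minimal sketch of evaluating such a regularizer: the log surrogate g(σ, γ) = log(1 + σ/γ) is one common non-convex choice, but it, the helper name, and the test matrix are illustrative assumptions, not necessarily the g of Eq. (6):

    import numpy as np

    def nclr_norm(L, gamma=1.0, g=None):
        """Evaluate ||L||_NCLR = sum_i g(sigma_i(L), gamma) for a surrogate g."""
        if g is None:
            g = lambda s, gam: np.log1p(s / gam)     # log surrogate of the rank
        sigma = np.linalg.svd(L, compute_uv=False)   # singular values of L
        return float(np.sum(g(sigma, gamma)))

    L = np.outer(np.arange(1, 5), np.arange(1, 4)).astype(float)  # rank-1 matrix
    print(nclr_norm(L), np.linalg.matrix_rank(L))    # surrogate is small at low rank

Unlike the nuclear norm, which grows linearly in each singular value, a surrogate of this kind penalizes large singular values less aggressively, giving a tighter approximation to rank(L).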