Keywords: DC programming, nonsmooth optimization, variational analysis. Optimization methods for difference-of-convex programs iteratively solve convex subproblems to define iterates. Although convex, depending on the problem's structure, these subproblems are very often challenging and require specialized solvers. This work ...
Difference of Convex (DC) Functions and DC Programming. Songcan Chen. Outline: 1. A Brief History; 2. DC Functions and their Properties; 3. Some Examples; 4. DC Programming; 5. Case Study; 6. Our Next Work. 1. A Brief History • 1964, Hoang Tuy (incidentally, in his convex optimization paper) • 1979, J. F. Toland, duality formulation • 1985, Pham...
Paper tables with annotated results for Further properties of the forward-backward envelope with applications to difference-of-convex programming
Due to the use of the ramp loss function, the corresponding objective function is nonconvex, which makes the problem more challenging. To handle this nonconvexity, we formulate our distance metric learning problem as an instance of difference-of-convex (DC) programming. This allows us to design a ...
Mathematical Programming. Francisco Jara-Moroni, Jong-Shi Pang & Andreas Wächter. Abstract: This paper studies the difference-of-convex (DC) penalty formulations and the associated difference-of-convex algorithm (DCA) for computing ...
We discuss how learning the function in difference-of-convex form enables the use of DC programming algorithms to find inputs that optimize the output. For illustration, we apply the Convex-Concave Procedure, a difference-of-convex optimization algorithm which converts the opti...
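As a minimal sketch of how the Convex-Concave Procedure operates (not the implementation from the excerpt above): to minimize f(x) = g(x) − h(x) with g, h convex, each iteration replaces h by its linearization at the current point and minimizes the resulting convex majorant. The toy objective below, with g(x) = x²/2 and h(x) = √(1 + x²), is an assumption chosen so that the convex subproblem has a closed-form solution.

```python
import math

# Toy DC objective f(x) = g(x) - h(x) with
#   g(x) = x^2 / 2          (convex)
#   h(x) = sqrt(1 + x^2)    (convex, smooth)
def g(x): return 0.5 * x * x
def h(x): return math.sqrt(1.0 + x * x)
def h_grad(x): return x / math.sqrt(1.0 + x * x)
def f(x): return g(x) - h(x)

def ccp(x0, iters=200):
    """Convex-Concave Procedure: at iterate x_k, linearize h and solve
    argmin_x  g(x) - h(x_k) - h'(x_k) * (x - x_k).
    For g(x) = x^2/2 this convex subproblem has the closed form x = h'(x_k)."""
    x = x0
    history = [f(x)]
    for _ in range(iters):
        x = h_grad(x)          # closed-form solution of the convex subproblem
        history.append(f(x))
    return x, history

x_star, hist = ccp(x0=2.0)
# Each step decreases the true objective (standard majorize-minimize guarantee),
# and the iterates approach the global minimizer x = 0 of this toy problem.
assert all(b <= a + 1e-12 for a, b in zip(hist, hist[1:]))
```

The monotone descent seen here is the general guarantee of CCP/DCA: the linearized subproblem majorizes f, so solving it can never increase the objective, though convergence is typically only to a critical point of the nonconvex f.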
An approach to supervised distance metric learning based on difference of convex functions programming - bacnguyencong/DML-dc
Based on the spectral properties of the indefinite kernel matrix, IKSVM-DC decomposes the objective function into the difference of two convex functions and thus reformulates the primal problem as a difference-of-convex (DC) program, which can be optimized by the DC ...
Two new penalty methods for sparse reconstruction are proposed based on two types of difference-of-convex (DC for short) programming, in which the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r > 1. By introducing a ...
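To make the idea of such DC penalties concrete, here is a hedged numeric sketch, assuming the closely related l1 − l2 penalty (the excerpt's lσq norm is not reconstructed here): the DC algorithm linearizes the concave part −λ‖x‖₂ at each outer iterate, and the resulting convex lasso-type subproblem is solved approximately by proximal gradient (ISTA). The problem sizes, λ, and iteration counts below are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    # soft-thresholding: the prox operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_l1_minus_l2(A, b, lam, outer=30, inner=500):
    """DCA sketch for  min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2).
    The concave part -lam*||x||_2 is linearized via the subgradient
    u_k = x_k / ||x_k||_2; the convex subproblem
        min_x 0.5*||Ax - b||^2 + lam*||x||_1 - lam*<u_k, x>
    is solved approximately by proximal gradient (ISTA)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(n)
    for _ in range(outer):
        nx = np.linalg.norm(x)
        u = x / nx if nx > 0 else np.zeros(n)
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - lam * u
            x = soft(x - grad / L, lam / L)
    return x

# Illustrative sparse-recovery instance (assumed, not from the cited paper):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 11, 27]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = dca_l1_minus_l2(A, b, lam=0.05)
```

The l1 − l2 difference rewards vectors whose mass concentrates on few coordinates (the penalty vanishes on 1-sparse vectors), which is why such DC penalties can outperform plain l1 on hard sparse-reconstruction instances.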
Moreover, neural network parameterization often renders the training objective non-convex or even non-smooth, which may result in convergence to local optima or saddle points [17]. Given these considerations, the theoretical underpinnings of neural TD learning have garnered attention in recent ...