A main contribution is the development of a sequential convex optimization algorithm in which, at each iteration, a convex subproblem with linear matrix inequality (LMI) constraints is solved. The set of feasible ...
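A minimal sketch of one such LMI-constrained convex subproblem, written with CVXPY; the matrices A0, A1, A2, the cost vector c, and the bounds are placeholder data for illustration, not the paper's formulation:

    import cvxpy as cp
    import numpy as np

    # Placeholder data: the LMI is A0 + x[0]*A1 + x[1]*A2 >= 0 (positive semidefinite).
    A0 = np.diag([2.0, 3.0])
    A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
    A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
    c = np.array([1.0, 1.0])

    x = cp.Variable(2)
    lmi = A0 + x[0] * A1 + x[1] * A2 >> 0             # linear matrix inequality constraint
    prob = cp.Problem(cp.Minimize(c @ x), [lmi, x >= -10])
    prob.solve()                                      # handled as a small semidefinite program
    print(x.value)                                    # optimum at (-2, -3) for this data

In a sequential scheme, a subproblem of this form is rebuilt around the current iterate and re-solved at every step.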
In this Lecture, several recent methods based on convex approximation schemes are discussed that have demonstrated strong potential for the efficient solution of structural optimization problems. First, the now well-established “Approximation Concepts”...
At each iteration of the algorithm, a convex subproblem is constructed by forming a nonlinear, convex approximation of the penalized compliance based on a linearization of the stiffness matrix. Subsequent solutions of the convex subproblems form a non-increasing sequence of compliance values. The ...
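The compliance formulation itself is not reproduced in this excerpt, so the following one-dimensional toy (an illustration of the mechanism, not the authors' method) shows why such a scheme is monotone: linearizing the concave part of a nonconvex objective at the current iterate gives a convex upper bound that is tight there, so minimizing the bound cannot increase the true objective.

    import numpy as np

    def f(x):
        # Nonconvex objective: convex part x**4 + x plus concave part -3*x**2.
        return x**4 + x - 3 * x**2

    x = 2.0                                   # arbitrary starting point
    for _ in range(10):
        # Minimize the convex surrogate built at the current x: the convex terms
        # z**4 + z are kept and the concave term -3*z**2 is replaced by its tangent
        # at x, so the surrogate minimizer solves 4*z**3 + 1 - 6*x = 0.
        x_new = np.cbrt((6 * x - 1) / 4.0)
        assert f(x_new) <= f(x) + 1e-12       # objective values never increase
        x = x_new
    print(x, f(x))                            # converges to a stationary point of f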
In this section, we first provide the optimality conditions of the nonlinear optimization problem P1 in the context of interior point methods and then prove that the basic SCP (Algorithm 1), the Newton-type rSCP (Algorithm 2), and the inexact Newton-type rSCP (Algorithm 3) can all solve for a local optimum...
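Problem P1 is not reproduced in this excerpt, so only the standard template that such optimality conditions follow is sketched here. For a generic nonlinear program min f(x) subject to g(x) <= 0 and h(x) = 0, the first-order (KKT) conditions read

    \begin{aligned}
      \nabla f(x^\star) + \nabla g(x^\star)^{\top}\lambda^\star + \nabla h(x^\star)^{\top}\mu^\star &= 0,\\
      g(x^\star) \le 0, \qquad h(x^\star) &= 0,\\
      \lambda^\star \ge 0, \qquad \lambda_i^\star\, g_i(x^\star) &= 0 \quad \text{for all } i,
    \end{aligned}

and interior point methods enforce a perturbed complementarity condition \lambda_i^\star (-g_i(x^\star)) = \mu with a barrier parameter \mu > 0 that is driven to zero.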
This paper introduces sequential convex programming (SCP), a local optimization method for solving nonconvex optimization problems. A full-step SCP algorithm is presented. Under mild conditions, the local convergence of the algorithm is proved as a main result of this paper. An application to optimal...
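To illustrate the full-step idea on a toy problem (a sketch under assumed data, not the algorithm of this paper): the nonconvex constraint ||x||^2 = 1 is linearized at the current iterate, the resulting convex quadratic program is solved with CVXPY, and the subproblem solution is accepted as the next iterate with no line search, only a small trust region to keep steps bounded.

    import cvxpy as cp
    import numpy as np

    target = np.array([2.0, 1.0])             # assumed problem data
    xk = np.array([1.0, 0.0])                 # assumed initial guess

    for _ in range(20):
        x = cp.Variable(2)
        # Linearization of ||x||^2 = 1 around xk: ||xk||^2 + 2*xk@(x - xk) = 1.
        linearized = xk @ xk + 2 * xk @ (x - xk) == 1
        trust = cp.norm(x - xk, "inf") <= 0.5
        cp.Problem(cp.Minimize(cp.sum_squares(x - target)),
                   [linearized, trust]).solve()
        xk = x.value                          # full step: take the subproblem solution as is

    print(xk, np.linalg.norm(xk))             # approaches target/||target||, with norm close to 1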
A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. In other words, supervised learning learns from a set of labeled examples. From the instances and the labels, supervised learning models try to find the correlation...
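A tiny illustration of the inferred-function idea; scikit-learn is used here only as a familiar example and the data are made up:

    from sklearn.linear_model import LogisticRegression

    X_train = [[0.0], [1.0], [2.0], [3.0]]    # instances (features)
    y_train = [0, 0, 1, 1]                    # labels
    model = LogisticRegression().fit(X_train, y_train)   # learn from the labeled examples
    print(model.predict([[1.6], [0.2]]))      # map new, unseen examples to labels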
Approximating the Pareto-hull of a convex set by polyhedral sets. The decision-making problem in the case of several criteria is examined. An algorithm for representing the set of effective points in criteria space based... O. L. Chernykh, Pergamon Press, Inc., 1995 (cited 28 times).
The MPC strategy for driving a chaser to a non-cooperative satellite is investigated with an OCP subject to a spherical obstacle constraint in [6], where the quadratic constraint is approximated as a time-varying linear inequality and the relaxed OCP is solved by a QP algorithm. Linear approximations...
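The exact formulation of [6] is not reproduced here; the sketch below shows the usual convexification such descriptions refer to, where the spherical keep-out constraint ||x - c|| >= r is replaced at a reference point x_ref by a supporting half-space, i.e. a linear inequality a @ x >= b that a QP solver can handle (c, r, and x_ref are assumed values):

    import numpy as np

    def linearized_keepout(x_ref, c, r):
        """Return (a, b) with a @ x >= b, a conservative linear stand-in for ||x - c|| >= r near x_ref."""
        a = (x_ref - c) / np.linalg.norm(x_ref - c)   # unit vector from obstacle centre toward the reference
        b = r + a @ c
        return a, b

    c = np.array([0.0, 0.0, 0.0])             # obstacle (target satellite) centre, assumed
    r = 5.0                                   # keep-out radius, assumed
    x_ref = np.array([10.0, 2.0, 0.0])        # reference chaser position at one time step
    a, b = linearized_keepout(x_ref, c, r)
    x_candidate = np.array([8.0, 3.0, 0.0])
    print(a @ x_candidate >= b)               # True: the candidate stays on the safe side

Because x_ref is taken from the reference trajectory at each prediction step, the resulting inequality is time-varying, which matches the description above.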
... concept, which aims to achieve a flexible and efficient recursive approximation of the output weights by employing an ingenious partition-solving strategy and an approximate calculation method, thus significantly improving the learning efficiency of the algorithm while preserving satisfactory learning accuracy...
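The snippet above is cut off and its exact method is not given here; as a generic, hedged example of recursively approximating output weights, a plain recursive least-squares (RLS) update is sketched below (all names and data are illustrative), refining the weights with each new sample instead of re-solving the full least-squares problem:

    import numpy as np

    def rls_update(w, P, h, t):
        """One RLS step: h is the feature (hidden-layer) vector, t the target."""
        Ph = P @ h
        k = Ph / (1.0 + h @ Ph)               # gain vector
        w = w + k * (t - h @ w)               # correct the weights by the prediction error
        P = P - np.outer(k, Ph)               # update the inverse-correlation matrix
        return w, P

    rng = np.random.default_rng(0)
    d = 4
    w_true = rng.normal(size=d)               # weights that generate the data (for checking only)
    w, P = np.zeros(d), 1e6 * np.eye(d)       # large initial P = weak prior on w
    for _ in range(200):
        h = rng.normal(size=d)
        w, P = rls_update(w, P, h, h @ w_true)
    print(np.allclose(w, w_true, atol=1e-4))  # True: the recursion recovers the generating weights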