Deep-Learning-Enabled Simulated Annealing for Topology **Optimization**

Topology **optimization** by distributing materials in a domain requires stochastic optimizers to solve highly complicated **problems**. However, solving such **problems** requires millions of finite element calculations involving hundreds of design variables or more, whose computational cost is huge and often unacceptable. To speed up computation, here we report a method to integrate deep learning into a stochastic **optimization** algorithm. A Deep Neural Network (DNN) learns and substitutes for the objective function by forming a loop with Generative Simulated Annealing (GSA). In each iteration, GSA uses the DNN to evaluate the objective function and obtain an optimized solution, based on which new training data are generated; thus the DNN enhances its accuracy, and GSA can accordingly improve its solution in the next iteration, until convergence. Our algorithm was tested on compliance minimization **problems** and reduced computational time by over two orders of magnitude. This approach sheds light on solving large multi-dimensional **optimization** **problems**.
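The surrogate-in-the-loop idea can be sketched in a few lines. Here a nearest-neighbor lookup stands in for the DNN and a toy 1-D function stands in for the expensive FEM objective; all names (`true_objective`, `NearestNeighborSurrogate`, `anneal`) are illustrative, not the paper's implementation.

```python
import math
import random

random.seed(0)

def true_objective(x):
    """Expensive ground-truth evaluation (stand-in for a FEM compliance solve)."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

class NearestNeighborSurrogate:
    """Cheap stand-in for the DNN: predicts via the nearest training sample."""
    def __init__(self):
        self.data = []                      # list of (x, f(x)) pairs

    def fit(self, xs):
        self.data += [(x, true_objective(x)) for x in xs]

    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

def anneal(surrogate, x0, steps=200, temp0=1.0):
    """Simulated annealing that queries only the cheap surrogate."""
    x, fx = x0, surrogate.predict(x0)
    for k in range(1, steps + 1):
        t = temp0 / k                       # cooling schedule
        cand = x + random.gauss(0.0, 0.3)
        fc = surrogate.predict(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
    return x

# Outer loop: anneal on the surrogate, then retrain it near the incumbent,
# so surrogate accuracy and solution quality improve together.
surrogate = NearestNeighborSurrogate()
surrogate.fit([random.uniform(-4.0, 6.0) for _ in range(20)])
x_best = 0.0
for _ in range(5):
    x_best = anneal(surrogate, x_best)
    surrogate.fit([x_best + random.gauss(0.0, 0.2) for _ in range(10)])

print(f"incumbent design variable: {x_best:.3f}")
```

The saving comes from the fact that only the retraining batches (70 points here) hit the true objective; the thousands of annealing moves only query the surrogate.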

7/10 relevant

arXiv

Neural network with data augmentation in multi-objective prediction of multi-stage pump

… **problems**. The model with data augmentation can triple the data by interpolation at each sample point of different attributes. It shows that the performance of the neural network model with data augmentation is better than that of the former neural network model. Therefore, the prediction ability of the NN is enhanced without additional simulation costs. With data augmentation, it can serve as a better prediction model for solving the **optimization** **problems** of the multistage pump, and can be generalized to finite element analysis **optimization** **problems** in the future.
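As a rough illustration of "tripling the data by interpolation", the sketch below inserts two linearly interpolated points between each pair of consecutive samples; the pump attributes and targets are made-up numbers, and the paper's exact augmentation scheme may differ.

```python
def augment_by_interpolation(samples):
    """Roughly triple a small simulation dataset by linear interpolation.

    `samples` is a list of (attributes, targets) pairs, each a tuple of
    floats.  For every consecutive pair (sorted by the first attribute),
    points at 1/3 and 2/3 are inserted: n samples become 3n - 2.
    """
    def lerp(a, b, t):
        return tuple(x + t * (y - x) for x, y in zip(a, b))

    data = sorted(samples, key=lambda s: s[0][0])
    out = []
    for (xa, ya), (xb, yb) in zip(data, data[1:]):
        out.append((xa, ya))
        out.append((lerp(xa, xb, 1 / 3), lerp(ya, yb, 1 / 3)))
        out.append((lerp(xa, xb, 2 / 3), lerp(ya, yb, 2 / 3)))
    out.append(data[-1])
    return out

# 4 simulated pump operating points -> 10 training points (close to 3x),
# with no extra simulation runs.
raw = [((0.2,), (1.0, 10.0)), ((0.4,), (1.2, 9.0)),
       ((0.6,), (1.1, 8.5)), ((0.8,), (0.9, 8.1))]
augmented = augment_by_interpolation(raw)
print(len(raw), "->", len(augmented))
```

Linear interpolation is only sound when the response surface is locally smooth between samples, which is the implicit assumption behind this kind of augmentation.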

7/10 relevant

arXiv

Accelerating Quantum Approximate **Optimization** Algorithm using Machine
Learning

… quantum approximate **optimization** algorithm (QAOA) implementation, a promising quantum-classical hybrid algorithm to prove the so-called quantum supremacy. In QAOA, a parametric quantum circuit and a classical optimizer iterate in a closed loop to solve hard combinatorial **optimization** **problems**. The performance of QAOA improves with an increasing number of stages (depth) in the quantum circuit. However, two new parameters are introduced with each added stage for the classical optimizer, increasing the number of **optimization** loop iterations. We note a correlation among the parameters of lower-depth and higher-depth QAOA implementations and exploit it by developing a machine learning model to predict the gate parameters close to the optimal values. As a result, the **optimization** loop converges in fewer iterations. We choose the graph MaxCut **problem** as a prototype to solve using QAOA. We perform a feature extraction routine using 100 different QAOA instances and develop a training data-set with 13,860 optimal parameters. We present our analysis for 4 flavors of regression models and 4 flavors of classical optimizers. Finally, we show that the proposed approach can curtail the number of **optimization** iterations by 44.9% on average (up to 65.7%) in an analysis performed with 264 flavors of graphs.
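The warm-starting trick is classical machinery: fit a regressor mapping lower-depth optimal angles to higher-depth ones, then use its prediction as the classical optimizer's initial guess. The sketch below uses ordinary least squares on a single angle with invented numbers; the paper compares several regression models on real QAOA instances.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b (stand-in for the paper's
    regression models)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy training set: optimal depth-1 angle vs. optimal depth-2 first angle,
# one row per QAOA instance (numbers are made up for illustration).
gamma_p1 = [0.35, 0.40, 0.47, 0.52, 0.60]
gamma_p2 = [0.30, 0.34, 0.40, 0.44, 0.51]
a, b = fit_line(gamma_p1, gamma_p2)

# Warm-start a new instance: its depth-1 optimum predicts the depth-2 guess,
# which the classical optimizer then refines in fewer iterations.
warm_start = a * 0.45 + b
print(f"predicted depth-2 gamma: {warm_start:.3f}")
```

The fitted slope being close to 1 on this toy data mirrors the correlation between depths that the abstract reports exploiting.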

4/10 relevant

arXiv

Hybridization of interval methods and evolutionary algorithms for
solving difficult **optimization** **problems**

… **optimization** is dedicated to finding a global minimum in the presence of rounding errors. The only approaches for achieving a numerical proof of global optimality are interval branch and bound methods that interleave branching of the search-space and pruning of the subdomains that cannot contain an optimal solution. It is of the utmost importance: i) to compute sharp enclosures of the objective function and the constraints on a given subdomain; ii) to find a good approximation (an upper bound) of the global minimum. State-of-the-art solvers are generally integrative methods, that is, they embed local **optimization** algorithms to compute a good upper bound of the global minimum over each subspace. In this document, we propose a cooperative framework in which interval methods cooperate with evolutionary algorithms. The latter are stochastic algorithms in which a population of candidate solutions iteratively evolves in the search-space to reach satisfactory solutions. Evolutionary algorithms, endowed with operators that help individuals escape from local minima, are particularly suited for difficult **problems** on which traditional methods struggle to converge. Within our cooperative solver Charibde, the evolutionary algorithm and the interval-based algorithm run in parallel and exchange bounds, solutions and search-space via message passing. A novel strategy prevents premature convergence toward local minima. A comparison of Charibde with state-of-the-art solvers (GlobSol, IBBA, Ibex) on a benchmark of difficult **problems** shows that Charibde converges faster by an order of magnitude. New optimality results are provided for five multimodal problems, for which few solutions were available in the literature. Finally, we provide the first numerical proof of optimality for the open Lennard-Jones cluster **problem** with five atoms.
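The core interaction — a stochastic search supplying the upper bound that lets interval branch and bound prune — can be shown on a 1-D toy. Random sampling stands in for the evolutionary algorithm, and `f_enclosure` is a hand-written interval extension; none of this is Charibde's actual machinery.

```python
import random

random.seed(1)

def f(x):
    """Objective with known minimum 0.1 at x = 1."""
    return (x - 1.0) ** 2 + 0.1

def f_enclosure(lo, hi):
    """Interval extension of f: rigorous bounds of f over [lo, hi]."""
    a, b = lo - 1.0, hi - 1.0
    sq_hi = max(a * a, b * b)
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return sq_lo + 0.1, sq_hi + 0.1

def branch_and_bound(lo, hi, tol=1e-6):
    """Interval branch-and-bound; random sampling stands in for the
    evolutionary algorithm that supplies a good upper bound for pruning."""
    best = min(f(random.uniform(lo, hi)) for _ in range(50))  # "EA" bound
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if f_enclosure(a, b)[0] > best:
            continue                     # subdomain cannot hold the optimum
        mid = 0.5 * (a + b)
        best = min(best, f(mid))         # midpoint tightens the upper bound
        if b - a > tol:
            stack += [(a, mid), (mid, b)]
    return best

result = branch_and_bound(-5.0, 5.0)
print(f"incumbent after pruning: {result:.6f}")  # close to the true minimum 0.1
```

A tighter initial upper bound prunes more subdomains earlier, which is exactly the leverage the cooperative design aims for.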

10/10 relevant

arXiv

Strong Evaluation Complexity Bounds for Arbitrary-Order **Optimization** of
Nonconvex Nonsmooth Composite Functions

… **optimization** **problems**. These apply in both standard smooth and composite non-smooth settings, and additionally allow convex or inexpensive constraints. An adaptive regularization algorithm is then proposed to find such approximate minimizers. Under suitable Lipschitz continuity assumptions, whenever the feasible set is convex, it is shown that using a model of degree $p$, this algorithm will find a strong approximate $q$-th-order minimizer in at most ${\cal O}\left(\max_{1\leq j\leq q}\epsilon_j^{-(p+1)/(p-j+1)}\right)$ evaluations of the problem's functions and their derivatives, where $\epsilon_j$ is the $j$-th order accuracy tolerance; this bound applies when either $q=1$ or the **problem** is not composite with $q \leq 2$. For general non-composite problems, even when the feasible set is nonconvex, the bound becomes ${\cal O}\left(\max_{1\leq j\leq q}\epsilon_j^{-q(p+1)/p}\right)$ evaluations. If the **problem** is composite, and either $q > 1$ or the feasible set is not convex, the bound is then ${\cal O}\left(\max_{1\leq j\leq q}\epsilon_j^{-(q+1)}\right)$ evaluations. These results not only provide, to our knowledge, the first known bound for (unconstrained or inexpensively-constrained) composite **problems** for optimality orders exceeding one, but also give the first sharp bounds for high-order strong approximate $q$-th order minimizers of standard (unconstrained and inexpensively constrained) smooth problems, thereby complementing known results for weak minimizers.

4/10 relevant

arXiv

Automatic Repair of Convex **Optimization** **Problems**

… convex **optimization** problem, a natural question to ask is: what is the smallest change we can make to the problem's parameters such that the **problem** becomes solvable? In this paper, we address this question by posing it as an **optimization** **problem** involving the minimization of a convex regularization function of the parameters, subject to the constraint that the parameters result in a solvable **problem**. We propose a heuristic for approximately solving this **problem** that is based on the penalty method and leverages recently developed methods that can efficiently evaluate the derivative of the solution of a convex cone program with respect to its parameters. We illustrate our method by applying it to examples in optimal control and economics.
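A minimal caricature of the penalty-method idea, on a hypothetical toy problem (not the paper's cone-program machinery): the constraints x ≥ b1 and x ≤ b2 are infeasible when b1 > b2, and we seek the least-squares shift of the bounds that restores feasibility.

```python
def repair(b1, b2, lam=4.0, lr=1e-3, steps=5000):
    """Penalty-method sketch of 'automatic repair': find the least-squares
    perturbation (d1, d2) of the bounds so that {x : b1+d1 <= x <= b2+d2}
    becomes non-empty.

    Penalized objective: d1**2 + d2**2 + lam * max(0, (b1+d1) - (b2+d2)),
    minimized by plain subgradient descent (illustrative only).
    """
    d1 = d2 = 0.0
    for _ in range(steps):
        violating = (b1 + d1) - (b2 + d2) > 0.0
        g1 = 2.0 * d1 + (lam if violating else 0.0)   # subgradient in d1
        g2 = 2.0 * d2 - (lam if violating else 0.0)   # subgradient in d2
        d1 -= lr * g1
        d2 -= lr * g2
    return d1, d2

# Infeasible problem: x >= 1 and x <= 0.  The minimal repair shifts each
# bound by 0.5 so the two constraints just meet.
d1, d2 = repair(1.0, 0.0)
print(f"shift lower bound by {d1:.2f}, upper bound by {d2:.2f}")
```

Since the penalty here is exact for any `lam` above 1, the iterates settle at the true minimal repair (-0.5, +0.5) up to subgradient chatter.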

9/10 relevant

arXiv

Weakly Homogeneous **Optimization** **Problems**

… **optimization** **problems** whose objective functions are weakly homogeneous relative to the constraint sets. Two sufficient conditions for nonemptiness and boundedness of solution sets are established. We also study linear parametric **problems** and upper semicontinuity of the solution map.

9/10 relevant

arXiv

A New Multi-Agent Approach for Solving **Optimization** **Problems** with High-Dimensional: Case Study in Email Spam Detection

… **problems** in the real world which cannot be solved through common traditional methods. Metaheuristic algorithms have been developed as successful techniques for solving a variety of complex and difficult **optimization** **problems**. Notwithstanding their advantages, these algorithms may have weak points such as lower population diversity and lower convergence rates when facing complex high-dimensional **problems**. An appropriate approach to solving such **problems** is to apply multi-agent systems along with metaheuristic algorithms. The present paper proposes a new approach based on multi-agent systems and the concept of an agent, named the Multi-Agent Metaheuristic (MAMH) method. In the proposed approach, several basic and powerful metaheuristic algorithms, including the Genetic Algorithm (GA), Particle Swarm **Optimization** (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Bat Algorithm (BA), Flower Pollination Algorithm (FPA), Gray Wolf Optimizer (GWO), Whale **Optimization** Algorithm (WOA), Crow Search Algorithm (CSA), and Farmland Fertility Algorithm (FFA), are considered as separate agents, each of which seeks to achieve its own goals while competing and cooperating with the others to achieve common goals. Overall, the proposed method was tested on 32 complex benchmark functions, and the results indicated the effectiveness and power of the proposed method for solving high-dimensional **optimization** **problems**. In addition, the binary version of the proposed approach, called Binary MAMH (BMAMH), was executed on a spam email dataset. According to the results, the proposed method exhibited higher precision in detecting spam emails compared to other metaheuristic algorithms and methods.
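The compete-and-cooperate structure can be caricatured in a few lines: two toy "agents" (a local perturber and a global explorer, stand-ins for the ten metaheuristics listed) share a single incumbent on a sphere benchmark. All names and the schedule are invented for illustration, not the MAMH algorithm itself.

```python
import random

random.seed(7)

def sphere(x):
    """High-dimensional benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def hill_agent(best, step):
    """Agent 1: local perturbation of the shared incumbent (exploitation)."""
    return [v + random.uniform(-step, step) for v in best]

def explore_agent(dim, scale=2.0):
    """Agent 2: global random exploration to keep population diversity."""
    return [random.uniform(-scale, scale) for _ in range(dim)]

def mamh_sketch(dim=10, rounds=300):
    best = explore_agent(dim)
    for r in range(rounds):
        step = 0.5 * (1.0 - r / rounds)          # cooperation: shared schedule
        for cand in (hill_agent(best, step), explore_agent(dim)):
            if sphere(cand) < sphere(best):      # competition: best survives
                best = cand
    return best

best = mamh_sketch()
print(f"best value after cooperation: {sphere(best):.4f}")
```

The explorer protects against the diversity loss the abstract warns about, while the perturber refines the shared best; in MAMH each real metaheuristic plays one of these roles at a larger scale.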

10/10 relevant

Preprints.org

Curiosities and counterexamples in smooth convex **optimization**

… **optimization** **problems** in the smooth convex coercive setting are provided. We show that block-coordinate, steepest descent with exact search, or Bregman descent methods do not generally converge. Other failures of various desirable features are established: directional convergence of Cauchy's gradient curves, convergence of Newton's flow, finite length of the Tikhonov path, convergence of central paths, or the smooth Kurdyka-Lojasiewicz inequality. All examples are planar. These examples are based on general smooth convex interpolation results. Given a decreasing sequence of positively curved $C^k$ convex compact sets in the plane, we provide a level set interpolation of a $C^k$ smooth convex function, where $k \ge 2$ is arbitrary. If the intersection is reduced to one point, our interpolant has positive definite Hessian; otherwise it is positive definite outside the solution set. Furthermore, given a sequence of decreasing polygons, we provide an interpolant agreeing with the vertices and whose gradients coincide with prescribed normals.

4/10 relevant

arXiv

A generalization of multiplier rules for infinite-dimensional
**optimization** **problems**

… **optimization** **problems** with a finite number of inequality constraints and with a finite number of inequality and equality constraints. Our assumptions on the differentiability of the functions are weaker than those of existing results.

10/10 relevant

arXiv