From feature selection to continuous **optimization**

**optimization** **problems** consisting of millions of parameters. Feature selection is the main concept adopted in MaNet; it helps the algorithm skip irrelevant or partially relevant evolutionary information and use the information that contributes most to overall performance. The introduced model is applied to several unimodal and multimodal continuous **problems**. The experiments indicate that MaNet is able to yield competitive results compared to one of the best hand-designed algorithms for the aforementioned problems, in terms of solution accuracy and scalability.

5/10 relevant

arXiv

Minimum size generating partitions and their application to demand
fulfillment **optimization** **problems**

**problem** of finding the minimum size partition for which the set of partitions this partition can generate contains all size-$k$ partitions of $n$. We describe how this result can be applied to solving a class of combinatorial **optimization** **problems**.
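
The size-$k$ partitions of $n$ that the abstract refers to (ways of writing $n$ as a sum of $k$ positive parts) can be enumerated with a short recursive sketch; the function name and the recursion are illustrative, not taken from the paper.

```python
def partitions_into_k_parts(n, k, max_part=None):
    """Yield all partitions of n into exactly k positive parts,
    as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if k == 1:
        if 1 <= n <= max_part:
            yield (n,)
        return
    # The largest part must leave at least k - 1 units for the remaining parts.
    for first in range(min(n - k + 1, max_part), 0, -1):
        for rest in partitions_into_k_parts(n - first, k - 1, first):
            yield (first,) + rest

# For example, 7 has four partitions into 3 parts:
# (5,1,1), (4,2,1), (3,3,1), (3,2,2)
parts = sorted(partitions_into_k_parts(7, 3))
```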

10/10 relevant

arXiv

An inexact proximal augmented Lagrangian framework with arbitrary
linearly convergent inner solver for composite convex **optimization**

**problem** termination rule for composite convex **optimization** **problems**. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, which makes the resulting framework more scalable in the face of ever-increasing **problem** dimensions. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring an a priori target accuracy to be set. When the primal and dual domains are bounded, our method achieves $O(1/\sqrt{\epsilon})$ and $O(1/{\epsilon})$ complexity bounds, in terms of the number of inner-solver iterations, for the strongly convex and non-strongly convex cases, respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two complexity bounds increase to $\tilde O(1/\sqrt{\epsilon})$ and $\tilde O(1/{\epsilon})$, respectively; these hold both for obtaining an $\epsilon$-optimal and an $\epsilon$-KKT solution. Within the general framework that we propose, we also obtain $\tilde O(1/{\epsilon})$ and $\tilde O(1/{\epsilon^2})$ complexity bounds under a relative-smoothness assumption on the differentiable component of the objective function. We show, through theoretical analysis as well as numerical experiments, the computational speedup possibly achieved by the use of randomized inner solvers for large-scale **problems**.
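
The outer/inner structure the abstract describes (augmented Lagrangian outer loop, inexact inner solves with a self-adaptive tolerance) can be illustrated on a toy equality-constrained problem. The tolerance schedule and the plain gradient-descent inner solver below are simplified stand-ins, not the paper's actual criterion.

```python
import numpy as np

# Toy problem: minimize 0.5*||x||^2  subject to  sum(x) = 1.
# Augmented Lagrangian: L(x, lam) = 0.5||x||^2 + lam*(1'x - 1) + (rho/2)(1'x - 1)^2.
# The exact solution is x_i = 1/n for every coordinate.
def inexact_alm(n=10, rho=10.0, outer_iters=30):
    x, lam = np.zeros(n), 0.0
    for t in range(outer_iters):
        tol = 1.0 / (t + 1) ** 2            # shrinking inner tolerance (illustrative)
        for _ in range(10_000):             # inner solver: plain gradient descent
            r = x.sum() - 1.0
            grad = x + (lam + rho * r) * np.ones(n)
            if np.linalg.norm(grad) <= tol:
                break                        # subproblem solved only inexactly
            x -= grad / (1.0 + rho * n)      # step size 1/L for this quadratic
        lam += rho * (x.sum() - 1.0)         # multiplier (dual) update
    return x, lam

x, lam = inexact_alm()
```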

4/10 relevant

arXiv

Shape **optimization** for interface identification in nonlocal models

Shape **optimization** methods have been proven useful for identifying interfaces in models governed by partial differential equations. Here we consider a class of shape **optimization** **problems** constrained by nonlocal equations which involve interface-dependent kernels. We derive a novel shape derivative associated with the nonlocal system model and solve the **problem** by established numerical techniques.

4/10 relevant

arXiv

A consensus-based global **optimization** method for high dimensional
machine learning **problems**

**optimization** method, proposed in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl. Sci., 27(01):183--204, 2017], which is a gradient-free **optimization** method for general non-convex functions. We first replace the isotropic geometric Brownian motion by a component-wise one, thus removing the dimensionality dependence of the drift rate and making the method more competitive for high-dimensional **optimization** **problems**. Secondly, we utilize the random mini-batch idea to reduce the computational cost of calculating the weighted average toward which the individual particles tend to relax. For its mean-field limit, a nonlinear Fokker-Planck equation, we prove, in both the time-continuous and semi-discrete settings, that the convergence of the method, which is exponential in time, is guaranteed with parameter constraints {\it independent} of the dimensionality. We also conduct numerical tests on high-dimensional **problems** to check the success rate of the method.
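
A minimal numpy sketch of consensus-based optimization with the two modifications the abstract describes: component-wise geometric noise and a random mini-batch for the weighted average. All parameter values are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def cbo_minimize(f, dim, n_particles=100, batch=20,
                 lam=1.0, sigma=0.7, beta=30.0, dt=0.02, steps=3000):
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    x_bar = X.mean(axis=0)
    for _ in range(steps):
        # Mini-batch estimate of the weighted average (the consensus point).
        idx = rng.choice(n_particles, size=batch, replace=False)
        fv = f(X[idx])
        w = np.exp(-beta * (fv - fv.min()))      # shift avoids exp underflow
        x_bar = (w[:, None] * X[idx]).sum(axis=0) / w.sum()
        # Drift toward the consensus point plus component-wise geometric noise.
        diff = X - x_bar
        X += -lam * dt * diff + sigma * np.sqrt(dt) * diff * rng.standard_normal(X.shape)
    return x_bar

# Shifted sphere with unique minimizer at (1, ..., 1).
f = lambda X: ((X - 1.0) ** 2).sum(axis=-1)
x_star = cbo_minimize(f, dim=5)
```

Because the noise is applied per component and proportional to the distance from consensus, it vanishes as the particles collapse, which is what makes the drift rate dimension-independent in the abstract's argument.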

7/10 relevant

arXiv

Encoding Selection for Solving Hamiltonian Cycle **Problems** with ASP

**optimization** **problems** to have alternative equivalent encodings in ASP. Typically, none of them is uniformly better than the others when evaluated on broad classes of **problem** instances. We claim that one can improve the solving ability of ASP by using machine learning techniques to select encodings likely to perform well on a given instance. We substantiate this claim by studying the Hamiltonian cycle **problem**. We propose several equivalent encodings of the **problem** and several classes of hard instances. We build models to predict the behavior of each encoding, and then show that selecting encodings for a given instance using the learned performance predictors leads to significant performance gains.
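
The selection scheme can be sketched with synthetic data: fit one runtime predictor per encoding from instance features, then pick the encoding with the smallest predicted runtime. The least-squares models and the linear ground truth below are stand-ins for whatever learned predictors and real runtimes the paper uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 3 alternative encodings, instances described by 4 features.
# A hidden linear model generates each encoding's runtime; the learner only
# sees noisy training observations.
true_w = rng.uniform(0.5, 2.0, size=(3, 4))
X_train = rng.uniform(0.0, 1.0, size=(200, 4))
runtimes = X_train @ true_w.T + 0.01 * rng.standard_normal((200, 3))

# One least-squares runtime predictor per encoding.
models = [np.linalg.lstsq(X_train, runtimes[:, e], rcond=None)[0]
          for e in range(3)]

def select_encoding(features):
    """Pick the encoding with the smallest predicted runtime."""
    return int(np.argmin([features @ w for w in models]))

X_test = rng.uniform(0.0, 1.0, size=(100, 4))
picks = np.array([select_encoding(x) for x in X_test])
oracle = (X_test @ true_w.T).argmin(axis=1)      # truly best encoding per instance
```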

4/10 relevant

arXiv

RUN-CSP: Unsupervised Learning of Message Passing Networks for Binary
Constraint Satisfaction **Problems**

Constraint satisfaction **problems** form an important and wide class of combinatorial search and **optimization** **problems** with many applications in AI and other areas. We introduce a recurrent neural network architecture, RUN-CSP (Recurrent Unsupervised Neural Network for Constraint Satisfaction Problems), to train message passing networks solving binary constraint satisfaction **problems** (CSPs) or their **optimization** versions (Max-CSP). The architecture is universal in the sense that it works for all binary CSPs: depending on the constraint language, we can automatically design a loss function, which is then used to train generic neural nets. In this paper, we experimentally evaluate our approach on the 3-colorability **problem** (3-Col) and its **optimization** version (Max-3-Col) and on the maximum 2-satisfiability **problem** (Max-2-Sat). We also extend the framework to related **optimization** **problems** such as the maximum independent set **problem** (Max-IS). Training is unsupervised: we train the network on arbitrary (unlabeled) instances of the **problems**. Moreover, we experimentally show that it suffices to train on relatively small instances; the resulting message passing network will perform well on much larger instances (at least 10 times larger).
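
The "automatically designed loss" idea for a binary constraint language can be illustrated without a neural network: for 3-coloring, a differentiable penalty scores the probability that an edge's endpoints receive the same color, and any message passing net could be trained against it. The sketch below optimizes raw per-node logits directly (no network) on a 6-cycle, with manual gradients; it is purely illustrative, not RUN-CSP itself.

```python
import numpy as np

rng = np.random.default_rng(0)

edges = [(i, (i + 1) % 6) for i in range(6)]     # a 6-cycle, 3 colors available
theta = 0.1 * rng.standard_normal((6, 3))        # per-node color logits

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def conflict_loss(p):
    # Total probability mass on "both endpoints share a color", over all edges.
    return sum(p[u] @ p[v] for u, v in edges)

losses = []
for _ in range(5000):
    p = softmax(theta)
    losses.append(conflict_loss(p))
    g = np.zeros_like(p)                         # dL/dp
    for u, v in edges:
        g[u] += p[v]
        g[v] += p[u]
    # Backpropagate through the softmax, row by row.
    theta -= 0.5 * p * (g - (p * g).sum(axis=1, keepdims=True))

coloring = softmax(theta).argmax(axis=1)
violated = sum(int(coloring[u] == coloring[v]) for u, v in edges)
```

Training needs no labels: the loss is built from the constraint language alone, which is the sense in which it can be designed automatically for any binary CSP.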

5/10 relevant

arXiv

A Compressed Coding Scheme for Evolutionary Algorithms in Mixed-Integer
Programming: A Case Study on Multi-Objective Constrained Portfolio
**Optimization**

**optimization** problems, is adopted frequently for these **problems**. In this work, we discuss the coding scheme for MOEA in MINLP; the major discussion focuses on the constrained portfolio **optimization** problem, which is a classic financial **problem** and can be naturally modeled as MINLP. We point out the challenge faced by a direct coding scheme for MOEA in MINLP: searching in multiple search spaces is very complicated. Thus, a Compressed Coding Scheme (CCS), which converts an MINLP **problem** into a continuous problem, is proposed to address this challenge. The analyses and experiments on 20 portfolio benchmark instances, in which the number of available assets ranges from 31 to 2235, consistently indicate that CCS is not only efficient but also robust for dealing with constrained multi-objective portfolio **optimization**.
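
The abstract does not spell out how CCS works, so here is a hedged sketch of one way a continuous genotype can encode a cardinality-constrained portfolio: keep the $k$ largest entries as the selected assets and renormalize them into weights. The function name and the top-$k$ rule are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def decode_portfolio(genotype, k):
    """Map a continuous vector with positive entries to a feasible portfolio:
    exactly k assets selected, non-negative weights summing to 1."""
    chosen = np.argsort(genotype)[-k:]           # indices of the k largest genes
    weights = np.zeros_like(genotype)
    weights[chosen] = genotype[chosen] / genotype[chosen].sum()
    return chosen, weights

rng = np.random.default_rng(42)
genotype = rng.uniform(0.01, 1.0, size=31)       # 31 assets, as in the smallest benchmark
chosen, w = decode_portfolio(genotype, k=10)
```

The point of such a decoding is that the evolutionary algorithm then searches only a single continuous space, while every genotype still maps to a solution satisfying the integer (cardinality) constraint.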

8/10 relevant

arXiv

On optimum design of frame structures

**Optimization** of frame structures is formulated as a non-convex **optimization** problem, which is currently solved to local optimality. In this contribution, we investigate four **optimization** approaches: (i) general non-linear **optimization**, (ii) the optimality criteria method, (iii) non-linear semidefinite programming, and (iv) polynomial **optimization**. We show that polynomial **optimization** solves the frame structure **optimization** problem to global optimality by building the (moment-sums-of-squares) hierarchy of convex linear semidefinite programming problems, and it also provides guaranteed lower and upper bounds on the optimal design. Finally, we solve three sample **optimization** **problems** and conclude that the local **optimization** approaches may indeed converge to local optima, without any solution quality measure, or even to infeasible points. These issues are readily overcome by using polynomial **optimization**, which exhibits finite convergence, at the price of higher computational demands.

7/10 relevant

arXiv

Learning adiabatic quantum algorithms for solving **optimization** **problems**

**problem** Hamiltonian whose ground state corresponds to the solution of the given **problem** and an evolution schedule such that the adiabatic condition is satisfied. A correct choice of these elements is crucial for an efficient adiabatic quantum computation. In this paper we propose a hybrid quantum-classical algorithm to solve **optimization** **problems** with an adiabatic machine, assuming restrictions on the class of available **problem** Hamiltonians. The scheme is based on repeated calls to the quantum machine within a classical iterative structure. In particular, we present a technique to learn the encoding of a given **optimization** **problem** into a **problem** Hamiltonian, and we prove the convergence of the algorithm. Moreover, the output of the proposed algorithm can be used to learn efficient adiabatic algorithms from examples.
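
The "repeated calls to the quantum machine within a classical iterative structure" can be mimicked entirely classically: simulate the adiabatic machine as an oracle that returns the ground state of a diagonal problem Hamiltonian, and let a classical loop search over the encoding coefficients. Everything below (the diagonal restriction, the random search) is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

n_states = 8
f = rng.uniform(0.0, 1.0, size=n_states)         # objective over the basis states

def adiabatic_machine(c):
    """Stand-in for the quantum device: returns the ground state of the
    diagonal problem Hamiltonian H(c) = -sum_j c_j |j><j|."""
    return int(np.argmax(c))                      # ground state of diag(-c)

# Classical outer loop: search over encodings c, one machine call per candidate.
best_state, best_val = None, np.inf
for _ in range(200):
    c = rng.uniform(0.0, 1.0, size=n_states)      # candidate encoding
    s = adiabatic_machine(c)
    if f[s] < best_val:
        best_state, best_val = s, f[s]
```

In the paper's setting the encoding would be updated by a learning rule with a convergence proof rather than by random sampling; the sketch only shows the oracle-in-a-loop structure.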

10/10 relevant

arXiv