Continuation method for PDE-constrained global **optimization**: Analysis
and application to the shallow water equations

**Optimization** **problems** constrained by discretized nonlinear partial differential equations may be solved to global optimality using an interior point continuation method. The solution procedure rests on a nested homotopy. The inner homotopy solves a barrier **problem** by driving the barrier parameter to zero. The outer homotopy makes use of a linear PDE that approximates the nonlinear PDE, and deforms this approximating linear PDE into the nonlinear PDE in a manner that ensures that the discretized constraint gradients remain linearly independent. Provided that the objective is convex and the search space remains path-connected, it is shown how a continuation method applied to the nested homotopy yields globally optimal solutions. As a case study, an appropriate discretization and homotopy for the shallow water equations is presented, together with a numerical experiment that solves a nonconvex numerical optimal control **problem** to global optimality. The approach is suitable for closed-loop nonconvex model predictive control of large-scale cyber-physical systems.
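
As a hedged illustration of the inner homotopy only: the sketch below drives a logarithmic barrier parameter to zero on a toy scalar problem (minimize $(x-3)^2$ subject to $x \le 1$), warm-starting each barrier solve from the previous solution. The problem, schedule, and tolerances are invented for illustration; the paper's outer PDE homotopy is not modeled here.

```python
def solve_barrier(mu, x0, tol=1e-10, max_iter=100):
    # Newton's method on the barrier objective (x - 3)^2 - mu*log(1 - x), x < 1.
    x = x0
    for _ in range(max_iter):
        g = 2.0 * (x - 3.0) + mu / (1.0 - x)   # gradient
        h = 2.0 + mu / (1.0 - x) ** 2          # Hessian (always positive)
        step = g / h
        while x - step >= 1.0:                 # damp to stay inside the barrier
            step *= 0.5
        x -= step
        if abs(g) < tol:
            break
    return x

def continuation(mus=(1.0, 0.1, 0.01, 0.001), x0=0.0):
    # inner homotopy: drive the barrier parameter toward zero,
    # warm-starting each solve from the previous solution
    x = x0
    for mu in mus:
        x = solve_barrier(mu, x)
    return x
```

As `mu` shrinks, the barrier minimizers trace a central path that approaches the constrained optimum $x = 1$.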

10/10 relevant

arXiv

Duality of **optimization** **problems** with gauge functions

Earlier work introduced a positively homogeneous **optimization** problem, which includes many important problems, such as the absolute-value and the gauge **optimizations**; it presented a closed form of the dual formulation for the problem, and showed weak duality and the equivalence to the Lagrangian dual under some conditions. In this work, we focus on a special positively homogeneous **optimization** problem, whose objective function and constraints consist of some gauge and linear functions. We prove not only weak duality but also strong duality. We also study necessary and sufficient optimality conditions associated with the **problem**. Moreover, we give sufficient conditions under which we can recover a primal solution from a Karush-Kuhn-Tucker point of the dual formulation. Finally, we discuss how to extend the above results to general convex **optimization** **problems** by considering the so-called perspective functions.
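
For reference, the gauge (Minkowski) function of a convex set $C$ containing the origin is defined as follows (standard textbook material, not taken from the paper):

$$
\gamma_C(x) \;=\; \inf\{\lambda \ge 0 \;:\; x \in \lambda C\}.
$$

Such functions are nonnegative, convex, and positively homogeneous, i.e. $\gamma_C(\alpha x) = \alpha\,\gamma_C(x)$ for $\alpha \ge 0$; every norm is the gauge of its unit ball.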

10/10 relevant

arXiv

Solving polyhedral d.c. **optimization** **problems** via concave minimization

The **problem** of minimizing the difference of two convex functions is called a polyhedral d.c. **optimization** **problem** if at least one of the two component functions is polyhedral. We characterize the existence of global optimal solutions of polyhedral d.c. **optimization** **problems**. This result is used to show that, whenever the existence of an optimal solution can be certified, polyhedral d.c. **optimization** **problems** can be solved by certain concave minimization algorithms. No further assumptions are necessary in case the first component is polyhedral, and only some mild assumptions on the first component are required in case the second component is polyhedral. In case both component functions are polyhedral, we obtain a primal and dual existence test and a primal and dual solution procedure. Numerical examples are discussed.
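
A hedged sketch of the general d.c. idea, using the standard DCA (difference-of-convex algorithm) rather than the concave minimization algorithms the paper analyzes; the toy problem and grid solver are invented for illustration. Minimizing $f(x) = g(x) - h(x)$ proceeds by linearizing the subtracted convex part $h$ at each iterate:

```python
import numpy as np

def dca_minimize(g, h_grad, x0, lo, hi, iters=30):
    # DCA sketch: at each step replace h by its linearization at the current
    # iterate, then globally solve the resulting convex 1-D subproblem
    # (here by brute-force grid search for simplicity)
    grid = np.linspace(lo, hi, 10001)
    x = x0
    for _ in range(iters):
        y = h_grad(x)                               # (sub)gradient of h at x
        x = float(grid[np.argmin(g(grid) - y * grid)])
    return x

# toy polyhedral d.c. problem: f(x) = |x| - 0.5*x^2 on [-3, 3]
# (g = |.| is polyhedral; the global minima sit at the boundary, x = +/-3)
x_star = dca_minimize(g=abs, h_grad=lambda x: x, x0=2.0, lo=-3.0, hi=3.0)
```

Each iteration solves a convex surrogate exactly, so the objective is monotonically non-increasing along the iterates.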

10/10 relevant

arXiv

Hardness Amplification of **Optimization** **Problems**

We show hardness amplification for **optimization** **problems** based on the technique of direct products. We say that an **optimization** **problem** $\Pi$ is direct product feasible if it is possible to efficiently aggregate any $k$ instances of $\Pi$ into one large instance of $\Pi$ such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all $k$ smaller instances. Given a direct product feasible **optimization** **problem** $\Pi$, our hardness amplification theorem may be informally stated as follows: if there is a distribution $\mathcal{D}$ over instances of $\Pi$ of size $n$ such that every randomized algorithm running in time $t(n)$ fails to solve $\Pi$ on a $\frac{1}{\alpha(n)}$ fraction of inputs sampled from $\mathcal{D}$, then, assuming some relationships on $\alpha(n)$ and $t(n)$, there is a distribution $\mathcal{D}'$ over instances of $\Pi$ of size $O(n\cdot \alpha(n))$ such that every randomized algorithm running in time $\frac{t(n)}{\mathrm{poly}(\alpha(n))}$ fails to solve $\Pi$ on a $\frac{99}{100}$ fraction of inputs sampled from $\mathcal{D}'$. As a consequence of this theorem, we show hardness amplification for **problems** in various classes: NP-hard **problems** like Max-Clique, Knapsack, and Max-SAT; **problems** in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even **problems** in TFNP such as Factoring and computing a Nash equilibrium.
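
To make "direct product feasible" concrete, here is a hedged toy sketch for Max-SAT (all function names invented; the paper's construction is more involved): $k$ instances are relabeled onto disjoint variable blocks, their clause sets are unioned, and an optimal assignment of the aggregate restricts to optimal assignments of each block.

```python
from itertools import product

def max_sat(n_vars, clauses):
    # brute-force Max-SAT; a literal l > 0 means x_{l-1} is true,
    # l < 0 means x_{-l-1} is false
    best, best_assign = -1, None
    for assign in product((False, True), repeat=n_vars):
        sat = sum(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        if sat > best:
            best, best_assign = sat, assign
    return best, best_assign

def aggregate(instances):
    # direct-product aggregation: shift each instance's variables into a
    # disjoint block and take the union of all clauses
    clauses, offset = [], 0
    for n, cs in instances:
        for c in cs:
            clauses.append(tuple((abs(l) + offset) * (1 if l > 0 else -1)
                                 for l in c))
        offset += n
    return offset, clauses

def split_solution(instances, assign):
    # an optimal aggregate assignment restricts to an optimal assignment
    # of every block, because the blocks share no variables
    parts, offset = [], 0
    for n, _ in instances:
        parts.append(assign[offset:offset + n])
        offset += n
    return parts
```

Because the aggregate objective decomposes as a sum over blocks, solving the one large instance optimally solves all $k$ small instances at once, which is exactly the property the amplification theorem exploits.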

10/10 relevant

arXiv

Learning adiabatic quantum algorithms for solving **optimization** **problems**

An adiabatic quantum computation requires a **problem** Hamiltonian whose ground state corresponds to the solution of the given **problem**, and an evolution schedule such that the adiabatic condition is satisfied. A correct choice of these elements is crucial for an efficient adiabatic quantum computation. In this paper we propose a hybrid quantum-classical algorithm to solve **optimization** **problems** with an adiabatic machine, assuming restrictions on the class of available **problem** Hamiltonians. The scheme is based on repeated calls to the quantum machine within a classical iterative structure. In particular, we present a technique to learn the encoding of a given **optimization** **problem** into a **problem** Hamiltonian, and we prove the convergence of the algorithm. Moreover, the output of the proposed algorithm can be used to learn efficient adiabatic algorithms from examples.

10/10 relevant

arXiv

A generalized projection-based scheme for solving convex constrained
**optimization** **problems**

A generalized projection-based scheme is presented for solving a convex constrained **optimization** **problem**. The general idea is to transform the original **optimization** **problem** into a sequence of feasibility **problems** by iteratively constraining the objective function from above until the feasibility **problem** is inconsistent. Each of the feasibility **problems** may then be solved by any of the existing projection methods. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, unlike existing similar methods. We also apply the newly introduced concept of superiorization to the **optimization** formulation and compare its performance to our scheme. We provide numerical results for convex quadratic test **problems** as well as for real-life **optimization** **problems** coming from medical treatment planning.
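
A hedged toy sketch of the scheme's driving idea (problem, sets, and tolerances all invented for illustration): minimize $x + y$ over the unit disk intersected with $\{x \ge -0.5\}$ by bisecting on an objective level $t$, checking feasibility of the level set $\{x + y \le t\}$ jointly with the constraints via cyclic projections.

```python
import math

def project_disk(p):
    # projection onto the unit disk
    x, y = p
    r = math.hypot(x, y)
    return (x / r, y / r) if r > 1.0 else p

def project_halfspace(p, a, b, c):
    # projection onto the half-space {(x, y) : a*x + b*y <= c}
    x, y = p
    v = a * x + b * y - c
    if v <= 0.0:
        return p
    n2 = a * a + b * b
    return (x - a * v / n2, y - b * v / n2)

def feasible(t, iters=1000, tol=1e-3):
    # cyclic projections onto the disk, {x >= -0.5}, and the level set
    # {x + y <= t}; report whether the limit (nearly) lies in all three
    p = (0.0, 0.0)
    for _ in range(iters):
        p = project_disk(p)
        p = project_halfspace(p, -1.0, 0.0, 0.5)  # x >= -0.5
        p = project_halfspace(p, 1.0, 1.0, t)     # x + y <= t
    x, y = p
    return math.hypot(x, y) <= 1.0 + tol and x >= -0.5 - tol

def level_set_solve(lo=-3.0, hi=0.0, rounds=30):
    # constrain the objective from above until the feasibility problem
    # becomes (numerically) inconsistent; hi stays feasible throughout
    for _ in range(rounds):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The feasibility subproblems here use only elementary projections, which mirrors the scheme's appeal: no step of the outer loop ever needs to solve an optimization problem directly.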

10/10 relevant

arXiv

A Molecular Computing Approach to Solving **Optimization** **Problems** via Programmable Microdroplet Arrays

Molecular computing enables new classes of **problems** to be solved. By using droplets and room-temperature processes, molecular computing is a promising research direction with potential biocompatibility and cost advantages. In this work, we present a new approach to computation using a network of chemical reactions taking place within an array of spatially localized droplets whose contents represent bits of information. Combinatorial **optimization** **problems** are mapped to an Ising Hamiltonian and encoded in the form of intra- and inter-droplet interactions. The **problem** is solved by initiating the chemical reactions within the droplets and allowing the system to reach a steady state; in effect, we anneal the effective spin system to its ground state. We propose two implementations of the idea, ordered in terms of increasing complexity. First, we introduce a hybrid classical-molecular computer where droplet properties are measured and fed into a classical computer. Based on the given **optimization** problem, the classical computer then directs further reactions via optical or electrochemical inputs. A simulated model of the hybrid classical-molecular computer is used to solve Boolean satisfiability and a lattice protein model. Second, we propose architectures for purely molecular computers that rely on pre-programmed nearest-neighbour inter-droplet communication via energy or mass transfer.
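
The Ising mapping and annealing-to-ground-state step can be mimicked classically; the sketch below is a generic simulated-annealing stand-in (cooling schedule and toy couplings invented), not a model of the droplet chemistry.

```python
import math
import random

def ising_energy(spins, J, h):
    # H = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, with s_i in {-1, +1}
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def anneal(J, h, sweeps=3000, t_hot=2.0, t_cold=0.01, seed=1):
    # single-spin-flip Metropolis dynamics with geometric cooling
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for k in range(sweeps):
        t = t_hot * (t_cold / t_hot) ** (k / (sweeps - 1))
        i = rng.randrange(n)
        old = ising_energy(spins, J, h)
        spins[i] = -spins[i]
        delta = ising_energy(spins, J, h) - old
        if delta > 0 and rng.random() >= math.exp(-delta / t):
            spins[i] = -spins[i]  # reject the uphill move
    return spins
```

On a small ferromagnetic chain the dynamics settle into a near-ground spin configuration, which is the classical analogue of the droplets' steady state.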

10/10 relevant

chemRxiv

Efficient partition of integer **optimization** **problems** with one-hot encoding

Quantum annealing is an algorithm for solving combinatorial **optimization** problems, and D-Wave Systems Inc. has developed hardware for implementing this algorithm. The current version of the D-Wave quantum annealer can solve unconstrained binary **optimization** **problems** with a limited number of binary variables, although the cost functions of many practical **problems** are defined by a large number of integer variables. To solve these **problems** with the quantum annealer, the integer variables are generally binarized with one-hot encoding, and the binarized **problem** is partitioned into small subproblems. However, the entire search space of the binarized **problem** is considerably larger than that of the original integer **problem** and is dominated by infeasible solutions. Therefore, to efficiently solve large **optimization** **problems** with one-hot encoding, partitioning methods that extract subproblems with as many feasible solutions as possible are required.
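
The one-hot blow-up the abstract describes is easy to quantify; a small hedged sketch (function names invented): an integer variable with $m$ values becomes $m$ bits constrained to sum to one, so only $m$ of the $2^m$ binary states are feasible.

```python
from itertools import product

def one_hot_encode(value, size):
    # integer value in {0, ..., size-1} -> one-hot bit vector of length size
    bits = [0] * size
    bits[value] = 1
    return bits

def one_hot_decode(bits):
    # valid only for feasible (exactly one bit set) vectors
    assert sum(bits) == 1, "infeasible one-hot vector"
    return bits.index(1)

def feasible_fraction(size):
    # fraction of the binary search space satisfying the one-hot constraint:
    # size / 2**size, which vanishes rapidly as size grows
    good = sum(1 for bits in product((0, 1), repeat=size) if sum(bits) == 1)
    return good / 2 ** size
```

For example, a single 16-valued integer variable already leaves only 16 of 65536 binary states feasible, which is why the partitioning methods the abstract calls for aim to keep subproblems feasibility-rich.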

10/10 relevant

arXiv

Euclidean correlations in combinatorial **optimization** **problems**: a
statistical physics approach

In the first chapter I introduce combinatorial **optimization** problems from the statistical physics perspective. The starting point is the set of motivations that brought physicists together with computer scientists and mathematicians to work on this beautiful and deep topic. I give some elements of complexity theory, and I motivate why the point of view of statistical physics leads to many interesting results, as well as to new questions. I discuss the connection between combinatorial **optimization** **problems** and spin glasses. Finally, I briefly review some topics of large deviation theory, as a way to go beyond average quantities. As a concrete example of this, I show how the replica method can be used to explore the large deviations of a well-known toy model of spin glasses, the p-spin spherical model. In the second chapter I specialize in Euclidean combinatorial **optimization** **problems**. In particular, I explain why these problems, when embedded in a finite-dimensional Euclidean space, are difficult to deal with. I analyze several specific **problems** in one dimension to explain a quite general technique for dealing with one-dimensional Euclidean combinatorial **optimization** **problems**. Whenever possible, and in a detailed way for the traveling-salesman-**problem** case, I also discuss how to proceed in two (and more) dimensions. In the last chapter I outline a promising approach to tackling hard combinatorial **optimization** problems: quantum computing. After giving a quick overview of the paradigm of quantum computation, I discuss in detail the application of the so-called quantum annealing algorithm to a specific case of the matching problem, providing a comparison between the performance of a recent quantum annealer machine and a classical supercomputer equipped with a heuristic algorithm. Finally, I draw the conclusions of my work and suggest some interesting directions for future studies.

10/10 relevant

arXiv

Guaranteed lower bounds for cost functionals of time-periodic parabolic
**optimization** **problems**

Guaranteed lower bounds (minorants) are derived for cost functionals of a time-periodic parabolic optimal control **problem**. Together with previous results on upper bounds (majorants) for one of the cost functionals, both minorants and majorants lead to two-sided estimates of functional type for the optimal control **problem**. Both upper and lower bounds are derived for the second new cost functional subject to the same parabolic PDE constraints, but where the target is a desired gradient. The time-periodic optimal control **problems** are discretized by the multiharmonic finite element method, leading to large systems of linear equations having a saddle point structure. The derivation of preconditioners for the minimal residual method for the new **optimization** **problem** is discussed in more detail. Finally, several numerical experiments for both optimal control **problems** are presented, confirming the theoretical results obtained. This work provides the basis for an adaptive scheme for time-periodic **optimization** **problems**.

10/10 relevant

arXiv