**Optimization with nonstationary, nonlinear monolithic fluid-structure interaction**

We consider optimization settings for nonlinear, nonstationary fluid-structure interaction. The problem is formulated in a monolithic fashion using the arbitrary Lagrangian-Eulerian framework to set up the fluid-structure forward problem. In the optimization approach, either optimal control or parameter estimation problems are treated. In the latter, the stiffness of the solid is estimated from given reference values. In the numerical solution, the optimization problem is solved with a gradient-based algorithm. The nonlinear subproblems of the FSI forward problem are solved with a Newton method including line search. Specifically, we formally provide the backward-in-time running adjoint state used for gradient computations. Our algorithmic developments are demonstrated with numerical examples, such as extensions of the well-known fluid-structure benchmark settings and a flapping membrane test in a channel flow with elastic walls.
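The adjoint-based gradient computation can be illustrated with a minimal sketch (our own construction, not the paper's FSI code): the forward problem is replaced by a scalar decay model x' = -theta*x discretized with forward Euler, and a backward-in-time adjoint sweep yields dJ/dtheta for estimating a stiffness-like parameter from reference values.

```python
# Minimal sketch (our construction, not the paper's FSI solver): discrete
# adjoint for estimating a decay parameter theta in x' = -theta * x,
# mirroring the backward-in-time adjoint used for gradient computations.

def forward(theta, x0=1.0, h=0.01, n=100):
    xs = [x0]
    for _ in range(n):
        xs.append((1.0 - h * theta) * xs[-1])   # forward Euler step
    return xs

def loss_and_grad(theta, ref, x0=1.0, h=0.01):
    n = len(ref) - 1
    xs = forward(theta, x0, h, n)
    loss = sum((x - r) ** 2 for x, r in zip(xs, ref))
    # Backward-in-time adjoint sweep: lam[k] accumulates dJ/dx_k.
    lam = [0.0] * (n + 1)
    lam[n] = 2.0 * (xs[n] - ref[n])
    for k in range(n - 1, -1, -1):
        lam[k] = 2.0 * (xs[k] - ref[k]) + (1.0 - h * theta) * lam[k + 1]
    # The explicit theta-derivative of the step x_{k+1} is -h * x_k.
    grad = sum(lam[k + 1] * (-h * xs[k]) for k in range(n))
    return loss, grad

# Parameter estimation from given reference values: gradient descent
# recovers the theta that generated the reference trajectory.
ref = forward(2.0)
theta = 0.5
for _ in range(400):
    loss, g = loss_and_grad(theta, ref)
    theta -= 0.01 * g
```

The adjoint pass costs one backward sweep regardless of the number of parameters, which is what makes this approach attractive for PDE-constrained optimization.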

4/10 relevant

arXiv

**On parametric second-order conic optimization**

We study a parametric second-order conic optimization problem, where the objective function is perturbed along a fixed direction. We introduce the notions of nonlinearity interval and transition point of the optimal partition, and we prove that the set of transition points is finite. Additionally, on the basis of Painlevé-Kuratowski set convergence, we provide sufficient conditions for the existence of a nonlinearity interval, and we show that the continuity of the primal or dual optimal set mapping might fail on a nonlinearity interval. We then propose, under the strict complementarity condition, an iterative procedure to compute a nonlinearity interval of the optimal partition. Furthermore, under primal and dual nondegeneracy conditions, we show that a transition point can be numerically identified from the higher-order derivatives of the Lagrange multipliers associated with a nonlinear reformulation of the parametric second-order conic optimization problem. Our theoretical results are supported by numerical experiments.

4/10 relevant

arXiv

**Discounted Reinforcement Learning Is Not an Optimization Problem**

Discounted reinforcement learning is not an optimization problem in its usual formulation, so when using function approximation there is no optimal policy. We substantiate these claims, then go on to address some misconceptions about discounting and its connection to the average reward formulation. We encourage researchers to adopt rigorous optimization approaches, such as maximizing average reward, for reinforcement learning in continuing tasks.
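The tension between discounting and continuing tasks can be seen in a toy sketch (our own construction, not from the paper): two periodic reward streams whose discounted ranking depends on the discount factor gamma, while the average-reward criterion is parameter-free.

```python
# Toy sketch (our own construction): two "policies" produce periodic
# reward streams in a continuing task. The discounted criterion ranks
# them differently depending on gamma; average reward does not.

def discounted_return(cycle, gamma, horizon=10000):
    return sum((gamma ** t) * cycle[t % len(cycle)] for t in range(horizon))

def average_reward(cycle):
    return sum(cycle) / len(cycle)

a = [5, 0, 0, 0]   # average reward 1.25
b = [0, 0, 0, 6]   # average reward 1.50

# Small gamma prefers the early reward of a; large gamma prefers b.
prefers_a_low = discounted_return(a, gamma=0.5) > discounted_return(b, gamma=0.5)
prefers_b_high = discounted_return(b, gamma=0.99) > discounted_return(a, gamma=0.99)
```

Since the ranking of policies changes with gamma, "maximize the discounted return" does not specify a single objective for continuing tasks, which is the kind of ambiguity the average-reward formulation avoids.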

7/10 relevant

arXiv

**PIETOOLS: A Matlab Toolbox for Manipulation and Optimization of Partial Integral Operators**

PIETOOLS lets the user set up an optimization problem on PI operators using a syntax similar to the sdpvar class in YALMIP. Use of the resulting Linear Operator Inequalities (LOIs) is demonstrated on several examples, including stability analysis of a PDE, bounding operator norms, and verifying integral inequalities. The result is that PIETOOLS, packaged with SOSTOOLS and MULTIPOLY, offers a scalable, user-friendly and computationally efficient toolbox for parsing, performing algebraic operations, and setting up and solving convex optimization problems on PI operators.

7/10 relevant

arXiv

**Scalable Global Optimization via Local Bayesian Optimization**

Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the $\texttt{TuRBO}$ algorithm, which fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that $\texttt{TuRBO}$ outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
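A heavily simplified sketch of the idea (our own, not the actual TuRBO implementation, which fits local Gaussian-process models and allocates samples by Thompson sampling): several local trust regions, epsilon-greedy allocation to the region with the best incumbent, expand a region on success and shrink it on failure.

```python
import random

# Simplified stand-in for the local-model / implicit-bandit idea:
# random search inside adaptive trust regions, not actual TuRBO.
random.seed(0)

def f(x):                                  # toy black-box objective
    return (x - 3.0) ** 2 + 0.5 * abs(x)   # global minimum 1.4375 at x = 2.75

regions = []
for _ in range(3):
    c = random.uniform(-10.0, 10.0)
    regions.append({"best_x": c, "best_f": f(c), "radius": 4.0})

for _ in range(200):
    # Bandit-style allocation: usually exploit the region with the best
    # incumbent, occasionally explore another one.
    if random.random() < 0.2:
        r = random.choice(regions)
    else:
        r = min(regions, key=lambda reg: reg["best_f"])
    x = r["best_x"] + random.uniform(-r["radius"], r["radius"])
    y = f(x)
    if y < r["best_f"]:                    # success: move center, expand
        r["best_x"], r["best_f"] = x, y
        r["radius"] = min(2.0 * r["radius"], 8.0)
    else:                                  # failure: shrink trust region
        r["radius"] = max(0.5 * r["radius"], 1e-3)

best = min(r["best_f"] for r in regions)
```

The shrinking radius is what keeps each search local, and restarting or reallocating among several regions is what recovers global coverage.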

4/10 relevant

arXiv

**Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach**

For composite optimization problems with linear equality constraints, we utilize a Lyapunov-based approach to establish the global exponential stability of the primal-dual gradient flow dynamics based on the proximal augmented Lagrangian. The result holds when the differentiable part of the objective function is strongly convex with a Lipschitz continuous gradient; the non-differentiable part is proper, lower semi-continuous, and convex; and the matrix in the linear constraint has full row rank. Our quadratic Lyapunov function generalizes recent results from strongly convex problems with either affine equality or inequality constraints to a broader class of composite optimization problems with nonsmooth regularizers, and it provides a worst-case lower bound of the exponential decay rate. Finally, we use computational experiments to demonstrate that our convergence rate estimate is less conservative than existing alternatives.
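In the smooth special case (no nonsmooth regularizer), the primal-dual gradient flow on the augmented Lagrangian can be sketched with a forward-Euler discretization; the tiny instance below is our own toy example, not one from the paper.

```python
# Toy sketch: forward-Euler discretization of primal-dual gradient flow
# on the augmented Lagrangian for min 0.5*(x1^2 + x2^2) s.t. x1 + x2 = 2.
# The KKT solution is x = (1, 1), lam = -1.

mu, dt = 1.0, 0.05              # augmentation weight, step size
x, lam = [0.0, 0.0], 0.0

for _ in range(2000):
    residual = x[0] + x[1] - 2.0             # Ax - b
    # grad_x of L_mu = f(x) + lam*(Ax - b) + (mu/2)*(Ax - b)^2
    gx = [x[0] + lam + mu * residual,
          x[1] + lam + mu * residual]
    x = [x[0] - dt * gx[0], x[1] - dt * gx[1]]   # primal descent
    lam += dt * residual                          # dual ascent
```

For this strongly convex quadratic with a full-row-rank constraint, the trajectory contracts exponentially to the KKT point, which is the behavior the paper's Lyapunov analysis certifies in far greater generality.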

5/10 relevant

arXiv

**Discrete Polynomial Optimization with Coherent Networks of Condensates and Complex Coupling Switching**

Coherent networks of condensates can be used to solve discrete polynomial optimization problems. We show how to facilitate the search for the global solution by invoking complex couplings in the system. This approach offers a highly flexible new kind of computation based on gain-dissipative simulators with complex coupling switching.

4/10 relevant

arXiv

**How to Evaluate Machine Learning Approaches for Combinatorial Optimization: Application to the Travelling Salesman Problem**

Combinatorial optimization is the field devoted to the study and practice of algorithms that solve NP-hard problems. As Machine Learning (ML) and deep learning have become popular, several research groups have started to use ML to solve combinatorial optimization problems, such as the well-known Travelling Salesman Problem (TSP). Based on deep (reinforcement) learning, new models and architectures for the TSP have been successively developed and have achieved increasing performance. At the time of writing, state-of-the-art models provide solutions to TSP instances of 100 cities that are roughly 1.33% away from optimal. However, despite these apparently positive results, the performance remains far from what can be achieved with a specialized search procedure. In this paper, we address the limitations of ML approaches for solving the TSP and investigate two fundamental questions: (1) how can we measure the level of accuracy of the pure ML component of such methods; and (2) what is the impact of a search procedure plugged inside an ML model on performance? To answer these questions, we propose a new metric, the ratio of optimal decisions (ROD), based on a fair comparison with a parametrized oracle mimicking an ML model with a controlled accuracy. All the experiments are carried out on four state-of-the-art ML approaches dedicated to solving the TSP. Finally, we have made ROD open-source in order to ease future research in the field.
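A hypothetical miniature of the parametrized-oracle idea (the instance and helper names are ours, not the paper's open-source ROD code): an oracle builds a tour by taking the optimal next decision with probability rho and a random one otherwise, and the score counts the fraction of construction decisions that match the optimal tour.

```python
import itertools, math, random

# Illustrative mini-TSP: brute-force optimum, a rho-accurate oracle, and
# a ratio-of-optimal-decisions score in the spirit of the ROD metric.
random.seed(1)
cities = [(random.random(), random.random()) for _ in range(6)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute-force optimal tour (city 0 fixed as the start).
optimal = min((list((0,) + p) for p in itertools.permutations(range(1, 6))),
              key=tour_length)
next_opt = {optimal[i]: optimal[(i + 1) % 6] for i in range(6)}

def oracle_tour(rho):
    """Take the optimal next decision with probability rho, else random."""
    tour, remaining = [0], set(range(1, 6))
    while remaining:
        cur = tour[-1]
        if next_opt[cur] in remaining and random.random() < rho:
            nxt = next_opt[cur]                    # optimal decision
        else:
            nxt = random.choice(sorted(remaining)) # random decision
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def rod(tour):
    """Ratio of construction decisions that match the optimal tour."""
    return sum(next_opt[tour[i]] == tour[i + 1]
               for i in range(len(tour) - 1)) / (len(tour) - 1)
```

Sweeping rho and recording tour quality gives a calibration curve against which a learned model's decisions can be compared, which is the fair-comparison role the oracle plays in the paper.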

5/10 relevant

arXiv

**Convex Relaxations for Consensus and Non-Minimal Problems in 3D Vision**

Non-minimal problems in 3D vision can be formulated as Polynomial Optimization Problems (POP) from computational algebraic geometry. The proposed method exploits the well-known Shor's or Lasserre's relaxations, whose theoretical aspects are also discussed. Notably, we further exploit the POP formulation of the non-minimal solver for generic consensus maximization problems in 3D vision. Our framework is simple and straightforward to implement, which is supported by three diverse applications in 3D vision, namely rigid body transformation estimation, Non-Rigid Structure-from-Motion (NRSfM), and camera autocalibration. In all three cases, both non-minimal and consensus maximization variants are tested and compared against state-of-the-art methods. Our results are competitive with the compared methods and coherent with our theoretical analysis. The main contribution of this paper is the claim that a good approximate solution for many polynomial problems involved in 3D vision can be obtained using the existing theory of numerical computational algebra. This claim leads us to reason about why many relaxed methods in 3D vision behave so well, and allows us to offer a generic relaxed solver in a rather straightforward way. We further show that the convex relaxation of these polynomials can easily be used for maximizing consensus in a deterministic manner. We support our claim with experiments on the aforementioned three diverse problems in 3D vision.

4/10 relevant

arXiv

**Query Optimization Properties of Modified VBS**

…problems and optimization problems. In this paper, after introducing the valuation-based system (VBS) framework, we present Markov-like properties of VBS and a method for resolving queries to VBS.

4/10 relevant

arXiv