Found 1527 results; showing the newest relevant preprints, sorted by relevance.

Large-Scale Traffic Signal Offset Optimization

The offset optimization problem seeks to coordinate and synchronize the timing of traffic signals throughout a network in order to enhance traffic flow and reduce stops and delays. Recently, offset optimization was formulated as a continuous optimization problem without integer variables by modeling traffic flow as sinusoidal. In this paper, we present a novel algorithm to solve this new formulation to near-global optimality at a large scale. Specifically, we solve a convex relaxation of the nonconvex problem using a tree decomposition reduction, and use randomized rounding to recover a near-global solution. We prove that the algorithm always delivers solutions of expected value at least 0.785 times the globally optimal value. Moreover, assuming that the topology of the traffic network is "tree-like", we prove that the algorithm has near-linear time complexity with respect to the number of intersections. These theoretical guarantees are experimentally validated on the Berkeley, Manhattan, and Los Angeles traffic networks. In our numerical results, the empirical time complexity of the algorithm is linear, and the solutions have objectives within 0.99 times the globally optimal value.

3 days ago
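The rounding step the abstract describes, recovering offsets from a convex relaxation, can be illustrated in a few lines. This is a hedged sketch rather than the authors' algorithm: it assumes the relaxation solution is available as a complex Hermitian PSD matrix `W` whose entry phases encode relative offsets, and it shows only a hyperplane-style randomized rounding.

```python
import numpy as np

def round_offsets(W, cycle=90.0, seed=0):
    """Randomized rounding of a PSD relaxation solution W (n x n,
    complex Hermitian) into signal offsets on a common cycle length.

    Hypothetical sketch: W would come from solving the convex
    relaxation of the sinusoidal offset objective; only the rounding
    step is shown here."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    # Factor W = V^H V via an eigendecomposition (clip tiny negatives).
    vals, vecs = np.linalg.eigh(W)
    V = (vecs * np.sqrt(np.clip(vals, 0, None))).conj().T
    # Project a random complex Gaussian direction onto the factor columns.
    g = rng.normal(size=n) + 1j * rng.normal(size=n)
    z = V.conj().T @ g
    # Round each entry to the unit circle: its phase is the offset.
    phases = np.angle(z)
    return phases * cycle / (2 * np.pi)
```

With a 90-second cycle, the returned offsets lie in (-45, 45] seconds relative to a common reference.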

4/10 relevant

arXiv

2D Eigenvalue Problems I: Existence and Number of Solutions

A two dimensional eigenvalue problem (2DEVP) of a Hermitian matrix pair $(A, C)$ is introduced in this paper. The 2DEVP can be viewed as a linear algebraic formulation of the well-known eigenvalue optimization problem of the parameter matrix $H(\mu) = A - \mu C$. We present fundamental properties of the 2DEVP, such as the existence of solutions, a necessary and sufficient condition for a finite number of 2D-eigenvalues, and variational characterizations. We use eigenvalue optimization problems arising from the quadratically constrained quadratic program and from the computation of the distance to instability to show their connections with the 2DEVP and the new insights into these problems derived from its properties.

3 days ago
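For intuition, the eigenvalue optimization problem behind the 2DEVP can be probed numerically by tracing the extreme eigenvalue of $H(\mu) = A - \mu C$ over a grid of $\mu$. The grid search below is purely illustrative, not the paper's method:

```python
import numpy as np

def eigmax_curve(A, C, mus):
    """Largest eigenvalue of the parameter matrix H(mu) = A - mu*C
    on a grid of mu values: the curve whose extrema the eigenvalue
    optimization problem (and hence the 2DEVP) targets."""
    return np.array([np.linalg.eigvalsh(A - mu * C).max() for mu in mus])
```

For example, with $A = \mathrm{diag}(1, 2)$ and $C = \mathrm{diag}(1, -1)$, the curve $\max(1-\mu,\, 2+\mu)$ attains its minimum $1.5$ at $\mu = -0.5$, where the two eigenvalue branches cross.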

7/10 relevant

arXiv

Property Testing of LP-Type Problems

Given query access to a set of constraints $S$, we wish to quickly check whether some objective function $\varphi$ subject to these constraints is at most a given value $k$. We approach this problem using the framework of property testing, where our goal is to distinguish the case $\varphi(S) \le k$ from the case that at least an $\epsilon$ fraction of the constraints in $S$ must be removed for $\varphi(S) \le k$ to hold. We restrict our attention to the case where $(S, \varphi)$ is an LP-Type problem, a rich family of combinatorial optimization problems with an inherent geometric structure. By utilizing a simple sampling procedure that has been used previously to study these problems, we are able to create property testers for any LP-Type problem whose query complexities are independent of the number of constraints. To the best of our knowledge, this is the first work that connects the areas of LP-Type problems and property testing in a systematic way. Among our results is a tight upper bound on the query complexity of testing clusterability with one cluster, considered by Alon, Dar, Parnas, and Ron (FOCS 2000). We also supply a corresponding tight lower bound for this problem and for other LP-Type problems using geometric constructions.

3 days ago

4/10 relevant

arXiv

Solving machine learning optimization problems using quantum computers

Classical optimization algorithms in machine learning often take a long time to compute when applied to a multi-dimensional problem and require a huge amount of CPU and GPU resources. Quantum parallelism has the potential to speed up machine learning algorithms. We describe a generic mathematical model that leverages quantum parallelism to speed up machine learning algorithms, and we apply quantum machine learning and quantum parallelism to a $3$-dimensional image that varies with time.

5 days ago

8/10 relevant

arXiv

Model Hierarchy for the Shape Optimization of a Microchannel Cooling System

We model a microchannel cooling system and consider the optimization of its shape by means of shape calculus. A three-dimensional model covering all relevant physical effects and three reduced models are introduced; the latter are derived via a homogenization of the geometry in 3D and a transformation of the three-dimensional models to two dimensions. A shape optimization problem based on the tracking of heat absorption by the cooler and the uniform distribution of the flow through the microchannels is formulated and adapted to all models. We present the corresponding shape derivatives and adjoint systems, which we derived with a material-derivative-free adjoint approach. To demonstrate the feasibility of the reduced models, the optimization problems are solved numerically with a gradient descent method. A comparison of the results shows that the reduced models perform similarly to the original one while using significantly fewer computational resources.

7 days ago

7/10 relevant

arXiv

A new preconditioner for elliptic PDE-constrained optimization problems

We propose a preconditioner to accelerate the convergence of the GMRES iterative method for solving the system of linear equations obtained from the discretize-then-optimize approach applied to optimal control problems constrained by a partial differential equation. The eigenvalue distribution of the preconditioned matrix, as well as its eigenvectors, is discussed. Numerical results for the proposed preconditioner are compared with those of several existing preconditioners to show its efficiency.

9 days ago
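As background, a preconditioned GMRES solve looks like the following. The incomplete-LU preconditioner here is a generic stand-in, not the preconditioner proposed in the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def precond_gmres(A, b):
    """Solve A x = b with GMRES, preconditioned by an incomplete LU
    factorization (a generic illustration of passing M to GMRES,
    not the paper's preconditioner)."""
    ilu = spla.spilu(sp.csc_matrix(A))
    # The preconditioner is supplied as a linear operator approximating A^{-1}.
    M = spla.LinearOperator(A.shape, ilu.solve)
    x, info = spla.gmres(A, b, M=M)
    return x, info
```

`info == 0` signals convergence; the point of a good preconditioner is to cluster the eigenvalues of the preconditioned matrix so that GMRES needs few iterations.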

7/10 relevant

arXiv

Nonsmooth Optimization over Stiefel Manifold: Riemannian Subgradient Methods

Nonsmooth Riemannian optimization is a still underexplored subfield of manifold optimization. In this paper, we study optimization problems over the Stiefel manifold with a nonsmooth objective function; this type of problem appears widely in engineering. We propose to address these problems with Riemannian subgradient-type methods, including Riemannian full, incremental, and stochastic subgradient methods. When the objective function is weakly convex, we show an iteration complexity of ${\cal O}(\varepsilon^{-4})$ for these algorithms to achieve an $\varepsilon$-small surrogate stationary measure. Moreover, local linear convergence can be achieved for the Riemannian full and incremental subgradient methods if the optimization problem further satisfies the sharpness regularity property. The fundamental ingredient for establishing these convergence results is that any locally Lipschitz continuous weakly convex function in Euclidean space admits a Riemannian subgradient inequality uniformly over the Stiefel manifold, which is of independent interest. We then extend our convergence results to a broader class of compact Riemannian manifolds embedded in Euclidean space. Finally, as a demonstration of applications, we discuss the sharpness property for robust subspace recovery and orthogonal dictionary learning, and we conduct experiments on these two problems to illustrate the performance of our algorithms.

10 days ago
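One Riemannian subgradient step on the Stiefel manifold can be sketched as follows, assuming the standard tangent-space projection and a QR retraction; the paper's exact update rules and step-size schedules may differ:

```python
import numpy as np

def riemannian_subgrad_step(X, G, step):
    """One Riemannian subgradient step on the Stiefel manifold
    St(n, p) = {X : X^T X = I}. G is a Euclidean subgradient of the
    (possibly nonsmooth) objective at X; a QR retraction maps the
    update back onto the manifold. The step size is left to the caller."""
    sym = 0.5 * (X.T @ G + G.T @ X)
    xi = G - X @ sym                    # projection onto the tangent space at X
    Q, R = np.linalg.qr(X - step * xi)  # retraction via thin QR
    # Fix column signs so the retraction is continuous (diag of R positive).
    return Q * np.sign(np.diag(R))
```

Iterating this step with a diminishing step size is the "Riemannian full subgradient method" pattern; the incremental and stochastic variants replace `G` with a partial subgradient.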

6/10 relevant

arXiv

A Molecular Computing Approach to Solving Optimization Problems via Programmable Microdroplet Arrays

The search for novel forms of computing that offer advantages over the dominant von Neumann model is important, as it will enable different classes of problems to be solved. By using droplets and room-temperature processes, molecular computing is a promising research direction with potential biocompatibility and cost advantages. In this work, we present a new approach to computation using a network of chemical reactions taking place within an array of spatially localized droplets whose contents represent bits of information. Combinatorial optimization problems are mapped to an Ising Hamiltonian and encoded in the form of intra- and inter-droplet interactions. The problem is solved by initiating the chemical reactions within the droplets and allowing the system to reach a steady state; in effect, we are annealing the effective spin system to its ground state. We propose two implementations of the idea, ordered in terms of increasing complexity. First, we introduce a hybrid classical-molecular computer in which droplet properties are measured and fed into a classical computer. Based on the given optimization problem, the classical computer then directs further reactions via optical or electrochemical inputs. A simulated model of the hybrid classical-molecular computer is used to solve Boolean satisfiability and a lattice protein model. Second, we propose architectures for purely molecular computers that rely on pre-programmed nearest-neighbour inter-droplet communication via energy or mass transfer.

10 days ago
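The annealing-to-ground-state idea maps directly onto classical simulated annealing of the same Ising Hamiltonian. The following is a software stand-in for the droplet dynamics; the cooling schedule and parameters are illustrative assumptions:

```python
import numpy as np

def anneal_ising(J, h, steps=2000, T0=2.0, seed=0):
    """Metropolis simulated annealing of the Ising Hamiltonian
    H(s) = -1/2 s^T J s - h^T s with spins s_i in {-1, +1}.
    J must be symmetric with a zero diagonal. A classical stand-in
    for the droplet annealer described in the abstract."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice(np.array([-1, 1]), size=n)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3         # linear cooling schedule
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s + h[i])       # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s
```

For a small ferromagnetic instance (all couplings positive, no field), the annealer settles into an aligned, negative-energy configuration, which is the "steady state" the droplet system is meant to reach physically.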

10/10 relevant

chemRxiv

Sufficient optimality conditions in bilevel programming

This paper is concerned with the derivation of first- and second-order sufficient optimality conditions for optimistic bilevel optimization problems involving smooth functions. First-order sufficient optimality conditions are obtained by estimating the tangent cone to the feasible set of the bilevel program in terms of initial problem data. This is done by exploiting several different reformulations of the hierarchical model as a single-level problem. To obtain second-order sufficient optimality conditions, we exploit the so-called value function reformulation of the bilevel optimization problem, which is then tackled with the aid of second-order directional derivatives. The resulting conditions can be stated in terms of initial problem data in several interesting situations, comprising the settings where the lower level is linear or possesses strongly stable solutions.

17 days ago
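The value function reformulation mentioned in the abstract can be written out explicitly. The symbols here ($F$ for the upper-level objective, $f$ and $g$ for the lower-level data) are generic textbook notation and not necessarily the paper's:

```latex
\min_{x,\,y}\; F(x,y)
\quad\text{s.t.}\quad g(x,y)\le 0,\;\; f(x,y)\le \varphi(x),
\qquad\text{where}\quad
\varphi(x) := \min_{y'}\bigl\{\, f(x,y') \;:\; g(x,y')\le 0 \,\bigr\}.
```

Replacing the lower-level problem by the constraint $f(x,y)\le\varphi(x)$ turns the hierarchical model into a single-level program, at the price of the typically nonsmooth value function $\varphi$, which is why directional derivatives enter the second-order analysis.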

6/10 relevant

arXiv

A fast two-point gradient algorithm based on sequential subspace optimization method for nonlinear ill-posed problems

In this paper, we propose and analyze a fast two-point gradient algorithm for solving nonlinear ill-posed problems, based on the sequential subspace optimization method. A complete convergence analysis is provided under the classical assumptions for iterative regularization methods. The design of the two-point gradient method involves the choice of the combination parameters, which is systematically discussed. Furthermore, detailed numerical simulations are presented for an inverse potential problem; they show that the proposed method leads to a strong decrease in the number of iterations, and the overall computational time can be significantly reduced.

17 days ago
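A minimal sketch of a two-point gradient iteration of the kind described: a Landweber-type step applied at a momentum-style combination of the last two iterates. The fixed combination parameter `lam` and the toy operator in the usage are illustrative assumptions; the paper's contribution includes choosing these parameters systematically, which this sketch does not reproduce:

```python
import numpy as np

def two_point_gradient(F, dF, y, x0, steps=200, alpha=0.1, lam=0.5):
    """Sketch of a two-point gradient iteration for a nonlinear
    problem F(x) = y:
        z_k     = x_k + lam * (x_k - x_{k-1})
        x_{k+1} = z_k - alpha * dF(z_k)^T (F(z_k) - y)
    dF(x) returns the Jacobian of F at x. Stopping rules and the
    adaptive choice of lam are omitted."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        z = x + lam * (x - x_prev)                       # two-point combination
        x_prev, x = x, z - alpha * dF(z).T @ (F(z) - y)  # Landweber-type step
    return x
```

For well-posed problems a plain gradient step would suffice; the two-point combination is what buys the reduction in iteration counts reported in the abstract.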

4/10 relevant

arXiv