A Trust-Region Method For Nonsmooth Nonconvex **Optimization**

**optimization problems** with emphasis on nonsmooth composite programs where the objective function is a summation of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed and efficiently solved by cheap, standard methods. By adding a safeguard on the stepsizes, the accuracy of the model function is guaranteed. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-**optimization problems** are reported and demonstrate the efficiency of our approach.
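As a point of reference for the scheme above, a classical smooth trust-region iteration with a Cauchy-point step can be sketched as follows. This is a generic textbook version (quadratic model, ratio test, radius update), not the paper's nonsmooth variant, and all names here are illustrative.

```python
import numpy as np

def trust_region(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=500):
    """Basic trust-region method using a Cauchy-point step."""
    x, delta = np.asarray(x0, float), delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimize the quadratic model along -g inside the region
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
        p = -tau * (delta / gnorm) * g
        # actual vs. predicted reduction of the quadratic model
        pred = -(g @ p + 0.5 * p @ B @ p)
        rho = (f(x) - f(x + p)) / pred
        if rho < 0.25:
            delta *= 0.25                        # poor model: shrink region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max)    # good model at boundary: expand
        if rho > eta:
            x = x + p                            # accept the step
    return x
```

The Cauchy point minimizes the quadratic model along the steepest-descent direction within the region; swapping in a more accurate subproblem solver changes only the inner step.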

5/10 relevant

arXiv

Multiplicative Noise Removal: Nonlocal Low-Rank Model and Its Proximal Alternating Reweighted Minimization Algorithm

**optimization problem** resulting from the model. Specifically, we utilize a generalized nonconvex surrogate of the rank function to regularize the patch matrices and develop a new nonlocal low-rank model, which is a nonconvex nonsmooth **optimization problem** having a patchwise data fidelity and a generalized nonlocal low-rank regularization term. To solve this **optimization problem**, we propose the PARM algorithm, a proximal alternating scheme with a reweighted approximation of its subproblem. A theoretical analysis of the proposed PARM algorithm is conducted to guarantee its global convergence to a critical point. Numerical experiments demonstrate that the proposed method for multiplicative noise removal significantly outperforms existing methods such as the benchmark SAR-BM3D method in terms of the visual quality of the denoised images and the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values.
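The proximal step at the heart of reweighted low-rank regularization of this kind is a weighted singular-value shrinkage. A minimal sketch follows, assuming a simple weighted nuclear-norm prox; the paper's generalized nonconvex surrogate would supply the weights (e.g. via reweighting $w_i = \lambda/(\sigma_i + \epsilon)$), and the names are illustrative.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Shrink each singular value of Y by its own weight (weighted SVT).

    This is the proximal operator of a weighted nuclear norm when the
    weights are nondecreasing in the singular-value index.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # scale the columns of U by the shrunken singular values
    return (U * np.maximum(s - weights, 0.0)) @ Vt
```

In a reweighted scheme, each outer iteration recomputes the weights from the previous iterate's singular values, so larger singular values are penalized less, mimicking the nonconvex rank surrogate.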

4/10 relevant

arXiv

Estimating processes in adapted Wasserstein distance

**optimization problems**, pricing and hedging problems, optimal stopping problems, etc. in a Lipschitz fashion. The second main result of this article yields quantitative bounds for the convergence of the adapted empirical measure with respect to the adapted Wasserstein distance. Surprisingly, we obtain virtually the same optimal rates and concentration results that are known for the classical empirical measure with respect to the Wasserstein distance.

4/10 relevant

arXiv

Stochastic Gauss-Newton Algorithms for Nonconvex Compositional
**Optimization**

**optimization problems** frequently arising in practice. We consider both the expectation and finite-sum settings under standard assumptions. We use both classical stochastic and SARAH estimators for approximating function values and Jacobians. In the expectation case, we establish $\mathcal{O}(\varepsilon^{-2})$ iteration complexity to achieve a stationary point in expectation and estimate the total number of stochastic oracle calls for both function values and Jacobians, where $\varepsilon$ is a desired accuracy. In the finite-sum case, we establish the same iteration complexity and estimate the total oracle calls with high probability. To the best of our knowledge, this is the first time such global stochastic oracle complexity has been established for stochastic Gauss-Newton methods. We illustrate our theoretical results via numerical examples on both synthetic and real datasets.
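For orientation, the deterministic Gauss-Newton step that such stochastic variants build on solves a linearized least-squares problem at each iterate. Below is a minimal classical sketch using full residuals and Jacobians rather than the paper's stochastic/SARAH estimators; the names are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Classical Gauss-Newton for min ||r(x)||^2 (deterministic sketch)."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        # linearize r(x + p) ~ r(x) + J p and minimize the least-squares model
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x
```

The stochastic versions replace `residual` and `jacobian` with subsampled or variance-reduced estimates, which is where the oracle-complexity analysis enters.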

4/10 relevant

arXiv

On Generalization and Acceleration of Randomized Projection Methods for
Linear Feasibility **Problems**

**optimization problems**, these randomized and iterative techniques are gaining popularity among researchers in different domains. In this work, we propose a Generalized Sampling Kaczmarz-Motzkin (GSKM) method that unifies the iterative methods into a single framework. In addition to the general framework, we propose a Nesterov-type acceleration scheme in the SKM method, called the Probably Accelerated Sampling Kaczmarz-Motzkin (PASKM) method. We prove convergence theorems for both the GSKM and PASKM algorithms in the L2 norm with respect to the proposed sampling distribution. Furthermore, from the convergence theorem of the GSKM algorithm, we recover the convergence results of several well-known algorithms such as the Kaczmarz method, the Motzkin method and the SKM algorithm. We perform thorough numerical experiments using both randomly generated and real-life (classification with support vector machines and Netlib LP) test instances to demonstrate the efficiency of the proposed methods. We compare the proposed algorithms with SKM, the Interior Point Method (IPM) and the Active Set Method (ASM) in terms of computation time and solution quality. In the majority of the **problem** instances, the proposed generalized and accelerated algorithms significantly outperform the state-of-the-art methods.
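The classical randomized Kaczmarz method that such frameworks generalize projects the iterate onto the hyperplane of one randomly sampled row per step. A minimal sketch with the standard squared-row-norm sampling (not the paper's SKM-style sampling; names are illustrative):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz for a consistent linear system Ax = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)   # squared row norms ||a_i||^2
    probs = row_norms / row_norms.sum()       # sample rows ~ ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # project x onto the hyperplane a_i^T x = b_i
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Variants of the method differ mainly in how the row (or block of rows) is selected per step, which is exactly the axis the GSKM framework unifies.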

4/10 relevant

arXiv

Convex **Optimization** on Functionals of Probability Densities

**optimization problems** result in convex **optimization problems** on strictly convex functionals of probability densities. In this note, we study these **problems** and give conditions for minimizers as well as the uniqueness of the minimizer if one exists.

5/10 relevant

arXiv

Explicit Multi-objective Model Predictive Control for Nonlinear Systems Under Uncertainty

**problems** with uncertainty on the initial conditions, and in particular their incorporation into a feedback loop via model predictive control (MPC). In multi-objective optimal control, an optimal compromise between multiple conflicting criteria has to be found. For such problems, little has been reported in terms of uncertainties. To address this **problem** class, we design an offline/online framework to compute an approximation of efficient control strategies. This approach is closely related to explicit MPC for nonlinear systems, where the potentially expensive **optimization problem** is solved in an offline phase in order to enable fast solutions in the online phase. To reduce the numerical cost of the offline phase, we exploit symmetries in the control **problems**. Furthermore, to ensure optimality of the solutions, we include an additional online **optimization** step, which is considerably cheaper than the original multi-objective **optimization problem**. We test our framework on a car maneuvering **problem** where safety and speed are the objectives. The multi-objective framework allows for online adaptations of the desired objective. Alternatively, an automatic scalarizing procedure yields very efficient feedback controls. Our results show that the method is capable of designing driving strategies that deal better with uncertainties in the initial conditions, which translates into potentially safer and faster driving strategies.

4/10 relevant

arXiv

Model Reduction Framework with a New Take on Active Subspaces for
**Optimization** **Problems** with Linearized Fluid-Structure Interaction Constraints

**optimization** (MDAO) **problem** is proposed. The new approach is intertwined with the concepts of adaptive parameter sampling, projection-based model order reduction, and a database of linear, projection-based reduced-order models equipped with interpolation on matrix manifolds, in order to construct an efficient computational framework for MDAO. The framework is fully developed for MDAO **problems** with linearized fluid-structure interaction constraints. It is applied to the aeroelastic tailoring, under flutter constraints, of two different flight systems: a flexible configuration of NASA's Common Research Model; and NASA's Aeroelastic Research Wing #2 (ARW-2). The obtained results illustrate the feasibility of the computational framework for realistic MDAO **problems** and highlight the benefits of the new approach for constructing an active subspace, both in terms of solution optimality and wall-clock time reduction.

8/10 relevant

arXiv

Two-Stage Adjustable Robust Linear **Optimization** with New Quadratic
Decision Rules: Exact SDP Reformulations

**optimization problems** with ellipsoidal uncertainty and show that (affinely parameterized) adjustable robust linear **optimization problems** with QDRs are numerically tractable by presenting exact semi-definite programming (SDP) reformulations. We then show, via numerical experiments on lot-sizing **problems** with uncertain demand, that adjustable robust linear **optimization problems** with QDRs improve upon affine decision rules in performance, both in the worst-case sense and after simulated realization of the uncertain demand, relative to the true solution.

6/10 relevant

arXiv

Stochastic Online **Optimization** using Kalman Recursion

**optimization**. We obtain high-probability bounds on the cumulative excess risk in an unconstrained setting. The unconstrained challenge is tackled through a two-phase analysis. First, for linear and logistic regressions, we prove that the algorithm enters a local phase where the estimate stays in a small region around the optimum. We provide explicit high-probability bounds on this convergence time. Second, for generalized linear regressions, we provide a martingale analysis of the excess risk in the local phase, improving existing ones in bounded stochastic **optimization**. The EKF appears as a parameter-free O(d^2) online algorithm that optimally solves some unconstrained **optimization problems**.
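In the linear-regression special case, the Kalman recursion mentioned above coincides with classical recursive least squares. A minimal sketch of that special case (unit observation noise, static state, near-uninformative prior); this is not the paper's extended-Kalman analysis for generalized linear models, and the names are illustrative.

```python
import numpy as np

def kalman_rls(features, targets, delta=1e6):
    """Kalman recursion for a static linear model (recursive least squares)."""
    d = features.shape[1]
    theta = np.zeros(d)
    P = delta * np.eye(d)            # large prior covariance ~ flat prior
    for a, y in zip(features, targets):
        Pa = P @ a
        k = Pa / (1.0 + a @ Pa)      # Kalman gain with unit observation noise
        theta = theta + k * (y - a @ theta)
        P = P - np.outer(k, Pa)      # covariance downdate
    return theta
```

Each update costs O(d^2), matching the per-step complexity quoted in the abstract, since only matrix-vector products with P are needed.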

4/10 relevant

arXiv