Loop-Cluster **Monte** **Carlo** Algorithm for Classical Statistical Models

We present a loop-cluster (LC) **Monte Carlo** algorithm which passes back and forth between the Fortuin-Kasteleyn (FK) bond representation of the $q$-state Potts model and the so-called $q$-flow representation. Together with the Swendsen-Wang (SW) cluster method, the LC algorithm couples the spin, FK and $q$-flow representations of the Potts model. As a result, a single Markov-chain simulation can use an arbitrary combination of the SW, worm, LC and other algorithms, and simultaneously sample physical quantities in any representation. Generalizations to real values $q \geq 1$ and to a single-cluster version are also obtained. Dynamic properties are investigated for $q=2$, $3$ on both the complete graph and toroidal grids of dimension $2\leq d\leq 5$. Our numerical results suggest that the LC algorithm and its single-cluster version are in the same dynamic universality classes as the SW and Wolff algorithms, respectively. Finally, it is shown that, for the Potts model undergoing a continuous phase transition, the $q$-flow clusters, defined as sets of vertices connected via non-zero flow variables, are fractal objects.
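The SW cluster method that the LC algorithm is coupled with alternates between the spin and FK bond representations. A minimal sketch of one SW sweep for the $q$-state Potts model on a periodic square lattice (toy coupling convention with bond probability $1-e^{-\beta}$; names and parameters are illustrative, not the authors' code):

```python
import numpy as np

def swendsen_wang_sweep(spin, q, beta, rng):
    """One Swendsen-Wang update: activate FK bonds between equal neighboring
    spins with probability 1 - exp(-beta), then relabel each cluster."""
    L = spin.shape[0]
    parent = np.arange(L * L)  # union-find forest over lattice sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    p_bond = 1.0 - np.exp(-beta)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in ((1, 0), (0, 1)):  # right/down neighbors, periodic
                xn, yn = (x + dx) % L, (y + dy) % L
                if spin[x, y] == spin[xn, yn] and rng.random() < p_bond:
                    parent[find(i)] = find(xn * L + yn)

    # Assign each FK cluster a fresh uniformly random Potts state.
    new_state = {}
    for i in range(L * L):
        r = find(i)
        if r not in new_state:
            new_state[r] = rng.integers(q)
        spin[i // L, i % L] = new_state[r]
    return spin
```

The LC algorithm described in the abstract adds the analogous pass between the FK bond and $q$-flow configurations, so one chain can mix SW, LC and worm moves.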

10/10 relevant

arXiv

Plateau Proposal Distributions for Adaptive Component-wise Multiple-Try Metropolis

Markov chain **Monte Carlo** (MCMC) methods are sampling methods that have become a commonly used tool in statistics, for example to perform **Monte Carlo** integration. As a consequence of the increase in computational power, many variations of MCMC methods exist for generating samples from arbitrary, possibly complex, target distributions. The performance of an MCMC method is predominantly governed by the choice of the so-called proposal distribution. In this paper, we introduce a new type of proposal distribution for use in MCMC methods that operates component-wise and with multiple tries per iteration. Specifically, the novel class of proposal distributions, called Plateau distributions, do not overlap, thus ensuring that the multiple tries are drawn from different parts of the state space. We demonstrate in numerical simulations that this new type of proposal outperforms competitors and efficiently explores the state space.
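The mechanism can be sketched in one dimension: each of $K$ tries comes from a distinct, non-overlapping uniform "plateau" around the current point, and the standard multiple-try Metropolis balance condition is applied. Everything below is an illustrative toy, not the paper's construction; equal-width plateaus are assumed so that every try has the same proposal density and the selection weights reduce to the target density.

```python
import numpy as np

def plateau_mtm_step(x, log_target, K=4, width=0.5, rng=None):
    """One multiple-try Metropolis step whose K tries come from disjoint
    equal-width uniform 'plateaus' at increasing distance from x."""
    rng = rng or np.random.default_rng()

    def draw_plateau(center, j):
        # Plateau j: uniform over distance [j*w, (j+1)*w], random side.
        side = -1.0 if rng.random() < 0.5 else 1.0
        return center + side * ((j + rng.random()) * width)

    # One try per plateau; equal widths mean equal proposal densities,
    # so the multiple-try selection weights reduce to the target density.
    tries = np.array([draw_plateau(x, j) for j in range(K)])
    w = np.exp([log_target(t) for t in tries])
    J = rng.choice(K, p=w / w.sum())

    # Reference points drawn around the selected try; the current state x
    # occupies the selected slot, as the balance condition requires.
    refs = np.array([x if j == J else draw_plateau(tries[J], j)
                     for j in range(K)])
    w_ref = np.exp([log_target(r) for r in refs])

    # Accept with probability min(1, sum(w) / sum(w_ref)).
    return tries[J] if rng.random() * w_ref.sum() < w.sum() else x
```

Because the plateaus are disjoint, the $K$ tries always probe $K$ different distance scales, which is the coverage property the abstract emphasizes.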

4/10 relevant

arXiv

Using Wasserstein Generative Adversarial Networks for the Design of
**Monte** **Carlo** Simulations

Data simulated in **Monte Carlo** studies often do not resemble real data sets and instead reflect many arbitrary decisions made by the researchers. As a result, potential users of the methods are rarely persuaded by these simulations that the new methods are as attractive as the simulations make them out to be. We discuss the use of Wasserstein Generative Adversarial Networks (WGANs) as a method for systematically generating artificial data that mimic closely any given real data set without the researcher having many degrees of freedom. We apply the methods to compare, in three different settings, twelve different estimators for average treatment effects under unconfoundedness. We conclude from this example that (i) no single estimator outperforms the others in all three settings, and (ii) systematic simulation studies can be helpful for selecting among competing methods.

10/10 relevant

arXiv

Self-learning Hybrid **Monte** **Carlo**: A First-principles Approach

We propose self-learning hybrid **Monte Carlo** (SLHMC), a general method that uses machine-learning potentials to accelerate the statistical sampling of first-principles density-functional-theory (DFT) simulations. Trajectories are generated on an approximate machine-learning (ML) potential-energy surface and are then accepted or rejected by the Metropolis algorithm based on DFT energies. In this way the statistical ensemble is sampled exactly at the DFT level for a given thermodynamic condition. Meanwhile, the ML potential is improved on the fly by training to enhance the sampling, whereby the training data set, sampled from the exact ensemble, is created automatically. Using the examples of $\alpha$-quartz crystal SiO$_2$ and the phonon-mediated unconventional superconductor YNi$_2$B$_2$C, we show that SLHMC with artificial neural networks (ANN) is capable of very efficient sampling, while at the same time enabling the optimization of the ANN potential to within meV/atom accuracy. The ANN potential thus obtained is transferable to ANN molecular-dynamics simulations to explore dynamics as well as thermodynamics. This makes the SLHMC approach widely applicable to studies of materials in physics and chemistry.
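The accept/reject logic described above can be sketched in a toy setting: integrate trajectories on a cheap surrogate potential, but evaluate the Metropolis test with the expensive "exact" energy, so the exact ensemble is sampled no matter how rough the surrogate is. Here a quadratic surrogate stands in for the ML potential and an invented quartic potential for DFT; all names and parameters are illustrative.

```python
import numpy as np

def slhmc_like_step(x, rng, n_leap=10, dt=0.2):
    """One hybrid Monte Carlo step: leapfrog on a cheap surrogate potential,
    Metropolis accept/reject with the expensive 'exact' potential."""
    U_exact = lambda q: 0.5 * q**2 + 0.25 * q**4  # stand-in for the DFT energy
    grad_surr = lambda q: q                       # surrogate U = q^2/2 ('ML' stand-in)

    p0 = rng.normal()                             # fresh Gaussian momentum
    q, p = x, p0
    # Leapfrog on the surrogate surface (volume-preserving, reversible).
    p -= 0.5 * dt * grad_surr(q)
    for i in range(n_leap):
        q += dt * p
        if i < n_leap - 1:
            p -= dt * grad_surr(q)
    p -= 0.5 * dt * grad_surr(q)
    # Metropolis test with the exact Hamiltonian: the chain targets the
    # exact ensemble; surrogate quality only affects the acceptance rate.
    dH = (U_exact(q) + 0.5 * p**2) - (U_exact(x) + 0.5 * p0**2)
    return q if np.log(rng.random()) < -dH else x
```

In SLHMC the surrogate is additionally retrained on the fly from the accepted (exact-ensemble) samples; the sketch omits that loop.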

10/10 relevant

arXiv

A **Monte** **Carlo** method to estimate cell population heterogeneity

We present a method, dubbed "… **Monte Carlo**", for estimating mathematical model parameters from snapshot distributions, which is straightforward to implement and does not require cells to be assigned to predefined categories. Our method is appropriate for systems where observed variation is mostly due to variability in cellular processes rather than experimental measurement error, which may be the case for many systems given continued improvements in the resolution of laboratory techniques. In this paper, we apply our method to quantify cellular variation in three biological systems of interest and provide Julia code enabling others to use this method.

10/10 relevant

bioRxiv

Machine Learning in Least-Squares **Monte** **Carlo** Proxy Modeling of Life
Insurance Companies

We consider the least-squares **Monte Carlo** (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. In this paper, we present and analyze various adaptive machine-learning approaches that can take over the proxy-modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over GLM and GAM methods to MARS and kernel-regression routines. We justify the combinability of their regression ingredients in a theoretical discourse. Further, we illustrate the approaches in slightly disguised real-world experiments and perform comprehensive out-of-sample tests.
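The key idea — a few simulations, then regression to a risk-dependent proxy — can be sketched with a one-risk-factor toy example. The "true" loss function, noise level and sample sizes below are invented for illustration; here the regression step is plain polynomial least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Outer scenarios: realizations of a single risk factor (e.g. an equity shock).
risk = rng.uniform(-2.0, 2.0, size=200)

# Inner valuations: noisy Monte Carlo estimates of the loss in each scenario.
true_loss = lambda r: 100.0 + 15.0 * r + 4.0 * r**2  # invented ground truth
noisy_loss = true_loss(risk) + rng.normal(0.0, 5.0, size=risk.size)

# LSMC step: regress the noisy valuations on a polynomial basis of the
# risk factor to obtain a smooth, risk-dependent proxy function of the loss.
coef = np.polyfit(risk, noisy_loss, deg=2)   # highest-degree coefficient first
proxy = np.poly1d(coef)

# Out-of-sample check of the proxy against the (normally unknown) truth.
grid = np.linspace(-2.0, 2.0, 9)
max_err = np.max(np.abs(proxy(grid) - true_loss(grid)))
```

The adaptive approaches studied in the paper replace the fixed polynomial basis with GLM/GAM/MARS/kernel regressions chosen from the data.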

10/10 relevant

arXiv

Real-frequency Diagrammatic **Monte** **Carlo** at Finite Temperature

We present a diagrammatic **Monte Carlo** technique which yields results directly on the real-frequency axis. As an application, we compute the self-energy $\Sigma(\omega)$ of the doped $32\times 32$ cyclic square-lattice Hubbard model in a non-trivial parameter regime, where signatures of the pseudogap appear close to the antinode. We discuss the behavior of the perturbation series on the real-frequency axis and, in particular, show that one must be very careful when using the maximum-entropy method on truncated perturbation series. The algorithm holds great promise for future applications in cases where analytical continuation is difficult and moderate-order perturbation theory may be sufficient to converge the result.

10/10 relevant

arXiv

On the robustness of gradient-based MCMC algorithms

We study the robustness of Markov chain **Monte Carlo** (MCMC) sampling algorithms. In particular, we focus on the robustness of MCMC algorithms with respect to heterogeneity in the target and their sensitivity to tuning, an issue of great practical relevance but still understudied theoretically. We show that the spectral gap of the Markov chains induced by classical gradient-based MCMC schemes (e.g. Langevin and Hamiltonian **Monte Carlo**) decays exponentially fast in the degree of mismatch between the scales of the proposal and target distributions, while for the random-walk Metropolis (RWM) algorithm the decay is linear. This result provides theoretical support for the notion that gradient-based MCMC schemes are less robust to heterogeneity and more sensitive to tuning. Motivated by these considerations, we propose a novel and simple-to-implement gradient-based MCMC algorithm, inspired by the classical Barker accept-reject rule, with improved robustness properties. Extensive theoretical results, dealing with robustness to heterogeneity, geometric ergodicity and scaling with dimensionality, show that the novel scheme combines the robustness of RWM with the efficiency of classical gradient-based schemes. We illustrate with simulation studies how this type of robustness is particularly beneficial in the context of adaptive MCMC, giving examples in which the new scheme yields orders-of-magnitude improvements in efficiency over state-of-the-art alternatives.
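The Barker-rule idea can be sketched in one dimension (a minimal illustration of the published Barker-proposal mechanism, not the authors' code): the gradient enters only through a bounded logistic "which direction" probability, which is what tempers the sensitivity to scale mismatch.

```python
import numpy as np

def barker_step(x, log_target, grad_log_target, sigma, rng):
    """One step of a one-dimensional Barker-proposal MCMC sampler."""
    g = grad_log_target(x)
    z = sigma * rng.normal()
    # The gradient only biases the *direction* of the move, through a
    # bounded logistic probability -- the source of the robustness.
    b = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-z * g)) else -1.0
    y = x + b * z
    # Metropolis-Hastings correction for the direction-biased proposal:
    # alpha = pi(y)/pi(x) * (1 + e^{-w g(x)}) / (1 + e^{w g(y)}), w = y - x.
    w = y - x
    log_alpha = (log_target(y) - log_target(x)
                 + np.logaddexp(0.0, -w * g)
                 - np.logaddexp(0.0, w * grad_log_target(y)))
    return y if np.log(rng.random()) < log_alpha else x
```

Unlike a Langevin drift, a badly scaled gradient here can at worst randomize the direction choice; it cannot push proposals arbitrarily far into the tails.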

4/10 relevant

arXiv

Robust **Monte**-**Carlo** Simulations in Diffusion-MRI: Effect of the substrate
complexity and parameter choice on the reproducibility of results

**Monte-Carlo** Diffusion Simulations (MCDS) have been used extensively as a ground-truth tool for the validation of microstructure models for Diffusion-Weighted MRI. However, methodological pitfalls in the design of the biomimicking geometrical configurations and the simulation parameters can lead to approximation biases. Such pitfalls affect the reliability of the estimated signal, as well as its validity and reproducibility as ground-truth data. In this work, we first present a set of experiments studying three critical pitfalls encountered in the design of MCDS in the literature: the number of simulated particles and time steps, simplifications in the intra-axonal substrate representation, and the impact of the substrate's size on the signal stemming from the extra-axonal space. The results show important changes in the simulated signals and the recovered microstructure features when these parameters are varied. Thereupon, driven by our findings from the first studies, we outline a general framework able to generate complex substrates. We show the framework's capability to overcome the aforementioned simplifications by generating a complex crossing substrate, which preserves the volume in the crossing area and achieves a high packing density. The results presented in this work, along with the simulator developed, pave the way towards more realistic and reproducible **Monte-Carlo** simulations for Diffusion-Weighted MRI.
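The first pitfall — too few particles or time steps biasing the estimated signal — is easy to see in the one case with a closed form: free diffusion under a narrow-pulse PGSE sequence, where $E = \exp(-bD)$ with $b = q^2 T$. The sequence parameters below are arbitrary toy values.

```python
import numpy as np

def simulated_signal(n_particles, n_steps, D=2.0e-9, T=50e-3, q=1.0e5, seed=0):
    """Narrow-pulse PGSE signal of freely diffusing particles:
    E(q) = <cos(q * net 1D displacement after diffusion time T)>."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Gaussian random-walk steps with variance 2*D*dt (Einstein relation).
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_particles, n_steps))
    return np.cos(q * steps.sum(axis=1)).mean()

# Free diffusion has the closed form E = exp(-b*D) with b = q^2 * T, so the
# particle-count noise/bias of the Monte-Carlo estimate can be read off directly.
exact = np.exp(-(1.0e5) ** 2 * 50e-3 * 2.0e-9)   # = exp(-1)
few = simulated_signal(500, 50)        # few particles: noisy estimate
many = simulated_signal(200_000, 50)   # many particles: converges to exact
```

For restricted substrates no such closed form exists, which is why the abstract's point about particle and time-step counts matters: the convergence check has to be done empirically.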

10/10 relevant

arXiv

Deep neural network approximations for **Monte** **Carlo** algorithms

A common strategy is, first, to employ a **Monte Carlo** approximation scheme which can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of the approximation scheme used. With this in mind, one could aim for a general abstract result showing, under suitable assumptions, that if a certain function can be approximated by any kind of (Monte Carlo) approximation scheme without the curse of dimensionality, then this function can also be approximated with DNNs without the curse of dimensionality. It is a key contribution of this article to make a first step in this direction. In particular, the main result of this paper essentially shows that if a function can be approximated by means of some suitable discrete approximation scheme without the curse of dimensionality, and if there exist DNNs which satisfy certain regularity properties and approximate this discrete approximation scheme without the curse of dimensionality, then the function itself can also be approximated with DNNs without the curse of dimensionality. As an application of this result we establish that solutions of suitable Kolmogorov PDEs can be approximated with DNNs without the curse of dimensionality.
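For the heat equation $\partial_t u = \Delta u$, the canonical Monte Carlo scheme of this kind is the Feynman-Kac average $u(t,x) = \mathbb{E}\,\varphi(x + \sqrt{2t}\,Z)$ with $Z \sim \mathcal{N}(0, I_d)$, whose cost grows only linearly in the dimension $d$. A sketch with a test function whose solution is known in closed form ($\varphi(x) = \|x\|^2$ gives $u(t,x) = \|x\|^2 + 2td$):

```python
import numpy as np

def heat_solution_mc(phi, t, x, n_samples=100_000, seed=0):
    """Feynman-Kac Monte Carlo estimate of u(t, x) for u_t = Laplacian(u),
    u(0, .) = phi: average phi over Brownian endpoints x + sqrt(2 t) * Z."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_samples, x.size))
    return phi(x + np.sqrt(2.0 * t) * z).mean()

# d = 100: one point value of the solution, with no grid over R^100 needed --
# this is the 'without the curse of dimensionality' property in action.
x0 = np.ones(100)
phi = lambda pts: (pts**2).sum(axis=-1)   # phi(x) = ||x||^2
estimate = heat_solution_mc(phi, t=0.5, x=x0)
exact = (x0**2).sum() + 2.0 * 0.5 * 100   # ||x||^2 + 2 t d = 200
```

The article's result then transfers such dimension-robust schemes to DNN approximations of the same solution map.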

10/10 relevant

arXiv